Laws, Guidance, and Recommendations: What to Consider When Using AI to Hire Employees in Italy

2 October 2024

Employers are increasingly using Artificial Intelligence (AI) in the hiring process. For example, chatbots can answer candidates’ questions, tools can screen resumes and profile candidates, and software can score interviews.

As employers continue to explore how AI can streamline processes and decrease bias, they must be aware of the limits of AI, their legal obligations when using AI, and guiding principles regarding the implementation of AI.

In Italy, those legal obligations and guiding principles include Legislative Decree no. 104/2022 (the Transparency Decree); the Code of Conduct for Staff Leasing Agencies (the Code of Conduct), which was approved by the Data Protection Authority’s measure no. 12/2024; Italy’s Artificial Intelligence Bill (the AI Bill), which is currently under review by the Italian Parliament; and the European Union Artificial Intelligence Act (the EU AI Act).

This alert will discuss employer obligations under the Transparency Decree, the Code of Conduct, and the AI Bill, and will provide recommendations to employers as they navigate the ever-changing landscape of AI use in hiring. For more information on the EU AI Act, please see K&L Gates’ materials on the EU AI Act.

I.    The Transparency Decree

In August 2022, the Transparency Decree took effect, implementing in Italy Directive (EU) 2019/1152, also known as the Directive on Transparent and Predictable Working Conditions. The Transparency Decree addresses automated decision-making tools and monitoring systems. Regarding the use of AI in employment, the Transparency Decree requires employers to provide their employees, job applicants, any trade union representatives within the company, and, if no such trade union representatives exist, the territorial trade unions’ bodies with information about:

  • What aspects of the employment relationship may be affected by AI.
  • The purpose and operation of the AI tools in use.
  • What data and parameters are used to train the AI.
  • What control measures, corrective measures, quality control systems, and cybersecurity tools are in use.

After the Transparency Decree took effect, the Italian Ministry of Labour and Social Policies (Ministry of Labour) issued a circular to clarify the decree’s scope. The Ministry of Labour’s 20 September 2022 circular (Circular No. 19) explained that the obligations under the Transparency Decree apply where human intervention in the operation of the AI tool is merely incidental, and provided examples of cases in which there is an obligation to provide information under the Transparency Decree, such as:

  • Recruitment or assignment using chatbots during the interview, automated profiling of applicants, screening of resumes, and use of emotion recognition software and psycho-aptitude tests.
  • Management or termination of employment with automated assignment or revocation of tasks, duties or shifts, definition of working hours, productivity analysis, determination of salary, promotions, etc. through statistical analysis, data analytics or machine learning tools, neural networks, and deep learning.

Circular No. 19 also reminded employers that they must ensure that their use of AI (as well as other decision-making tools or monitoring systems) complies with Article 22 of the General Data Protection Regulation (GDPR), including by performing a risk analysis and an impact assessment of the processing activities carried out.

In March 2023, the Court of Palermo issued the first judgment relating to the Transparency Decree (Court of Palermo, judgment no. 14491/2023), finding that the employer violated the Transparency Decree when it failed to communicate to the relevant trade unions the criteria underlying the functioning of the AI algorithm. The ruling serves as an important reminder to employers to familiarize themselves with their obligations when using AI in employment.

II.    The Code of Conduct

In January 2024, Italy’s Data Protection Authority approved the Code of Conduct. Under the GDPR, codes of conduct are voluntary sets of rules that help adhering organizations demonstrate data protection compliance and accountability in specific sectors or for particular processing operations. Codes of conduct must be approved and monitored by the member state’s GDPR supervisory authority, in this case Italy’s Data Protection Authority.

This Code of Conduct provides a set of best practices for staff leasing agencies handling workers’ and applicants’ personal data, including by addressing the use of AI in selection and recruitment processes. Specifically, the Code of Conduct explains that:

  • Staff leasing agencies can use automated systems in the selection and recruiting process so long as they carry out a detailed impact assessment and provide workers and applicants with clear information about the AI’s mechanisms and periodic reviews.
  • If fully automated systems are used, workers should, at a minimum, be able to obtain human intervention, express their opinion about the decision, and challenge the decision.

Although the Code of Conduct only applies to staff leasing agencies and is not binding, it contains helpful information for all employers that can be used to comply with existing legal obligations that implicate AI, including GDPR. 

III.    The AI Bill

Finally, to ensure they do not run afoul of AI-related rules in the future, employers should familiarize themselves with Italy’s AI Bill and take steps to prepare for its passage. In the employment context, the AI Bill:

  • Reaffirms the principle of fairness and nondiscrimination in the use of AI.
  • Asserts that AI must be used to improve working conditions, protect the mental and physical integrity of employees, and increase the quality of work performance and productivity of people in accordance with EU law.

The AI Bill also specifies that the use of AI in the workplace must be safe, reliable, and transparent, and cannot affect human dignity or breach confidentiality of personal data.

IV.    Recommendations 

In light of the existing and anticipated laws, guidance, and directives regarding the use of AI in sourcing and hiring talent, employers should consider taking the following steps when assessing the benefits and implementation of AI tools in hiring:

  • Understand what the AI is doing for you and the risk levels associated with those functions. Is it a high-risk tool because it is making the decision for you and, thus, lacks human intervention at key points in the process (e.g., resume screening tools, gamified assessments, AI-scored interviews)? Is it a medium-risk tool because it is suggesting a decision and, thus, has some human involvement and oversight at key points (e.g., ranking candidates, pushing online job postings to certain people)? Or is it a lower-risk tool because it is creating content and, thus, allows for human review and oversight before materials are finalized (e.g., drafting job postings, policies, or text summaries of meetings)? 
  • Regularly audit the tool to identify potential algorithmic bias and to evaluate the tool’s results and implement corrective measures as necessary, including by engaging outside experts to do a validation study.
  • Create a task force or internal governing body to coordinate internal oversight and ethical guidelines.
  • Implement a company policy on the use of AI, conduct periodic training on AI use, and only permit those with proper training to use the AI tool and approve any AI-suggested employment decisions.
  • Provide employees and job applicants with a proper written information notice stating the AI’s rules, purposes, and mechanisms.
  • Provide trade union representatives (or relevant territorial trade unions’ bodies) with a proper written notice stating the AI’s rules, purposes, and mechanisms.
  • Include human review in the job application process (e.g., in-person interview without electronics).
  • Include human validation of the final decisions and results given by AI.
  • Ensure that private and sensitive job applicant and employee information used by AI is shielded from improper disclosure and that the AI tool otherwise complies with privacy and data protection regulations, including GDPR.
  • Should the AI also enable monitoring of your employees’ activities, ensure that an agreement with your trade unions’ representatives is reached in advance or, in their absence, that formal authorization from the relevant district labor office is properly obtained.
  • Ensure that any AI vendors you use have considered and taken steps to combat algorithmic bias, including through criterion validity studies; ensure that the vendor contracts allow you to access and, if necessary, disclose the vendor’s records; and discuss with vendors what would happen if there was a legal violation, finding of algorithmic bias, or other issue with their tool.

The firm’s global Labor, Employment, and Workplace Safety practice group, its Data Protection, Privacy, and Security practice group, and members of its cross-practice global Artificial Intelligence team are ready to help you navigate this intricate and ever-changing legal landscape, ensuring that your business not only meets regulatory requirements but also thrives in a competitive market.