Legal Expert Recommends Regular Audit of AI Tools Used for Employment
Continually evaluating whether your use (or non-use) of A.I. aligns with your business objectives will help you decide what risks and rewards you’re willing to take.
Oct. 31, 2023
By Christopher Wood, CPP.
On September 11, 2023, the Equal Employment Opportunity Commission (EEOC) announced that a China-based tutoring company agreed to settle an employment discrimination lawsuit that claimed the employer programmed its online software to automatically reject applicants based on their age, in violation of the Age Discrimination in Employment Act (ADEA) (see Payroll Guide ¶20,480).
[This article first appeared on the Thomson Reuters blog. Used with permission. Some links may go to the Thomson Reuters Checkpoint research system, which requires a subscription.]
This first-of-its-kind settlement comes just months after the EEOC released a new resource regarding artificial intelligence (A.I.) used in the hiring process under Title VII of the Civil Rights Act of 1964 (see Payroll Guide ¶20,450). According to the EEOC, employers are using automated systems (including those with A.I.) more frequently “to help them with a wide range of employment matters, such as selecting new employees, monitoring performance, and determining pay or promotions.”
The federal agency, established 58 years ago to enforce civil rights laws against workplace discrimination, warned that the use of A.I. in the hiring process may “run the risk of violating existing civil rights laws” unless certain precautionary measures are taken. The guidance includes a list of questions and answers designed to help employers avoid potential discrimination issues when utilizing A.I. tools for hiring workers.
The EEOC’s settlement with iTutorGroup resulted in the employer agreeing to pay $365,000 to the more than 200 applicants who were rejected based on their age and to furnish other relief. In addition to enforcement from the EEOC, Congress and state legislatures are considering legislation aimed at preventing discriminatory hiring practices.
For example, the “No Robot Bosses Act of 2023” (L. 2023, S2419), introduced in July of this year by Senators Bob Casey (D-PA) and Brian Schatz (D-HI), would add protections for job applicants and employees related to automated decision systems and would require employers to disclose when and how these systems are being used. Also, both New Jersey (L. 2022, A4909) and New York (L. 2023, A7859) have pending legislation regarding the use of automated tools in hiring decisions.
Recently, Checkpoint Payroll Update spoke with John L. Litchfield, a partner at the international law firm Foley & Lardner LLP, on the increasingly complex subject of A.I. being used in employment decisions. Litchfield, whose primary practice includes counseling clients on various labor and employment-related matters, answered questions about the first steps an employer should take when considering an A.I. tool for hiring, who may be liable when it comes to compliance, proactive measures employers should take when implementing automated decision systems for their hiring process, and more.
The following interview has been edited for length and clarity.
Checkpoint Payroll Update: You work with clients on a variety of workplace matters. How would you first advise a client thinking of using an A.I. tool for their hiring practices?
John L. Litchfield: Ask yourself, first and foremost, why you need the tool. What’s the goal you’re trying to achieve, and how does the A.I. tool help you get there? If you can articulate a business need or advantage to using A.I. in hiring, then do your homework: check whether the vendor’s tool has been adequately vetted and, if so, ask to see the results and consider making reference checks with existing and/or former vendor customers. Or, if you’re developing your own A.I. hiring tool, test it to ensure the results do not generate or magnify unintended biased outcomes.
Consider having an outside vendor help audit the tool before and after implementation, to monitor and correct for any potential disparate impact on protected groups. If, after doing some diligence and ensuring the tool you select aligns with your business goals, you’re comfortable moving forward with implementing an A.I. recruitment and/or hiring tool, then carefully review the contract to understand what indemnities or other protections, if any, you’re getting from the vendor, so you know your legal risks should a challenge to your use of A.I. arise.
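A common starting point for the kind of pre- and post-implementation audit Litchfield describes is a selection-rate comparison across groups. The sketch below is a minimal illustration, using hypothetical applicant records and group labels, of the “four-fifths rule” from the EEOC’s Uniform Guidelines on Employee Selection Procedures, which the agency’s Title VII guidance on A.I. also discusses: a group whose selection rate is less than 80% of the highest group’s rate is conventionally flagged for adverse-impact review.

```python
# A minimal sketch of a four-fifths-rule check. The applicant records and
# group labels below are hypothetical; a real audit would use actual data.
from collections import defaultdict

FOUR_FIFTHS_THRESHOLD = 0.80  # ratio below this is conventionally flagged

def selection_rates(records):
    """Selection rate (selected / applied) for each group."""
    applied = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in records:
        applied[group] += 1
        selected[group] += int(was_selected)
    return {g: selected[g] / applied[g] for g in applied}

def impact_ratios(records):
    """Each group's selection rate relative to the highest group's rate."""
    rates = selection_rates(records)
    benchmark = max(rates.values())
    return {g: rate / benchmark for g, rate in rates.items()}

# Hypothetical audit records: (group label, whether the tool advanced the applicant).
audit_sample = (
    [("40_and_over", True)] * 12 + [("40_and_over", False)] * 38
    + [("under_40", True)] * 30 + [("under_40", False)] * 20
)

for group, ratio in impact_ratios(audit_sample).items():
    status = "flag for adverse-impact review" if ratio < FOUR_FIFTHS_THRESHOLD else "ok"
    print(f"{group}: selection-impact ratio {ratio:.2f} ({status})")
```

A full audit would go further than a ratio check (larger samples, statistical significance testing, counsel involvement), but a screen like this is typically the first test an auditor runs.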
Checkpoint Payroll Update: Can you talk a bit about the exposure or liability a business might face when it comes to discriminatory practices in the hiring process? Is it solely on the employer or can the company providing the A.I. also be held accountable to a certain extent?
John L. Litchfield: The buck will stop with the employer, as its use of A.I. will almost certainly be the target of a discrimination claim. Ultimately, this is because the employer is making the hiring and onboarding decisions, even if it’s with the aid of an A.I. tool. Moreover, most vendors will have standard provisions in their contracts disclaiming their liability and contractually shifting the lion’s share of the risk of using the tool to the employer. While A.I. developers may face some scrutiny or be named as co-defendants in a lawsuit, employers will want to review their contracts with those vendors carefully to understand the liabilities they’re taking on.
Checkpoint Payroll Update: Can A.I. tools be used to help avoid discrimination in the hiring process and other areas of employment?
John L. Litchfield: The utility of A.I. tools in rooting out certain potential discrimination in hiring, compensation analyses, and performance management is promising. By using A.I. technologies in these areas, employers can – if the tools are carefully monitored and properly utilized – reduce the risk of individual biases or miscommunications resulting in disparate-treatment-style discrimination claims, by helping “objectify” the decision-making process around hiring, promotion, compensation, demotion, and corrective action.
Checkpoint Payroll Update: To avoid lawsuits, settlements, penalties, fees, etc., what are some protocols an employer should put in place for compliance purposes? Should human resources and payroll departments be represented in internal/external audits?
John L. Litchfield: As the EEOC has made clear in its guidance and the recent settlement with iTutorGroup, regular and systematic auditing of A.I. tools is a “must” for employers that utilize them in any personnel-related actions. While internal information technology resources may be able to help monitor and audit the A.I. systems, the best approach is to hire an outside auditor who can analyze the data, run tests, and provide objective results and recommendations for any changes or modifications that need to occur. Doing so under the auspices of legal analysis, so as to assert privilege where possible, is a wise approach, particularly if such an audit is done in connection with an actual or threatened legal claim.
Remember, an employer can always waive a privilege if it wants to do so; but it cannot retroactively assert a legal privilege it never had in the first place, so involving legal counsel in determining the risks and rewards of conducting privileged versus non-privileged audits may be desirable.
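As an illustration of what “regular and systematic” auditing might look like in practice, the sketch below (using hypothetical quarterly tallies) recomputes the impact ratio from the earlier example for each audit period and flags any period that dips below the four-fifths threshold, so drift can be escalated for review.

```python
# A minimal sketch of a recurring audit cadence: recompute the selection-rate
# impact ratio each audit period and flag periods below the four-fifths
# threshold. The periods, groups, and tallies are hypothetical.
FOUR_FIFTHS_THRESHOLD = 0.80

# Hypothetical quarterly tallies: (period, group, applicants, selected).
quarterly_tallies = [
    ("2023-Q1", "40_and_over", 120, 30), ("2023-Q1", "under_40", 110, 40),
    ("2023-Q2", "40_and_over", 135, 24), ("2023-Q2", "under_40", 125, 45),
]

for period in sorted({row[0] for row in quarterly_tallies}):
    rates = {g: sel / apps for p, g, apps, sel in quarterly_tallies if p == period}
    benchmark = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / benchmark
        if ratio < FOUR_FIFTHS_THRESHOLD:
            print(f"{period}: {group} impact ratio {ratio:.2f} -- escalate for review")
```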
Checkpoint Payroll Update: A payroll analyst recently said that A.I. would not replace but rather enhance the landscape for the industry. Do you see it that way for A.I. and hiring practices? Is the key to find that sweet spot where A.I. enhances the practice but still involves human input?
John L. Litchfield: Human input will always be needed, and it’s the critical centerpiece of any personnel-related decision. While A.I. tools can, and do, enhance the efficiency and effectiveness of certain decision-making processes, it’s ultimately the human side of the equation that will direct the enterprise. Think about it this way: humans are needed to provide A.I. tools with the data points they collect and synthesize, and humans are needed to decide what to do with the outputs A.I. generates.
A.I. is here to stay, whether your company uses it or not. Monitoring developments in the news and in the law, and continually evaluating whether your use (or non-use) of A.I. aligns with your business objectives, will help you decide what risks and rewards you’re willing to take on in light of the evolving A.I. landscape.