NYC’s New AI Hiring Law Won’t Be Enforced Until April


Companies with employees in New York City have four more months to figure out Local Law 144, a first-in-the-nation law that regulates the use of AI for hiring. The law went into effect on January 1, but the city deferred enforcement until April 15.

Local Law 144 restricts the use of automated employment decision tools (AEDTs), which assist companies with candidate assessments and help them decide whom to hire. The law is aimed squarely at AI- and ML-assisted hiring, and it requires AEDT users to conduct a “bias audit” by a certified auditor that examines the selection rate by race/ethnicity and gender for each job category.

According to the law, the term “automated employment decision tool” means “any computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that issues simplified output, including a score, classification, or recommendation, that is used to substantially assist or replace discretionary decision making for making employment decisions that impact natural persons.” You can read the entire law here.
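The bias audit described above centers on selection rates: the share of assessed candidates in each race/ethnicity or gender category who are selected, which auditors then compare across categories. As a purely illustrative sketch (the data, group labels, and function names below are hypothetical and are not drawn from the law or the DCWP’s rules), the arithmetic might look like this:

```python
from collections import Counter

def selection_rates(candidates):
    """Selection rate (selected / assessed) per group.

    `candidates` is a list of (group, selected) pairs,
    where `selected` is a bool.
    """
    assessed = Counter(group for group, _ in candidates)
    selected = Counter(group for group, chosen in candidates if chosen)
    return {group: selected[group] / assessed[group] for group in assessed}

def impact_ratios(rates):
    """Each group's selection rate divided by the highest group's rate."""
    top = max(rates.values())
    return {group: rate / top for group, rate in rates.items()}

# Hypothetical applicant pool for a single job category:
# group "A" has 40 of 100 selected, group "B" has 20 of 100.
pool = ([("A", True)] * 40 + [("A", False)] * 60 +
        [("B", True)] * 20 + [("B", False)] * 80)

rates = selection_rates(pool)    # {"A": 0.4, "B": 0.2}
ratios = impact_ratios(rates)    # {"A": 1.0, "B": 0.5}
```

A large gap between a group’s ratio and 1.0 is the kind of disparity such an audit is meant to surface, though what thresholds or methodology an auditor must apply is defined by the city’s rulemaking, not by this sketch.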

The new law marks a turning point in AI’s intersection with the law, according to Andrew Burt, the co-founder and managing partner at BNH.AI, a Washington D.C. law firm created to address the legal implications of adopting AI.

“The law represents a tectonic shift in the way that AI systems are regulated–the first of a much larger transition into an environment where increased oversight for AI makes it harder for organizations to deploy and use these systems,” Burt tells Datanami via email.

“Overall, this shift is a good thing, requiring more time and attention to be paid to minimizing the risks of AI,” Burt continues. “But most companies are, at the moment, not yet ready for this shift, which requires new and more thorough review of AI systems, and more fulsome testing of these systems for risks.”


While Local Law 144 was designed to rein in abuses of AI in hiring, a broad interpretation of its new legal requirements could ensnare a larger pool of companies, including those using other computerized methods, according to lawyers with the Mintz law firm, based in New York City.

“While AEDTs that incorporate artificial intelligence (“AI”) are certainly covered by the law, the law broadly defines ‘AEDT’ as including far more than only those processes utilizing advanced AI, and extends to computational processes and systems utilizing more rudimentary analytics, ranking systems, and decision analyses calculated to ‘substantially assist’ employers in the decision-making process,” write Michelle Capezza, Corbin Carter, and Evan Piercey in a blog post on the Mintz website. “Thus, in coming into compliance with the law, employers should closely scrutinize all processes that could be potentially construed as AEDTs.”

After holding two rounds of public comments and a single hearing in October, the NYC Department of Consumer and Worker Protection (DCWP) decided in mid-December to delay enforcement of the law in order to hold a second public hearing, where it can address the numerous questions raised during the public comment period. While companies were encouraged to be in compliance with the new law as of January, the DCWP won’t take any enforcement actions until mid-April, giving companies an additional 15 weeks to determine whether their hiring methods fall within the scope of the new law.

Local Law 144 might be the first new law regulating AI in the nation, but it certainly won’t be the last, says Burt, the former chief counsel at Immuta.

“New regulations and AI oversight mechanisms, such as the local municipal law in NYC that targets AI systems in the employment context, will make AI systems more difficult to deploy in practice–increasing the compliance burden on AI in general,” he says. “In 2023, we can anticipate a host of other jurisdictions will adapt similar mechanisms that require more oversight of AI systems. DC’s Stop Discrimination by Algorithms Act of 2021 is likely next, offering more sweeping regulation of AI systems, following by other local- and state-level efforts. All of which will culminate in the EU AI Act, which will do to AI what GDPR did for privacy–making AI risks a high-level concern for executives at nearly every global company.”

Related Items:

Europe’s New AI Act Puts Ethics In the Spotlight

Do We Need to Redefine Ethics for AI?

Europe’s AI Act Would Regulate Tech Globally

 
