Using AI to Hire? Here’s What We’ve Learned Since NYC Changed the Game
In July 2023, New York City began enforcing Local Law 144, a first-of-its-kind regulation targeting the use of AI in hiring. For many employers, it was the first time they had to confront the idea that their recruitment tools (especially the ones labeled "automated," "smart," or "AI-powered") might be doing more harm than good if left unchecked. The law didn't ban algorithmic tools outright but demanded something radical: transparency. If you were using automated tools to help decide who got hired or promoted in NYC, you had to commission an annual independent bias audit, make the results public, and notify candidates that AI was involved.
Nearly two years later, it’s safe to say the game really has changed. While many companies scrambled to comply with the technical requirements, others quietly paused or abandoned AI hiring tools altogether, unsure how to meet the new expectations. But the ripple effect didn’t stop at New York’s borders. Employers across the country, especially those operating in multiple jurisdictions, started paying closer attention. This law may have started locally, but its impact is anything but.
For employers using AI in hiring today, the real question isn't whether you're operating in New York City. It's whether your hiring systems can stand up to legal, ethical, and operational scrutiny. Local Law 144 became a conversation starter, and what we've learned since then offers important lessons for every organization trying to balance efficiency, fairness, and innovation in recruitment.
Lesson One: AI Tools Aren’t “Plug and Play”
One of the clearest takeaways from NYC's approach is that AI hiring tools aren't as hands-off as many vendors claim. Employers quickly discovered that using a third-party platform didn't mean you were off the hook when it came to compliance. If the tool was used to "substantially assist or replace" an employment decision, by scoring, ranking, or filtering candidates, it triggered the law's requirements. That meant annual independent bias audits, transparency about what data was being used, and a willingness to share results publicly.
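It helps to see how simple the heart of the audit actually is. The core numbers an LL144-style bias audit reports are selection rates per demographic category and impact ratios comparing each category to the highest-selected one. Here's a minimal sketch in Python, with invented category names and figures; a real audit must be conducted by an independent auditor and cover the categories (including intersectional ones) that the city's rules specify.

```python
# A minimal sketch of the core calculation in an LL144-style bias audit:
# selection rates and impact ratios per demographic category.
# All names and numbers are hypothetical.

# (selected, total applicants) per category -- invented figures
outcomes = {
    "Category A": (120, 400),
    "Category B": (45, 200),
    "Category C": (30, 180),
}

selection_rates = {g: sel / total for g, (sel, total) in outcomes.items()}
best_rate = max(selection_rates.values())

for group, rate in selection_rates.items():
    impact_ratio = rate / best_rate
    # The EEOC's four-fifths rule (0.8) is a common reference point;
    # Local Law 144 itself requires publishing the ratios rather than
    # hitting a fixed threshold.
    flag = "  <-- review" if impact_ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.2%}, impact ratio {impact_ratio:.2f}{flag}")
```

The point isn't the code; it's that these numbers are straightforward to compute once you have the data. Vendors who can't, or won't, produce that data are the ones to worry about.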
For many companies, this was the first time they had to really interrogate how their tools worked. Some found that vendors were reluctant to provide the necessary data. Others realized they didn’t fully understand the metrics the tools used to rank candidates or how those metrics might impact different demographic groups. In some cases, the lack of transparency from vendors left employers scrambling to find alternatives or do damage control with internal stakeholders.
This was a wake-up call: AI in hiring isn’t a simple plug-in to make things faster or easier. It’s a system that needs to be managed, evaluated, and audited just like any other part of your business that affects people’s lives. And the responsibility doesn’t end with the software provider. Employers are on the hook for the outcomes, which means oversight has to be baked into the process from day one.
Lesson Two: Compliance Isn’t the Same as Fairness
Another lesson learned is that following the rules doesn't always equal doing the right thing. Local Law 144 created a baseline (bias audits, public disclosure, and candidate notice), but many early adopters found themselves thinking more deeply about what fairness actually means. Just because a tool passes a statistical audit doesn't mean it's inclusive. And just because a candidate is notified that AI is in use doesn't mean they understand how that AI is making decisions.
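One concrete reason a passing audit can still fall short: aggregate numbers can hide subgroup disparities, which is partly why NYC's rules require intersectional categories. Here's a hypothetical illustration (all figures invented) in which a tool looks fine comparing women to men overall, yet one intersectional subgroup fares far worse.

```python
# Hypothetical illustration: aggregate results can mask subgroup disparities.
# All figures are invented for demonstration.

subgroups = {
    # (selected, total applicants) per intersectional subgroup
    ("Women", "Group X"): (50, 100),
    ("Women", "Group Y"): (5, 50),
    ("Men", "Group X"): (40, 100),
    ("Men", "Group Y"): (20, 50),
}

# Aggregate view: compare selection rates by sex alone.
def aggregate_rate(sex):
    counts = [v for k, v in subgroups.items() if k[0] == sex]
    return sum(s for s, _ in counts) / sum(t for _, t in counts)

women_rate, men_rate = aggregate_rate("Women"), aggregate_rate("Men")
print(f"Aggregate impact ratio (women vs. men): "
      f"{min(women_rate, men_rate) / max(women_rate, men_rate):.2f}")  # ~0.92, looks fine

# Intersectional view: the same data, broken out by subgroup.
rates = {k: s / t for k, (s, t) in subgroups.items()}
best = max(rates.values())
for group, r in rates.items():
    print(f"{group}: selection rate {r:.2%}, impact ratio {r / best:.2f}")
# ("Women", "Group Y") comes out at a 10% selection rate, an impact
# ratio of just 0.20 against the best-performing subgroup.
```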
Some employers used the law as an opportunity to start broader conversations. Legal, HR, and data teams began collaborating more closely to define what “fair hiring” should look like in their organization. Others brought in outside experts or formed ethics review boards to assess the tools they were using, not just for compliance, but for alignment with company values. These weren’t just check-the-box exercises. They were culture shifts that forced organizations to confront whether their technology was actually serving their goals or just reinforcing old patterns under a new name.
At the same time, some companies chose the bare minimum route. They ran audits, posted PDFs, and hoped no one would ask follow-up questions. But what became clear over time is that candidates, regulators, and even internal stakeholders are becoming more informed and more curious. Transparency isn’t a one-time notice; it’s a mindset. And companies that embraced that mindset are better prepared for whatever regulation comes next.
Lesson Three: This Is Just the Beginning
What started in New York hasn't stayed in New York. Other states and cities—including California, Illinois, and the District of Columbia—have begun proposing or enacting similar laws. Federal agencies are also turning up the heat. The EEOC has made it clear that AI bias in employment decisions is on its radar, and the White House Blueprint for an AI Bill of Rights has laid the groundwork for more accountability in automated systems. In short, if your company is still treating Local Law 144 as a one-off or a regional quirk, it's time to think bigger.
The last two years have shown us that AI in hiring is no longer a futuristic concept; it’s a present-day reality with real risks and real consequences. If your systems aren’t designed with fairness and transparency in mind, you’re not just at legal risk; you’re also at reputational and operational risk. Today’s job seekers are savvy. They know when they’re being filtered out by a black box, and they’re asking tougher questions about how decisions are made.
Smart employers are getting ahead by treating AI governance as a strategic priority, not a compliance burden. They’re building internal processes for tool evaluation, engaging vendors in meaningful conversations about explainability, and exploring how to balance automation with human oversight. They’re investing in education and upskilling their teams to understand the implications of using AI in recruitment. And perhaps most importantly, they’re choosing to lead, not just follow.
Final Takeaway: Responsible Hiring Starts With Transparency
If your organization is using AI or plans to in any part of the hiring process, NYC’s Local Law 144 offers more than just a legal roadmap. It’s a chance to reflect on what kind of employer you want to be. Are you building hiring systems that treat people with dignity, or are you chasing efficiency at the cost of fairness? Are you ready to answer questions from regulators, candidates, and your own team, or are you still in the dark about how your tools actually work?
Two years after NYC changed the game, the path forward is becoming clearer. AI in hiring isn’t going away, and neither are the demands for fairness, accountability, and transparency. The companies that will thrive in this next chapter are the ones willing to ask hard questions, make thoughtful choices, and do the work of responsible innovation, not just because they have to, but because they believe it matters.
Need help navigating AI in hiring or preparing for your next compliance audit?
At The AI Shift, we help organizations make sense of fast-changing AI regulations, evaluate their tools, and build ethical, human-centered hiring systems. Whether you're untangling compliance requirements or designing for fairness from the ground up, we're here to guide the shift, one smart, responsible decision at a time. Let’s talk.