
Welcome to The Hero 🗞️. This is approximately a 3-minute read.
😬 The 3 ways AI in hiring gets teams in real trouble
🧑‍⚖️ The laws that are already live in the U.S. (and spreading fast)
✅ A “minimum viable compliance” checklist you can do in 30 minutes
🎬 Here’s The Scene
A candidate emails you: "Hey - why did I get rejected?"
And your internal answer is: "Uh… the AI said no."
That, my friends, is a lawsuit waiting to happen.
Here's the thing:
Legally speaking, AI isn't just "automation" -
It's a decision system.
And if that decision system can't be explained, measured, or overridden…
You've just built a black box - and shoved all your candidates inside.

The full breakdown is just below - don’t miss it! 😉
🔗 Links of the Day
Here are some of the best links I’ve found since last time I emailed you:
🗺️ Email Outreach & Recruiting Templates
50+ Recruiting Email Templates To Win Candidates in 2025 (link)
Best Cold Email Templates for Recruiters (link)
🔎 Recruiting & HR Industry Trends
AI Adoption in Recruiting (link)
Recruitment Best Practices Shaping 2025 (link)
🧑‍💼 Interview Scheduling & Process
Top 8 Interview Scheduling Tools for 2025 (link)
Top Interviewing Techniques to Streamline Hiring (link)
✅ Staffing Industry & Technology Insights
Future of Staffing Technology: Trends to Watch in 2025 (link)
Top 5 Staffing & Recruitment Technology Trends (link)
🍳 The 3 Ways Teams Get Cooked
1) “We didn’t mean to bias it.”
Nobody ever does.
But if your AI disproportionately screens out a protected group and you can't justify why?
That's not just "AI being AI."
It’s legal risk.
And the U.S. is already in a patchwork era - different cities and states are writing their own AI hiring rules.

2) NYC already turned “AI hiring” into homework
If you hire in NYC and use an automated tool to screen or rank candidates, Local Law 144 means you're now dealing with:
Annual bias audits
Candidate notice requirements
Public summary disclosures
And even if you're not in NYC…
Other states will follow (so consider this your warning: get ready to justify every AI hiring decision you make ⚠)
3) Video interviews with AI analysis = legal minefield
Anything that analyzes faces, expressions, tone, or voice - that’s where regulators are cracking down first.
In fact, several states already have rules for AI video interviews - Illinois's Artificial Intelligence Video Interview Act, for example - covering:
Notice and consent
What gets evaluated
Limits on data sharing
Deletion requirements
The bottom line:
"The AI didn't like your tone" is impossible to defend in court (if it gets that far).
😅 Minimum Viable “Don’t Get Sued” Checklist
If AI touches hiring decisions, you need three things:
✅ 1) Tell people (disclosure)
When and where AI is being used, and what it's doing.
This isn’t a 9-page policy - just plain English that lets candidates give informed consent.
✅ 2) Keep a human steering wheel (control)
Don't let your AI tool make the decision.
Let it be an input.
A human needs to be able to overrule the tool and own the final call.
✅ 3) Keep receipts (evidence)
Audit/validation from the vendor + your own tracking.
So if someone asks you to "prove it," you can.
To Sum It Up…
AI isn't the danger.
Unaccountable AI is.
Use AI like a power tool - massive leverage, but easy to lose a finger 😱

If you can explain it, prove it, and override it…
You're fine.
If you can't...
You're playing with fire.

How We Can Help
There are a few ways - or you can just reply to this email.
I reply to absolutely everyone who writes me back 🙂

