New Virginia AI Law: What Employers Need to Know About Automating HR Decisions
By Eric Postow and Jason Ehrenberg
March 2025
Effective July 1, 2026, Virginia’s High-Risk Artificial Intelligence Developer and Deployer Act (HB 2094) establishes strict obligations for businesses that use AI in ways that affect employment decisions. The law creates both compliance requirements and potential legal exposure for employers that use AI-driven systems in HR functions such as hiring, promotion, and performance evaluation.
Key Terms Defined
- High-Risk AI System: Any AI system that autonomously or substantially influences consequential decisions like employment, housing, healthcare, or legal outcomes.
- Consequential Decision: A decision with material legal or similarly significant effects—such as hiring, promotion, or job termination.
- Algorithmic Discrimination: When an AI system causes unlawful differential treatment or impact based on protected classes (e.g., race, sex, age, or disability).
- Deployer: A business using a high-risk AI system to make consequential decisions.
Obligations for Employers Under HB 2094
If your business uses an AI system that plays a significant role in employment-related decisions, you are likely considered a “deployer of a high-risk AI system” under the new law. This comes with specific legal obligations:
- Risk Management Program
- Design and implement a documented risk management policy consistent with the NIST AI Risk Management Framework, ISO/IEC 42001, or another recognized framework.
- The policy must address the size of the company, scope of use, sensitivity of data, and foreseeable risks.
- Impact Assessments
- Complete an initial assessment for each high-risk AI system, plus an updated assessment within 90 days of each substantial update.
- Assessments must cover data inputs, outputs, purpose, risks of algorithmic discrimination, and post-deployment monitoring.
- Disclosure to Applicants and Employees
- Disclose that an AI system is in use.
- Explain its purpose and role in decision-making.
- Offer the opportunity to correct inaccurate data and appeal adverse decisions with human review, if feasible.
- Ongoing Monitoring and Updating
- Keep documentation current within 90 days of substantial updates to the AI system.
- Summarize publicly how risks of algorithmic discrimination are managed.
Risks of Using High-Risk AI in HR
Employers using AI in HR decision-making face two core legal risks under HB 2094:
- Regulatory Enforcement by the Virginia Attorney General
- Civil penalties of up to $10,000 per violation, plus legal fees.
- Investigative authority includes demanding documentation and assessments.
- Emerging Discrimination Liability
- Although the law does not create a private right of action, claimants may attempt to use AI bias as the basis for federal or state law discrimination claims.
- AI-based discrimination, even if unintentional, could open new legal frontiers.
Employment Law Discrimination and the Intersection with AI
Companies have increasingly used AI tools to screen and analyze résumés and cover letters; scour online platforms and social media networks for potential candidates; and analyze job applicants’ speech and facial expressions in interviews. In addition, companies are using AI to onboard employees, write performance reviews, and monitor employee activities and performance.
AI bias can occur in any of the above use cases, throughout every stage of the employment relationship—from hiring to firing and everything in between—and can result in discrimination lawsuits.
By way of example, the Equal Employment Opportunity Commission (“EEOC”) settled its first AI hiring discrimination lawsuit in August 2023. In Equal Employment Opportunity Commission v. iTutorGroup, Inc., the EEOC sued three companies providing tutoring services under the “iTutorGroup” brand name (collectively, “iTutorGroup”), alleging that iTutorGroup violated the Age Discrimination in Employment Act of 1967 (“ADEA”) because the AI hiring program it used “automatically reject[ed] female applicants age 55 or older and male applicants age 60 or older,” screening out more than 200 applicants because of their age. iTutorGroup subsequently entered into a consent decree with the EEOC, agreeing to pay $365,000 to the group of automatically rejected applicants, adopt antidiscrimination policies, and conduct training to ensure compliance with equal employment opportunity laws.
The ongoing Mobley v. Workday, Inc. litigation, one of the first major class-action lawsuits in the United States alleging discrimination through algorithmic bias in applicant screening tools, presents another example. The plaintiff, Derek Mobley, a man over 40 years old, sued Workday, Inc., claiming that Workday’s AI-driven applicant screening tools systematically disadvantaged him and other older job seekers. Mobley submitted more than 100 applications to companies using Workday’s platform and was rejected every time. He alleged that the AI tools, designed to score, sort, rank, or screen applicants, unfairly penalized older candidates. The court initially dismissed Mobley’s complaint but granted permission to file an amended version; Workday’s subsequent attempt to have the amended complaint dismissed was unsuccessful.
By denying Workday’s motion to dismiss, the court recognized Mobley’s claim as plausible under the ADEA based on a disparate impact theory. In granting preliminary certification under the ADEA, the court allowed the lawsuit to move forward as a nationwide collective action, similar to a class action. The case involves Mobley and four other plaintiffs representing all job applicants ages 40 and older who were denied employment recommendations through Workday’s platform since September 24, 2020. The court determined that the central issue, whether Workday’s AI system disproportionately affects applicants over 40, can be addressed collectively despite the challenges of identifying all potential members of the collective action.
Implications for Employers
This decision marks a pivotal moment in the evolving legal landscape surrounding AI, and the litigation stands as one of the most closely watched cases in the nation concerning AI in employment decisions. The ruling underscores the growing scrutiny of these tools and the potential for significant legal exposure. Employers must proactively assess algorithmic tools for potential bias and ensure compliance with evolving legal standards.
Need Help Assessing AI Risk in Your Hiring or HR Process?
Holon Law’s Artificial Intelligence, Employment Law, and Litigation and Dispute Resolution groups work collaboratively to identify issues, provide sound legal advice and updates on the evolving regulatory landscape governing AI use in employment, and assist clients in navigating these complexities.
