Mobley v. Workday sits at the center of a fast-moving debate about how employers use automated hiring systems and whether those systems comply with disability law. In this context, AI in hiring usually refers to software that screens resumes, ranks applicants, recommends interview candidates, analyzes assessments, or automates communication at scale. The ADA, or Americans with Disabilities Act, prohibits disability discrimination in employment and requires reasonable accommodation for qualified applicants and employees. When AI tools mediate access to jobs, ADA compliance is no longer a narrow HR concern. It becomes a product design issue, a procurement issue, a litigation issue, and a governance issue.
Mobley v. Workday matters because it tests a question I have seen companies struggle with in practice: when a vendor provides the algorithmic infrastructure for hiring, who bears responsibility if disabled applicants are screened out unfairly? Many employers assume the software vendor owns the technical risk while they own only the final hiring decision. That assumption is weak. Courts, regulators, and plaintiffs are looking at the entire decision chain, including screening thresholds, chatbot interactions, online assessments, and accommodation pathways. A system can create barriers even before a human recruiter sees an application.
This article serves as a hub for AI and ADA issues by explaining the case, the legal theories behind it, the technical mechanisms that can create disability bias, and the compliance steps employers and vendors should take now. It also connects related topics that organizations should understand alongside this case, including EEOC guidance on algorithmic decision-making, the Uniform Guidelines on Employee Selection Procedures, Title VII disparate impact analysis, Section 508 and digital accessibility concepts, and state and local automated employment decision tool rules. The central point is straightforward: if an AI hiring system disadvantages applicants with disabilities, liability may attach not only to the employer using it but potentially to the company that designed, marketed, or operated it.
That is why Mobley v. Workday is more than a single lawsuit. It is a signal case about how anti-discrimination law adapts when employment decisions are partially delegated to software. For legal teams, HR leaders, product managers, and procurement officers, the future of AI in hiring will turn on whether they can prove job-relatedness, provide meaningful accommodations, validate tools properly, and maintain human oversight that is real rather than cosmetic.
What Mobley v. Workday is about and why the case matters
Mobley v. Workday was filed as a proposed class action by a job applicant who alleged that Workday’s AI-driven screening tools discriminated against applicants who were Black, over 40, and disabled. The suit drew immediate attention because Workday is widely used by major employers as a hiring and talent platform. The allegations did not merely target a single employer’s recruiting choices. They focused on a software provider whose products can influence applicant flow across many employers and industries. That scale is exactly why the case matters. If a common hiring platform embeds exclusionary logic, the impact can repeat thousands of times.
One of the most important developments in the litigation was the court’s willingness to let certain claims proceed under the theory that a vendor can, in some circumstances, function as an employment agency or otherwise participate sufficiently in the hiring process to face anti-discrimination scrutiny. That does not mean every software provider is automatically liable. It does mean courts may look beyond formal labels and examine operational reality: who designed the ranking model, who set default criteria, who collected data, who interpreted scores, and who materially shaped who was visible to employers. In the AI hiring market, those are not peripheral questions. They are core liability questions.
For ADA analysis, the case highlights a recurring problem. Many digital hiring systems were built around efficiency metrics such as completion rates, response speed, keyword matching, and historical profile similarity. Those inputs can penalize applicants with disabilities unless accommodations are built into the workflow from the start. I have seen organizations discover too late that timed assessments could not be extended easily, chatbot interfaces did not support screen readers well, or resume parsers mishandled nonstandard career paths associated with medical leave. None of those failures looks dramatic in a product demo. In litigation, each can become evidence of a barrier to equal opportunity.
How the ADA applies to AI hiring systems
The ADA applies to the employment process from recruitment through selection, onboarding, and employment terms. In hiring, the law prohibits screening out qualified individuals with disabilities unless the standard, test, or selection criterion is job-related and consistent with business necessity. It also requires reasonable accommodation so applicants can participate in the process. Those two concepts are the foundation for evaluating AI hiring systems. First, does the tool create a screen-out effect for people with disabilities? Second, if it does, can the employer justify the criterion and provide accommodation?
The EEOC has made this concrete in technical guidance on algorithmic decision-making and disability discrimination. The agency has warned that employers may be responsible when software vendors design tools that unlawfully screen out disabled applicants, even if the employer did not intend discrimination. That is consistent with long-standing principles. Delegating a step in the hiring process does not delegate legal responsibility. If an assessment platform rejects candidates who need more time because of a cognitive disability, the employer cannot simply say the vendor set the timer. The question will be whether the process allowed an accommodation and whether the time limit was necessary for the job.
The ADA also intersects with accessibility law and digital design principles, even though those frameworks are not identical. An inaccessible interface can itself be discriminatory if it prevents an applicant from completing a required step. For example, if a video interview platform requires speech analysis but an applicant has a speech impairment, the issue is not just usability. It is whether the selection method unfairly excludes the applicant and whether an alternative assessment exists. Similarly, if an online form is incompatible with assistive technology, the barrier appears before merit is evaluated at all.
Employers should not treat ADA compliance as a last-mile accommodation hotline. It must be embedded in the entire procurement and deployment process. That means documenting essential job functions, defining valid selection criteria, testing for adverse impact, enabling accommodation requests at each stage, and retaining audit trails that show how decisions were made. Without that foundation, defending an AI-enabled process becomes difficult very quickly.
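One of those foundation steps, adverse impact testing, can start simply. The sketch below applies the four-fifths rule drawn from the Uniform Guidelines to stage-level selection rates. The group labels, counts, and 0.8 threshold are illustrative, and a ratio below 0.8 is a screening flag that warrants investigation, not a legal conclusion; real analyses also need adequate sample sizes and statistical testing.

```python
# Minimal sketch of a four-fifths (80%) rule check on selection rates.
# Group labels, counts, and the 0.8 threshold are illustrative.

def selection_rate(selected: int, applied: int) -> float:
    """Share of applicants in a group who passed the screening stage."""
    return selected / applied if applied else 0.0

def four_fifths_check(groups: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Compare each group's selection rate to the highest-rate group.

    Returns the impact ratio per group; ratios below 0.8 are a common
    (not conclusive) flag for adverse impact worth investigating.
    """
    rates = {g: selection_rate(sel, app) for g, (sel, app) in groups.items()}
    benchmark = max(rates.values())
    return {g: (r / benchmark if benchmark else 0.0) for g, r in rates.items()}

# Hypothetical stage outcomes: (selected, applied) per self-identified group.
stage_outcomes = {
    "disclosed_disability": (12, 80),
    "no_disclosed_disability": (150, 600),
}

for group, ratio in four_fifths_check(stage_outcomes).items():
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: impact ratio {ratio:.2f} [{flag}]")
```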
Where disability bias enters an automated hiring workflow
Disability bias in AI hiring does not arise from one source. It can enter through training data, feature selection, interface design, assessment structure, and operational rules. Historical data is a common problem because many models learn patterns from prior hiring outcomes. If past hiring favored uninterrupted work histories, standard communication styles, or conventional educational trajectories, a model may replicate those preferences even when they are poor proxies for job performance. Disability-related patterns, including treatment gaps, rehabilitation periods, or assistive technology use, can be misread as lower quality signals.
Feature engineering can create similar issues. Seemingly neutral variables such as response time, typing speed, eye contact, facial expressiveness, commute assumptions, or calendar availability can correlate with disability. In practice, I am especially cautious about tools that claim to infer traits from voice, video, or game-based behavior. These products often overstate scientific validity, and they create obvious accommodation challenges. The more a tool relies on behavioral proxies instead of direct measures of job skills, the harder it is to defend under a business necessity analysis.
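A basic feature review along these lines can be automated. The following sketch flags scoring features whose values correlate with a disability-related indicator, here a hypothetical accommodation-request flag. The feature names, sample data, and review cutoff are all assumptions; a real audit would need far larger samples and proper statistical testing.

```python
# Illustrative proxy-feature audit: flag scoring features whose values
# correlate with a disability-related indicator (here, a hypothetical
# accommodation-request flag). Feature names and threshold are assumed.
from math import sqrt

def pearson(xs: list[float], ys: list[float]) -> float:
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

# Hypothetical per-candidate data: behavioral features plus a 0/1 flag.
features = {
    "response_time_sec": [4.1, 9.8, 3.5, 12.0, 4.4, 10.5],
    "keyword_match_pct": [62.0, 58.0, 71.0, 65.0, 60.0, 57.0],
}
requested_accommodation = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]

THRESHOLD = 0.5  # arbitrary review cutoff for this sketch
for name, values in features.items():
    r = pearson(values, requested_accommodation)
    note = "review as potential proxy" if abs(r) >= THRESHOLD else "low correlation"
    print(f"{name}: r={r:+.2f} ({note})")
```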
Another source of risk is workflow rigidity. A candidate may need an alternative format, extra time, or a human contact when the system fails. If the platform offers no clear accommodation path, the discrimination may stem from process architecture rather than the algorithm alone. This is why product teams and legal teams need to work together. In several audits I have conducted, the decisive compliance issue was not model bias in the abstract. It was the absence of a usable exception process when applicants encountered barriers.
| Hiring stage | Common AI use | ADA risk example | Practical safeguard |
|---|---|---|---|
| Application intake | Resume parsing and knockout questions | Parser misreads employment gaps linked to disability | Manual review path and parser testing on varied resumes |
| Assessment | Timed skills or cognitive tests | No extra-time option for applicants needing accommodation | Accommodation workflow before and during testing |
| Interviewing | Video or speech analysis | Tool penalizes speech impairment or limited eye contact | Alternative interview formats and validation by job task |
| Ranking | Candidate scoring models | Proxy features correlate with disability status | Feature review, adverse impact testing, score explainability |
| Communication | Chatbots and automated scheduling | Interface inaccessible to screen reader users | Accessible design and staffed support channel |
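To make one safeguard from the table concrete, here is a minimal regression-style test for the intake row's "parser testing on varied resumes" idea: confirm that a knockout rule does not auto-reject candidates solely for employment gaps, which can reflect medical leave. The `passes_knockout` function and resume fields are hypothetical stand-ins for whatever intake rules a real platform applies.

```python
# Sketch of a regression test for the intake stage: verify that a
# knockout rule does not auto-reject candidates solely for employment
# gaps. `passes_knockout` is a hypothetical stand-in for a real rule.

def passes_knockout(resume: dict) -> bool:
    # Example rule: require a relevant skill; do NOT gate on gap length.
    return "python" in {s.lower() for s in resume["skills"]}

varied_resumes = [
    {"name": "continuous history", "skills": ["Python"], "gap_months": 0},
    {"name": "two-year gap", "skills": ["Python"], "gap_months": 24},
]

for resume in varied_resumes:
    assert passes_knockout(resume), f"rejected: {resume['name']}"
print("intake rule treats gapped and continuous histories alike")
```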
What employers and vendors must prove when challenged
When an AI hiring tool is challenged under the ADA, broad claims of innovation or efficiency are not enough. Employers and vendors need evidence. The first requirement is a clear definition of the job and its essential functions. If the tool screens for traits or behaviors that are not actually necessary for successful performance, the defense weakens immediately. The second requirement is validation. Under established employee selection principles, employers should be able to show that the assessment or ranking process predicts job performance or measures a relevant capability. A vendor white paper with marketing language is not a substitute for a validation study.
Third, organizations need accommodation procedures that work in real conditions. That means notice to applicants, accessible request channels, prompt human follow-up, and alternative methods of assessment where appropriate. A buried help link is not enough. If a candidate cannot complete a required step because of a disability, the system should route the issue to a trained reviewer quickly. Fourth, auditability matters. Companies should preserve logs showing the version of the model used, the criteria applied, changes made over time, and outcomes by stage. Without that record, it becomes difficult to rebut a claim that the tool systematically screened out disabled applicants.
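What such an audit trail might capture can be sketched briefly. The record below is illustrative, not a prescribed schema; the point is that each automated decision preserves the model version, the criteria in force, and the stage outcome in an inspectable, append-only format.

```python
# Minimal sketch of a per-decision audit record. Field names are
# illustrative; capture model version, criteria, and stage outcome
# so later review and rebuttal are possible.
import json
from dataclasses import dataclass, asdict, field
from datetime import datetime, timezone

@dataclass
class ScreeningAuditRecord:
    candidate_id: str             # internal ID, not raw personal data
    stage: str                    # e.g. "intake", "assessment", "ranking"
    model_version: str            # exact model/config version applied
    criteria: dict                # thresholds and features in force
    outcome: str                  # "advanced", "rejected", "human_review"
    accommodation_requested: bool
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = ScreeningAuditRecord(
    candidate_id="cand-0042",
    stage="ranking",
    model_version="ranker-2.3.1",
    criteria={"min_score": 0.62, "features": ["skills", "experience"]},
    outcome="human_review",
    accommodation_requested=True,
)
# Append-only JSON lines are one simple, inspectable storage format.
print(json.dumps(asdict(record)))
```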
Vendors face a parallel burden. If they market a system as capable of identifying best-fit candidates, they should be ready to explain what data the model uses, what it does not use, what populations were tested, and what limitations exist. Contract language that disclaims all responsibility while promoting strong predictive power is increasingly untenable. Sophisticated buyers now demand bias testing, accessibility conformance documentation, security controls, model change notices, and cooperation clauses for investigations. That is not overcautious procurement. It is the new baseline for responsible adoption.
The future of AI in hiring after Mobley
Mobley v. Workday points toward a future in which AI hiring tools are treated less like neutral software and more like regulated decision systems. That does not mean automation will disappear. It means successful tools will be narrower, better validated, more transparent, and easier to override. The market is already moving in that direction. Employers are asking tougher questions about adverse impact, explainability, and accommodation. Regulators are signaling that accessibility and anti-discrimination duties apply even when the process is technologically sophisticated. Courts are showing interest in how much real control vendors exercise.
The smartest organizations are responding by redesigning governance, not just tweaking models. They inventory every automated touchpoint in recruiting, classify each by legal risk, and set approval gates before deployment. They require vendor due diligence, including accessibility reviews against recognized standards such as WCAG, selection validation evidence, and incident response commitments. They test outcomes periodically rather than assuming a one-time review is enough. They also train recruiters to understand when human intervention is required. Human oversight matters only if the human reviewer has authority, context, and time to act.
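The inventory-and-gate approach can be expressed as a simple governance check. In this sketch the tool names, risk tiers, and gate names are assumptions; the idea is that a tool cannot be deployed until the reviews required for its risk tier are complete.

```python
# Sketch of the inventory-and-gate idea: list each automated touchpoint,
# classify its legal risk, and block deployment until required reviews
# are done. Tool names, risk tiers, and gate names are illustrative.
from dataclasses import dataclass

REQUIRED_GATES = {
    "high": {"validation_study", "adverse_impact_test", "wcag_review", "legal_signoff"},
    "medium": {"adverse_impact_test", "wcag_review"},
    "low": {"wcag_review"},
}

@dataclass
class HiringTool:
    name: str
    risk: str              # "high" | "medium" | "low"
    completed_gates: set

    def cleared_for_deployment(self) -> bool:
        return REQUIRED_GATES[self.risk] <= self.completed_gates

inventory = [
    HiringTool("resume_ranker", "high", {"validation_study", "wcag_review"}),
    HiringTool("scheduling_bot", "low", {"wcag_review"}),
]

for tool in inventory:
    missing = REQUIRED_GATES[tool.risk] - tool.completed_gates
    status = ("cleared" if tool.cleared_for_deployment()
              else f"blocked, missing {sorted(missing)}")
    print(f"{tool.name}: {status}")
```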
For this AI and ADA hub, the practical lesson is clear. The future of AI in hiring will belong to employers and vendors that can combine efficiency with defensible fairness. Start with the basics: map the hiring workflow, identify where disability barriers may arise, create accommodation options at every stage, validate each selection criterion against actual job requirements, and document everything. If your organization uses automated assessments, chatbot screening, resume ranking, or video interview analytics, review them now. The legal frontier is no longer theoretical. It is operational, and the organizations that act early will be in the strongest position to hire well and withstand scrutiny.
Frequently Asked Questions
What is Mobley v. Workday, and why is it considered such an important case for AI in hiring?
Mobley v. Workday is widely discussed because it raises a foundational legal question: when employers use software to screen, rank, and process job applicants, who is responsible if that technology creates barriers for people with disabilities? The case has become a focal point in the broader debate over AI-driven employment tools because Workday’s platform is used by many employers to manage hiring workflows, including applications, applicant tracking, and candidate evaluation processes. That makes the case relevant not just to one company, but to the broader ecosystem of vendors and employers relying on automated hiring systems.
At a practical level, the case matters because modern hiring often depends on software to sort through high volumes of applicants. These tools may score resumes, filter candidates by selected criteria, automate communications, or help decide who advances to interviews. If those systems are designed or deployed in ways that disadvantage applicants with disabilities, the legal consequences can be significant under the Americans with Disabilities Act. Mobley v. Workday highlights the idea that discrimination in hiring does not have to come from a human interviewer making an openly biased decision. It can also arise from digital processes, screening logic, inaccessible interfaces, or automated assessments that exclude qualified individuals.
The case is important for another reason as well: it reflects how employment law is adapting to the realities of algorithmic decision-making. Courts, regulators, and employers are all grappling with whether existing anti-discrimination laws apply cleanly to AI systems, and if so, how responsibility should be assigned among software vendors, employers, and other actors. In that sense, Mobley v. Workday is not only about one dispute. It is a signal that AI in hiring is no longer just a compliance side issue. It is becoming a central employment law, disability law, and risk management concern.
How does the ADA apply to AI hiring tools like resume screeners, ranking systems, and automated assessments?
The ADA applies to hiring practices regardless of whether decisions are made by a recruiter, a hiring manager, or software. In other words, using AI does not remove an employer’s obligation to avoid disability discrimination. If an automated system screens out qualified applicants with disabilities, relies on criteria that unfairly disadvantage them, or fails to allow reasonable accommodations, the technology can become part of an unlawful hiring process. The legal standard does not disappear simply because the decision was partially automated.
In the hiring context, this can happen in several ways. A resume screener may prioritize keywords or employment patterns that do not reflect the qualifications of some disabled applicants. A chatbot or application portal may be inaccessible to applicants using assistive technology. A timed cognitive or personality assessment may disadvantage an individual with a disability if no reasonable accommodation is provided. Video interview analysis tools can also present ADA concerns if they evaluate speech patterns, facial expressions, eye contact, or response speed in ways that may correlate with disability rather than job-related ability.
The ADA also requires reasonable accommodation during the application and hiring process. That means employers generally need to provide an accessible path for applicants who need adjustments, whether that involves alternative assessment formats, extra time, a non-automated review option, or assistance with the application process. The core legal principle is that qualification should be measured in a fair and accessible way. Employers cannot simply point to software outputs and assume they are neutral. If the tool becomes a barrier to equal opportunity, ADA compliance issues are likely to follow.
Can an employer be held liable if it relies on a third-party AI vendor to help make hiring decisions?
Yes. In general, employers cannot outsource their legal responsibilities by delegating part of the hiring process to a technology vendor. If an employer uses a third-party platform to receive applications, score candidates, recommend interviewees, or otherwise influence employment decisions, that employer may still face liability if the process discriminates against applicants with disabilities. The fact that a vendor built the tool does not automatically shield the employer from ADA claims. From a compliance standpoint, employers are expected to evaluate whether the systems they use are lawful, accessible, and appropriate for the job-related decisions being made.
That said, Mobley v. Workday is especially notable because it also raises questions about when a software provider itself may face legal exposure. If a vendor plays a sufficiently direct role in the hiring process, helps structure the screening criteria, or effectively participates in employment decision-making, courts may be asked to consider whether the vendor has responsibilities under employment discrimination laws as well. This is one reason the case is being watched so closely. It tests the legal boundaries between a company that merely supplies software and one that meaningfully shapes how applicants are evaluated.
For employers, the takeaway is clear: vendor contracts, marketing assurances, and generic claims about fairness are not enough. Organizations should conduct meaningful due diligence, review accessibility, assess whether the tool has adverse effects on disabled applicants, and ensure there is a process for accommodation requests and human intervention. For vendors, the lesson is similar. If a product is marketed as an AI hiring solution, the design choices built into that product may carry serious legal and reputational consequences. Shared involvement can mean shared risk, even if the exact scope of liability depends on the facts and the court’s interpretation.
What specific risks do AI hiring systems create for applicants with disabilities?
AI hiring systems can create both obvious and subtle risks for applicants with disabilities. One of the most visible risks is accessibility failure. If an online application system does not work properly with screen readers, keyboard navigation, captioning, voice input tools, or other assistive technologies, qualified applicants may be blocked before they are even considered. That is a direct barrier, and it can raise serious ADA concerns independent of any scoring algorithm.
Another major risk involves how automated tools define and measure “fit,” “professionalism,” “communication,” “attention,” or “cognitive ability.” These systems may rely on proxies that unintentionally penalize disability-related traits. For example, a video interview tool may score candidates lower because of atypical eye contact, facial movement, speech cadence, or delayed response times. A gamified assessment may disadvantage individuals with motor impairments, neurodivergence, vision limitations, or certain mental health conditions if the format is rigid and no accommodation option is built in. Even systems that appear neutral on their face can create exclusion if they reward behavioral patterns that are not actually necessary for job performance.
There is also the risk of cumulative exclusion. Many employers use multiple layers of automation: resume parsing, knockout questions, online assessments, ranking models, and automated scheduling. A disabled applicant may encounter friction or bias at several points rather than one. This can make discrimination harder to spot because no single step seems decisive in isolation, yet the end result is still exclusion from fair consideration. That is why disability-related risk in AI hiring should be evaluated across the full applicant journey. Employers need to ask not only whether the system is efficient, but whether it is accessible, job-related, accommodation-ready, and genuinely fair to qualified individuals with disabilities.
What should employers do now to reduce ADA risk when using AI in hiring?
Employers should start by treating AI hiring tools as high-risk compliance systems, not simple productivity software. The first step is to map where automation is used in the recruiting process, including application portals, resume screening, assessments, interview analysis, ranking tools, and communications platforms. Once that process is visible, employers should evaluate whether each tool is necessary, what it measures, whether those measurements are job-related, and whether the tool could disadvantage applicants with disabilities. If there is no clear business justification or no reliable way to validate fairness, the tool may create more legal risk than operational value.
Accessibility and accommodation should be built into the process from the outset. Employers should ensure application systems are usable with assistive technology, provide clear instructions for requesting accommodations, and offer practical alternatives when a standard assessment or automated workflow creates a barrier. Human review is also important. Applicants should not be left with a fully automated process that gives them no meaningful way to seek help, explain a disability-related issue, or request reconsideration. When an AI system influences who gets filtered out, a human oversight mechanism can be essential for both fairness and legal defensibility.
Finally, employers should strengthen governance around vendor management, auditing, and documentation. That includes asking vendors detailed questions about training data, validation studies, accessibility testing, adverse impact analysis, and accommodation capabilities. Internal teams should document why a tool is used, how it is monitored, and what steps are taken to identify and correct potential bias. Legal, HR, procurement, and IT should all be involved. Mobley v. Workday underscores that the future of AI in hiring will not be shaped by innovation alone. It will be shaped by whether employers can deploy these tools in ways that comply with disability law, preserve equal opportunity, and stand up to scrutiny from courts, regulators, and applicants themselves.