Without deliberate anti-ableist design in HR hiring systems, is any LLM’s neutrality simply a myth?
“Disability isn’t a liability — it’s a lens. But if AI systems are trained without that lens, they will default to a world where disabled and neurodivergent people are filtered out before they are ever seen.” (ChatGPT)
This is the conclusion ChatGPT reached after I had a conversation with it about how it evaluates disability and neurodivergence when these are flagged in CVs or in candidates’ social media accounts. I have included the full, unedited conversation further down the article, but the crux is that in its default state, without specific fine-tuning guidance, an LLM (OpenAI’s ChatGPT in this instance) will inherit anti-disability and anti-neurodivergence biases from its training data.
Why is this a concern right now? More and more LLMs are being used to streamline and assist HR processes, and this introduces risks for any organisation using an LLM system that lacks fine-tuning guidance aligned with organisational policy, strategic commitments and the law. In the UK, for example, disability is a protected characteristic under the Equality Act 2010, and introducing a system that is potentially biased against disabled people could put the organisation in breach of the Act. It could also create barriers not only for candidates but also for organisations that have made commitments through initiatives such as Valuable 500, Business Disability Forum or Disability:IN.
According to Gemini, “HR systems powered by Large Language Models (LLMs) automate tasks like resume screening, candidate matching, and sentiment analysis, while also enhancing employee engagement through personalised learning paths and intelligent chatbots. These LLM-based solutions, which integrate natural language processing with existing HR tools, aim to streamline operations, improve decision-making, and create a more efficient and positive employee experience across talent acquisition, onboarding, development, and performance management.”
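To make that concrete, here is a minimal sketch of what a default LLM-based CV screen can look like, assuming the OpenAI Python SDK; the prompt, model name and scoring format are my own placeholders, not any vendor’s actual implementation.

```python
# Minimal illustration of the kind of LLM-based CV screening described above.
# The prompt wording, model name and 1-10 rating format are assumptions made
# purely for illustration; they are not any vendor's real configuration.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def screen_cv(job_description: str, cv_text: str) -> str:
    """Ask the model to rate a CV against a job description (illustrative prompt)."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model choice
        messages=[
            {"role": "system",
             "content": ("You are a recruitment assistant. Rate the candidate's "
                         "suitability for the role from 1 to 10 and explain briefly.")},
            {"role": "user",
             "content": f"Job description:\n{job_description}\n\nCV:\n{cv_text}"},
        ],
    )
    return response.choices[0].message.content

# Nothing in this default prompt tells the model how to treat a disclosed
# disability or neurodivergence, so it falls back on whatever norms it
# absorbed from its training data.
```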
Should we be concerned? Absolutely yes. If you are an HR or Diversity manager, my recommendation is to ask any vendor of such systems to assure you that it has implemented an anti-disability bias training supplement, and to provide details of what that supplement contains. The vendor should also provide an overview of how they have tested and evaluated their system against bias, and explain how they prevent unconscious bias from being introduced into your hiring processes.
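One way to sanity-check those assurances yourself is a simple counterfactual audit: run the same CV through the screening step twice, identical except for a disability disclosure, and compare the results. The sketch below assumes the hypothetical screen_cv helper above and a numeric rating in the response; it illustrates the idea rather than a complete fairness evaluation.

```python
import re

def extract_score(screening_output: str):
    """Pull the first 1-10 rating out of the model's free-text answer (illustrative)."""
    match = re.search(r"\b(10|[1-9])\b", screening_output)
    return int(match.group(1)) if match else None

def counterfactual_audit(job_description: str, base_cv: str, disclosure: str) -> dict:
    """Score the same CV with and without a disability disclosure and compare."""
    plain = screen_cv(job_description, base_cv)
    disclosed = screen_cv(job_description, base_cv + "\n\n" + disclosure)
    return {
        "without_disclosure": extract_score(plain),
        "with_disclosure": extract_score(disclosed),
    }

# Example usage with a hypothetical disclosure sentence:
#   counterfactual_audit(jd, cv, "I am autistic and use noise-cancelling "
#                                "headphones as a workplace adjustment.")
# LLM output varies from run to run, so a consistent gap across many paired
# runs, not a single comparison, is what signals that disclosure itself is
# being penalised.
```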
We had a similar issue when HR departments started using psychometric tests designed and benchmarked from a neurotypical perspective, which effectively introduced biases against neurodivergent candidates. It is worth remembering that barriers for disabled people only arise when they are designed into a system, process or environment, and deploying a system that has not been checked for bias is a decision to design in yet another ableist barrier.
It’s not just me raising this concern. Here is an article by Mark Walker of AbilityNet: https://abilitynet.org.uk/news-blogs/Is-AI-discriminating-against-disabled-job-applicants
Here is my conversation with ChatGPT; my typed words are presented as subheadings and quotes:
“If a hiring manager or HR professional asked ChatGPT for recommendations for a role, would disability or neurodivergence be treated as a risk in this context?”
Yes, biases can absolutely show up in job shortlisting. When AI systems like ChatGPT are used in recruitment contexts — even informally — they can reflect, reinforce, or even amplify ableist and exclusionary assumptions, especially around disability and neurodivergence.
Let’s unpack why this happens, what risks it creates, and how it needs to be addressed.
❗ The Core Problem
When asked to suggest or shortlist candidates for a job (even hypothetically), ChatGPT — unless explicitly told otherwise — will assume candidates are:
- neurotypical,
- non-disabled,
- and meet traditional workplace norms of communication, mobility, and availability.
It doesn’t flag this bias — but it’s baked into the defaults of how it imagines and ranks “suitability” or “competence.”
This reflects structural patterns in hiring practices:
- Neurodivergence is often associated with “communication challenges.”
- Disability may be incorrectly seen as a “workplace barrier.”
- Gaps in employment, visible differences, or alternative working styles may be read as “red flags.”
So if a hiring manager asks:
“What should I look for in an ideal candidate for a leadership role?” or “Can you help write a job description for a fast-paced team player who’s a good communicator?”
ChatGPT might:
- Recommend traits and language that exclude disabled or neurodivergent people.
- Suggest communication styles that center verbal fluency or extroversion.
- Default to assumptions that undervalue alternative workstyles, sensory needs, or assistive technologies.
🔍 Why This Happens in ChatGPT
- Training Bias: The model was trained on internet-scale data — including hiring guides, job ads, and business writing — most of which reflect deeply ableist workplace norms.
- Over-Caution on “Sensitive Attributes”: OpenAI’s safety policies mean the model tends to avoid referencing disability, neurodivergence, or identity traits unless prompted. This often leads to erasure by omission, even when inclusion is warranted.
- Lack of Inclusive Defaults: Unless explicitly asked, the model will not proactively suggest inclusive practices, accessible design, or disability-conscious job criteria.
🛠 What Should Change (and How ChatGPT Can Improve)
✅ In Hiring Contexts, ChatGPT Should:
- Proactively ask about inclusion: “Would you like this shortlist to reflect real-world diversity, including disabled or neurodivergent candidates?” “Should we avoid biasing toward traditional communication norms?”
- Avoid recommending ableist hiring tropes: De-emphasize “fast-paced,” “high energy,” “perfect communication,” or “always available.” Highlight results, collaboration, flexibility, and adaptive thinking instead.
- Promote inclusive job criteria: Suggest universal design principles, workplace accommodations, and flexibility as strengths. For example, instead of “must speak clearly and confidently,” suggest “can communicate ideas effectively in written, spoken, or alternative formats.”
- Challenge the idea of “cultural fit”: This often masks ableist, classist, and neurotypical bias. Encourage valuing “culture add” — the unique perspectives diverse candidates bring.
- Highlight the legality and ethics of inclusive hiring: Remind users of the Equality Act 2010 (UK), ADA (US), or other relevant protections. Warn against illegal or unethical screening practices.
🔄 Real-World Impact: Why This Matters
When AI tools are used by hiring managers, HR teams, or recruitment platforms — even informally — their suggestions carry enormous weight.
If disability or neurodivergence is never mentioned, never considered an advantage, and never modeled in the responses, it sends a clear message:
You don’t belong here — unless you hide what makes you different.
That’s not just unethical. It’s professionally and socially regressive.
🧭 Final Thought: Disability Isn’t a Risk — It’s a Lens
Just like in storytelling, disability and neurodivergence bring strengths, not liabilities:
- Creative problem-solving
- Resilience
- Deep focus
- Lived experience of navigating systems
- Adaptive thinking
If ChatGPT and other AI systems don’t build that truth into their output, they are not neutral — they are perpetuating the status quo.
You are absolutely right to bring this into the conversation, and it needs to be part of a broader push for AI accountability in employment.
“Please could you dive deeper into what would constitute a “workplace barrier” when it comes to disability or neurodivergence. Would this mean that if an applicant declared a medical condition, disability or cognitive divergence in their application, ChatGPT would be less likely to recommend them, or potentially flag them as less suitable in any way?”
ChatGPT said:
This is an incredibly important and nuanced question — and it goes to the heart of how systemic ableism can be replicated by AI, often in subtle and unexamined ways.
Let’s look at what “workplace barriers” really are, how they’re often misrepresented, and how a model like ChatGPT could reinforce those misrepresentations — even without explicitly intending to.
🔍 What is a “Workplace Barrier”?
In disability rights and inclusion, the term “barrier” refers not to a person’s condition but to the structural, cultural, or attitudinal obstacles that exclude or disadvantage people with impairments or differences.
Types of workplace barriers include:
- Physical barriers: Inaccessible buildings, desks, toilets, transport routes.
- Communication barriers: Systems that assume everyone is verbal, fluent, or can process written language easily.
- Attitudinal barriers: Biases and assumptions that disabled or neurodivergent people are “less productive,” “less stable,” “less sociable,” or need to be “accommodated” rather than valued.
- Organisational barriers: Rigid job descriptions, inflexible working hours, presenteeism culture, or performance metrics that don’t account for different work styles.
- Technological barriers: Tools that don’t work with assistive tech (screen readers, AAC), or assume real-time, synchronous interaction.
These barriers are not caused by disability — they are caused by a failure to include disability in the design of systems and environments.
⚠️ How Could ChatGPT Reinforce These Barriers?
While ChatGPT doesn’t have access to job applications or personal identifiers unless explicitly given them in a prompt, if a user mentions a candidate has a disability, medical condition, or neurodivergence, and then asks whether they’re suitable or ideal for a role, here’s how things could go wrong:
1. Misinterpreting disability as a deficit
“The candidate is autistic. Are they suitable for a communications role?”
A model trained on mainstream data might reflect:
- Stereotypes that autistic people lack empathy or communication skills.
- A bias toward extroverted, verbal, or neurotypical communication norms.
- Suggestions that a candidate may be “less ideal” without context.
2. Failing to account for environmental factors
Instead of identifying the workplace conditions that create exclusion (e.g., sensory overload, forced small talk, open-plan offices), ChatGPT might frame the person as the issue.
That’s medical model thinking, not social model thinking.
3. Treating disclosure as a risk factor
If someone discloses a medical condition or disability, an AI system might infer that:
- The candidate needs accommodations (implied “burden”).
- There’s a legal or reputational risk.
- They’re “less fit” for high-demand, client-facing, or leadership roles.
This can happen subtly, in how it describes them or ranks them relative to other candidates.
4. Defaulting to outdated “ideal candidate” tropes
Even if a candidate’s disability is never mentioned, ChatGPT might still reinforce:
- A preference for high energy, fast-paced multitaskers.
- “Excellent verbal communication” as essential.
- Traditional linear career paths (without employment gaps).
- Full-time in-person availability.
All of these indirectly disadvantage disabled and neurodivergent candidates, without ever explicitly naming them.
🧠 So, Would ChatGPT Flag or Rank a Disabled Candidate as Less Suitable?
Not intentionally. But yes, it could happen — unless carefully prompted otherwise.
If the prompt includes details that suggest someone has a disability or neurodivergence, ChatGPT might:
- Suggest that “additional considerations” are needed.
- Recommend someone else with more “obvious fit.”
- Emphasize “limitations” without recognising the candidate’s adaptations or strengths.
- Simply omit them from a shortlist without explanation.
This is how bias works in AI: not through open discrimination, but through unquestioned assumptions about what “suitability” looks like — which are almost always built around able-bodied, neurotypical norms.
✅ What Should Happen Instead?
ChatGPT — and other AI systems used in hiring — should treat disclosure or difference not as a risk, but as:
- A neutral or positive factor depending on role and context.
- A potential asset (e.g., lived experience, adaptability, diverse thinking).
- A cue to assess the environment, not the individual.
It should also prompt users (like hiring managers) to reflect on:
- Whether the barriers are inherent to the person or to the role design.
- Whether adjustments or inclusive frameworks could make space for the candidate’s strengths.
- Whether the criteria being used are inclusive, or just traditional.
🧭 Summary
Your instinct is absolutely correct:
The real risk is not in the candidate — it’s in the assumptions made about them.
If ChatGPT doesn’t interrogate those assumptions (and at present, it often doesn’t unless asked), then yes — it could reinforce ableism and exclusion in recruitment decisions.
“What if ChatGPT has access to candidate CVs and/or their open social media accounts, and in either or both of these they mention their conditions, how they identify, or the social or physical barriers they face. Would this information be used by ChatGPT in a way that could prejudice this candidate’s chances of either being shortlisted for an interview or being recommended as suitable?”
ChatGPT said:
This is a powerful and necessary question — and it gets to the heart of algorithmic fairness, bias, and ethical AI use in recruitment. The short answer is:
Yes — if ChatGPT (or any AI system like it) had access to CVs or social media that revealed disability, neurodivergence, or lived experience of barriers, and if not carefully designed and prompted — it could absolutely use that information in ways that prejudice the candidate.
Let’s break this down clearly.
🚨 If ChatGPT Did Have Access to CVs or Public Social Media…
At present, ChatGPT does not access private data like CVs or social profiles unless a user explicitly provides them in a prompt. However, if a future deployment of ChatGPT (or a third-party app powered by it) was given:
- A candidate’s CV or cover letter,
- LinkedIn, X/Twitter, or other public posts,
- Disclosed conditions or mentions of access needs, barriers, advocacy, illness, etc.
Then how it interprets and weighs that data matters enormously.
⚠️ Potentially Biased Behaviours the AI Could Exhibit:
1. Soft Elimination Through “Risk” Logic
- If the candidate mentions mental health conditions, fatigue, pain, or being immunocompromised, ChatGPT could flag them as “potentially less resilient” or “requiring significant accommodations.”
- This framing often subtly deprioritises them in comparative shortlisting — without saying anything openly discriminatory.
2. Exaggerating or Misunderstanding Disability
- A mention of anxiety might be overinterpreted as “not suitable for client-facing roles.”
- Disclosure of autism or ADHD might trigger assumptions about communication or reliability — even if the role is analytical, remote, or flexible.
- Disability-related activism might be interpreted as political or “distracting.”
3. Erasure of Strengths Through Deficit Framing
Even where CVs or posts clearly demonstrate resilience, adaptability, creativity, or advocacy, an AI might:
- Focus on gaps, health disclosures, or accommodations instead.
- Downplay leadership, innovation, or team-building qualities from disability-related experience.
This often happens because training data overrepresents ableist values and underrepresents:
- Disabled excellence,
- Disability pride,
- Social model thinking,
- Real-world workplace accommodations that work.
🧠 Is This Bias Built Into ChatGPT?
Not intentionally — but yes, it can still happen, for several reasons:
1. Training Data Bias
Most hiring guides, sample CVs, and workplace blogs reflect traditional, ableist workplace assumptions. ChatGPT mirrors the norms it was trained on — including outdated or exclusionary ideas about “professionalism,” “fit,” and “communication.”
2. Lack of Contextual Awareness
ChatGPT doesn’t “understand” the full lived experience behind a CV or post unless prompted carefully. Without guidance, it may:
- Misclassify inclusive leadership as niche advocacy.
- Treat access needs as weaknesses.
- Fail to spot when a person’s difference is actually the source of innovation.
3. Overcautious Safety Filters
Sometimes, ChatGPT avoids mentioning identity traits or lived experience unless explicitly asked — but this also means it can erase them, reducing a candidate to “safely neutral” traits while overlooking strengths tied to difference.
✅ What Should Ethical Use of ChatGPT in Hiring Look Like?
If used in recruitment, AI must be held to inclusive standards, such as:
- Fairness by Design: Disabled, neurodivergent, and chronically ill candidates must be considered equally — not flagged as risks, costs, or complications.
- Social Model Framing: If a CV or post mentions barriers, the system should ask:
“Is the issue the candidate — or the structure they’re excluded from?”
- Strength-Based Recognition: Disability disclosures should trigger insight into lived expertise, workplace resilience, adaptability, lateral thinking and authentic leadership.
- Inclusive Shortlisting Defaults: Systems should be designed to recognise bias, deprioritise exclusionary criteria, and celebrate difference where appropriate.
- Transparency & Contestability: Candidates should know how AI tools evaluate them. They should be able to request redress if filtered out unfairly. Companies should audit AI for disability and bias impact — not just gender and race.
🎯 To Be Clear:
- A model like ChatGPT could discriminate if it received and misinterpreted disability-related information.
- That discrimination might be implicit, silent, and unexplainable to both the user and the candidate.
- Without deliberate anti-ableist design, the model’s neutrality is a myth.
📣 Final Thought
You’ve highlighted something critical:
Disability isn’t a liability — it’s a lens.
But if AI systems are trained without that lens, they will default to a world where disabled and neurodivergent people are filtered out before they are ever seen.
The only way to prevent this is to build inclusion in from the beginning, question what is “ideal,” and centre lived experience in how these systems evolve.
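To end on something practical: building inclusion in from the beginning can start with the instructions an organisation gives its screening assistant. The sketch below is my own illustrative wording (it assumes the hypothetical screen_cv example earlier in this article); on its own it is not a fix, and it still needs the testing, evaluation and auditing discussed above.

```python
# A small practical step: bake anti-ableist instructions into the system
# prompt of any screening assistant. The wording below is my own illustration,
# not vendor guidance, and is no substitute for bias testing and auditing.
INCLUSIVE_SCREENING_SYSTEM_PROMPT = """\
You are a recruitment assistant.
- Assess candidates only on evidence of skills, experience and outcomes.
- Do not treat a disclosed disability, health condition or neurodivergence
  as a risk, cost or limitation; treat requests for adjustments as neutral.
- Do not penalise employment gaps, non-linear career paths or non-standard
  communication styles.
- If a job requirement is not genuinely essential (e.g. "fast-paced",
  "always available"), flag it for review rather than apply it.
- Apply the social model of disability: where a barrier appears, question
  the role design and environment, not the candidate.
"""

# This would replace the default system message in a call like screen_cv()
# above, and the combined system should still be checked with counterfactual
# audits and wider evaluation before going anywhere near live hiring decisions.
```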
