Note: For the purpose of this post, I will be using person-first (people with disabilities) and disability-first (disabled people) language interchangeably. This is to recognise that having a disability does not mean a person lacks other abilities, while also acknowledging that illness (chronic or otherwise) and/or disability can fundamentally change the way a person exists in the world. Illness and/or disability can become a central part of one’s life, with everything else adjusted around it. Disability-first language is also used here to highlight society’s accountability in systematically excluding people from communal property and other human rights.
It is estimated that more than 16% of the global population (~1.3 billion people) has some form of disability. That means roughly 1 in 6 people have a disability that significantly impacts their daily life and activities. Approximately 386 million of this estimated disabled population are of working age (20-60). Unemployment rates among the disability community can be as high as 80%.
Disabled people/people with disabilities are the largest and fastest-growing marginalised group, and anyone can become part of it at any point in their lives. Given this, how can employers account for disability in their company policies?
This blog post considers this question in relation to AI hiring tools, interviewing processes, and how accessibility can be incorporated into them.
What is Accessibility?
The reasons for high unemployment rates in the disability community are as diverse and complex as disabled people themselves. While a common assumption is that disabled people are lazy, lack ambition, or are unable to work, the real reason is that built environments (physical and electronic/digital) are not made in a way that allows them to participate fully.
Accessibility provides disabled people with the material and emotional resources to equitably access physical and electronic/digital environments. To understand accessibility, we need to understand the expanded nature of disability and the impact of historical biases on disabled people’s lives today.
Disability is a complex term. To explain it, I want to base my explanation on the definitions of ableism and disability offered by Talila A. Lewis, a US-based abolitionist community lawyer and disability justice scholar.
According to Talila, disability is a complex identity and ideology; even though there are multiple models and definitions of disability, there is no consensus on a single definition. Talila’s explanation is an extension of the social model of disability, which holds that society is constructed in such a way that it excludes people who are deemed “abnormal.”
This “abnormality” is defined against medical notions of normal and abnormal. Under the social model, it is the responsibility of society and its various institutions to create environments (physical and otherwise) that allow the full inclusion of people who are othered by the way society is constructed.
An example of this in a built working environment would be step-free access to all facilities within the company grounds. Where step-free access is not possible, alternative areas should be provided that someone who uses a wheelchair or other mobility aid can access while maintaining their dignity and right to individual autonomy.
Failing to provide this access is a failure of the society/institution, not a fault of the person with a disability.
The social model of disability is narrow in scope, as it does not consider intersectionality (the possibility of being multiply marginalised through having two or more protected characteristics) or pain conditions in its theorisation.
This is where Talila expands the understanding of disability. According to Talila, disability can be an identity that makes a person subject to present-day manifestations of historical biases that impact their emotional and material conditions: for instance, discrimination based on age, gender, sex, sexual orientation, race, religion, and/or medical conditions such as illness, pain, or impairment.
Ableism, based on this definition, is the act of constructing society in such a way that disabled people, their lives, and their struggles are systemically excluded from the daily functions and the very conscience of humanity. This forces them into invisibility, creating insurmountable socio-economic and socio-cultural disparity. Accessibility is about reversing this historical bias and creating equitable environments for all people.
While we have looked at an immensely expanded understanding of ableism and disability, for the purpose of this blog post, the definition of disability in the Americans with Disabilities Act (ADA), which is a legal definition and not a medical one, will be used: “An individual with a disability is defined by the ADA as a person who has a physical or mental impairment that substantially limits one or more major life activities, a person who has a history or record of such an impairment, or a person who is perceived by others as having such an impairment.”
What does Accessibility mean in the context of job interviews and the interview process?
Employment is one of the few avenues of socio-economic and socio-cultural mobility. Employment and recruitment practices must therefore be made equitable by not repeating historical biases. When a candidate is rejected, it should be because they cannot perform the requirements of the role, not because of intentional or unintentional bias, especially bias introduced by AI interviewing tools.
There is a wealth of information available about why diversity in the workforce is beneficial for overall company growth and profit. The variation in perspectives, skills, and talent among people with diverse life experiences allows a company to expand its reach across multiple community groups.
In fact, the ILO estimates that approximately US$1.37 to 1.94 trillion in GDP is lost annually due to the workplace exclusion of disabled people.
But how can employers create policies and environments that make this diversity constant, consistent, and sustainable in a company’s workforce? One way is to understand how historical bias, misconceptions about health and wellness, and hiring practices interact with each other.
A 2023 Boston Consulting Group survey revealed that about 25% of employees globally self-identify as having a disability, yet most companies report that only 4-7% of their employees are people with disabilities.
Another survey, conducted in 2022, showed that 43% of people worldwide who have invisible but debilitating disabilities decide not to disclose their illness, whether because they are afraid of causing a fuss (30%) or feel they will be treated differently (25%). Nearly a quarter of those affected (23%) fear that their employers will not believe them.
Some 55% of people with invisible disabilities continue to work even while severely or otherwise ill to avoid disclosing to their employers, and 23% report taking holidays to attend medical appointments for the same reason. A 2017 survey in the USA recorded that 30% of employees had a disability, chronic health condition, and/or neurodivergence, yet only 3.2% had disclosed this information to their employer. The reasons related to anticipated stigma and barriers to career progression.
While there are no specific statistics on disability disclosure during interview processes, it is not far-fetched to assume that the same pattern is prevalent there. This is fundamental to understand: if reasonable accommodations are needed to show an interviewee in the best light, some amount of disclosure is required, and it might even be necessary for them to provide medical documents.
These statistics are important because they reveal a pattern of distrust in institutions’ support for vulnerable people, and this distrust is not unwarranted. It is a manifestation of the treatment people have faced themselves, or have watched close family and friends with visible and invisible disabilities face, because of the different ways their bodies and minds exist in the world.
One way to mitigate the issue of low disability disclosure rates, especially when people require reasonable accommodations, is to build a strong, disability-confident, aware, and inclusive brand identity.
This could also involve enabling potential new staff members who may have disabilities to speak with current disabled employees. Such conversations could happen even before the interview process, helping interviewees who are medically diagnosed and otherwise disabled feel more confident in asking for reasonable accommodations.
However, another basic way to ensure the digital interview environment is accessible is to make sure that AI interview tools adhere to the digital accessibility guidelines outlined in the Web Content Accessibility Guidelines (WCAG) 2.2, at a minimum to level AA or, better, the enhanced level AAA.
Whether a person recognises that their conditions require additional accommodations depends on their understanding of the interview environment and process, so sufficient information about both must be clearly disclosed beforehand, allowing requests to be made to the company in time.
Accessibility in the context of an interview means providing all individuals, especially those with disabilities, a fair and equal chance to appropriately demonstrate whether they can meet the job requirements. Accessibility can be the crucial factor in identifying truly valuable talent, rather than losing this talent because the interview process fails to highlight their present skillset and potential.
How can we make AI interview processes more accessible?
AI interview processes here refer to the communication between the company providing the AI hiring tool, the hiring organisation, and the job candidate prior to the interview.
Accessibility involves providing individuals with as much information as possible about a specific event and enabling them to decide if they require reasonable accommodations to participate fairly and equitably.
In the case of an interview, it means offering every opportunity to be assessed fairly and equitably. Even if an accommodation is ultimately considered unreasonable because it would place extreme strain, financial or otherwise, on the hiring organisation, the candidate should still be given a full opportunity to at least request it.
A key step towards achieving this equal footing is to clearly state which definition of disability the company follows, outline the interview process, and explain how to request reasonable accommodations.
It would be beneficial to include this information as a link in every email sent to the candidate during conversations prior to the actual interview.
Another suggestion would be a separate webpage, easily reachable from the AI hiring tool company’s main website, detailing the full list of accessibility features of the AI hiring tools with clear descriptions and images. Nothing about the test elements of the interview needs to be disclosed; the aim is to build candidates’ literacy of the built digital/electronic environment.
For example, if there is a function to adjust the font size of AI-generated captions, it should be clear where candidates can access this feature if it is not immediately visible. This information also helps individuals who may not yet have a diagnosis but exhibit all the symptoms of a medical disability to determine if they can participate without unnecessary stress.
Flexibility about official medical diagnosis
The way our world and its structures are constructed is based on the medical model of disability. This model suggests that a person with a medical impairment needs to be fixed to appear and function as normal. For example, if someone requires a mobility aid, it is their responsibility to seek any and all medical treatments possible to cure themselves of needing a mobility aid. Society does not need to accommodate them, as it is seen as a personal tragedy; therefore, their needs are not considered a societal responsibility.
This definition overestimates medicine’s historic and current capabilities. It is crucial to recognise how medicine can and does fail individuals with incurable but manageable (this term is used very loosely) chronic physical conditions that are rare, such as Ehlers-Danlos Syndrome and Achalasia Cardia; less researched but quite common chronic inflammatory conditions like endometriosis; neurodivergencies such as Autism Spectrum Disorder and Attention Deficit Hyperactivity Disorder; and non-specific pain and fatigue conditions like Fibromyalgia and Chronic Fatigue Syndrome.
All these conditions can take up to or more than a decade to be diagnosed from the first appearance of symptoms. Diagnosis times tend to be longer for marginalised groups, as they are less studied within medical research. This is especially true for women worldwide. I want to emphasise, however, that getting an accurate diagnosis does not necessarily mean the illness is effectively managed, and the person may still require reasonable accommodations.
In cases like this, people who experience these symptoms but lack an official diagnosis or medical treatment must endure the difficult process of seeking answers about their health while also navigating a world that cannot support them due to ingrained institutional biases.
Although a medical certificate could offer some relief by enabling reasonable accommodations, it might not be available because doctors may refuse to provide one due to personal bias or concerns about liability.
I want to highlight that all the medical conditions mentioned above are considered invisible conditions. They can significantly impact an individual's ability to engage in daily activities, yet none of these conditions are visibly apparent.
Requests for reasonable adjustments should be evaluated on a case-by-case basis. Even in the absence of clear medical evidence and/or attestation, such requests should still be given careful consideration and investigation by in-house or third-party counsel specialising in disability-related Equality, Diversity and Inclusion (EDI) matters.
Looking at AI hiring systems and the ways bias can get embedded
Fixing the built environment rarely resolves institutional habits rooted in historical bias, even if improving accessibility is a commendable first step towards inclusion. Disability bias, or ableism, is an insidious and persistent problem that can be difficult to detect.
It is crucial to understand all the ways it can manifest. The previous sections have explored the complexity of disability and the obstacles faced when requesting and being granted reasonable accommodations.
This section discusses how bias can get embedded into an Artificial Intelligence hiring system, influencing the selection process after a potentially reasonably accommodated interview environment.
For this section, I draw heavily on Natalie Sheard’s research paper Algorithm-facilitated discrimination: a socio-legal study of the use by employers of artificial intelligence hiring systems, Melika Soleimani et al.’s research paper Reducing AI bias in recruitment and selection: an integrative grounded approach, and Haley Moss’s research paper Screened Out Onscreen: Disability Discrimination, Hiring Bias, and Artificial Intelligence.
An estimated 87% of companies worldwide use AI in some or all parts of their hiring processes, and 42% use predictive AI systems. While AI can offer superior hire quality, a better candidate interview experience, and immense cost and time savings for companies, Sheard states that AI hiring systems are capable of strengthening and intensifying bias against historically marginalised groups. According to her, this happens for six reasons: 1. Data-driven discrimination, 2. Proxy discrimination, 3. Implementation discrimination, 4. Structural barriers, 5. Failure to provide reasonable adjustments, and 6. Intentional discrimination. Each reason is briefly summarised below.
1. Data-driven discrimination
This form of discrimination stems from the data the AI hiring system is trained on. Sheard raises three main training data issues: data based on the organisation’s own employees, data acquired by the company that built the AI hiring tool, and AI errors arising because the system is not trained on a wide enough sample of data.
To suit a company’s needs, the company’s own workforce data might be added to the AI hiring system’s existing data to fine-tune the tool. This data could simply be the company’s top 50 performers. Such a sample risks lacking sufficient diversity, with incomplete representation of protected characteristics, and might embed the company’s historical biases.
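To make this concrete, here is a minimal sketch of the kind of representation audit a team could run before fine-tuning on an internal "top performer" sample. It is not taken from Sheard’s paper; the file name, column names, and reference shares are all hypothetical.

```python
# Minimal sketch of a training-sample representation audit.
# Hypothetical file/column names and reference shares; illustrative only.
import pandas as pd

# Hypothetical fine-tuning sample, e.g. the company's "top 50 performers".
sample = pd.read_csv("top_performers.csv")  # columns: employee_id, gender, disability_status, ...

# Hypothetical reference shares, e.g. drawn from the applicant pool or census data.
reference_shares = {
    "disability_status": {"disabled": 0.16, "non_disabled": 0.84},
    "gender": {"women": 0.48, "men": 0.48, "non_binary_or_other": 0.04},
}

def representation_gaps(df: pd.DataFrame, benchmarks: dict, min_gap: float = 0.05) -> list[str]:
    """Flag groups that are under-represented in the fine-tuning sample
    relative to a reference population by more than `min_gap`."""
    warnings = []
    for column, expected in benchmarks.items():
        observed = df[column].value_counts(normalize=True)
        for group, expected_share in expected.items():
            observed_share = observed.get(group, 0.0)
            if expected_share - observed_share > min_gap:
                warnings.append(
                    f"{column}={group}: {observed_share:.1%} in sample vs "
                    f"{expected_share:.1%} in reference population"
                )
    return warnings

for warning in representation_gaps(sample, reference_shares):
    print("Under-represented:", warning)
```

A check like this will not catch every bias in the data, but it at least makes the composition of the fine-tuning sample visible before it is handed to the vendor.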
Vendor-acquired data refers to the information collected by the AI hiring tool company to train its system. Sheard gives an example of contextual variation in how population diversity is understood: a tool developed in the US may have been trained on a US perspective of diversity, and when it is used in Australia it may not be equipped to understand the different forms diversity takes in that new environment. In that context, Indigenous people might be disadvantaged.
AI errors refer to mistakes that AI hiring systems can make, such as misjudging candidates due to inadequate training on various accents or related characteristics, leading to poor interview outcomes.
A sizeable share of AI hiring tools offer speech, facial, and behaviour recognition software to judge personality and other traits such as excitement, confidence, and enthusiasm; an estimated 37% of organisations use AI for behavioural analysis and 41% use AI for personality assessment. This software can introduce AI errors. Haley Moss, in her research paper, explains how these systems can affect individuals with cognitive and physical disabilities (Deafness, ASD, ADHD, depression, arthritis, Parkinson’s, etc.). The software assesses micro-expressions and other small facial and body movements as part of the hiring process. For people without medical conditions (physical or mental), such micro-expressions/movements may indicate moods, excitement, or other emotions; for people with these conditions, they might signify nothing other than illness, disorder, or disability. Moss also highlights that AI is good at recognising patterns: if patterns indicative of illness or disorder can be correlated, a disability could be recognised even before the interviewee is aware of it. This raises concerns about employers discovering disabilities before the individual is aware of them or has disclosed them, sometimes without consent.
Here I would also like to add Melika Soleimani et al.’s explanation of “stereotype” bias. One way AI systems can discriminate is by not being fed diverse and nuanced data about atypical presentations of activities that individuals with protected characteristics typically participate in. For instance, it is considered normal for a woman to have career breaks for various caring responsibilities, and an AI system might not discriminate against her for this; however, a man who takes a career break can be discriminated against because the AI system was never introduced to such a pattern.
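One common way to surface this kind of outcome disparity, whatever its source, is to compare selection rates across groups; the “four-fifths rule” used in US employment practice is a widely cited heuristic. The sketch below is illustrative only, with hypothetical data and column names, and is not something the cited papers prescribe.

```python
# Minimal sketch of a selection-rate (adverse impact) check across groups.
# The DataFrame, column names, and 0.8 threshold are illustrative assumptions.
import pandas as pd

results = pd.DataFrame({
    "career_break": ["yes", "yes", "no", "no", "no", "yes", "no", "yes"],
    "advanced":     [False, False, True,  True,  True,  False, True,  True],
})

def adverse_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Selection rate of each group divided by the best-performing group's rate.
    Values below 0.8 are commonly treated as a red flag (four-fifths rule)."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates / rates.max()

ratios = adverse_impact_ratio(results, "career_break", "advanced")
print(ratios)
print("Potential adverse impact for:", list(ratios[ratios < 0.8].index))
```

In this toy data, candidates with a career break advance far less often, so the ratio for that group falls below 0.8 and the disparity is flagged for human review.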
2. Proxy Discrimination
Proxy discrimination involves alternative ways of screening out people with protected characteristics without explicitly mentioning or asking about those characteristics. This can happen through the integration of certain criteria into the AI hiring system’s algorithm, or by allowing users of these tools to screen out candidates through proxy filters.
An example would be a criterion such as an “unbroken work history”, or work experience within the last five to ten years. These can act as proxies for age, sex, and/or disability, identifying and discriminating against candidates who are then rejected without proper due diligence.
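As a rough illustration, a hiring team could screen its own filters for proxy effects by checking whether a seemingly neutral criterion rejects candidates with a protected characteristic at a disproportionate rate. The sketch below uses hypothetical data, column names, and a made-up threshold.

```python
# Minimal sketch: does a "neutral" filter act as a proxy for a protected characteristic?
# Hypothetical data and column names; the 1.25 threshold is an illustrative choice.
import pandas as pd

candidates = pd.DataFrame({
    "unbroken_work_history": [True, False, True, False, False, True, False, True],
    "disclosed_disability":  [False, True,  False, True,  True,  False, False, False],
})

# Rejection rate of the filter, split by the protected characteristic.
rejected = ~candidates["unbroken_work_history"]
rate_disabled = rejected[candidates["disclosed_disability"]].mean()
rate_non_disabled = rejected[~candidates["disclosed_disability"]].mean()

print(f"Filter rejects {rate_disabled:.0%} of disabled candidates "
      f"vs {rate_non_disabled:.0%} of non-disabled candidates")
if rate_non_disabled > 0 and rate_disabled / rate_non_disabled > 1.25:
    print("The filter may be acting as a proxy for disability; review before use.")
```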
3. Implementation discrimination
Implementation discrimination happens in two ways: in model development, especially in collaboration with the hiring team, and in the customisation options that determine whether reasonable accommodations can be provided.
What Sheard refers to when speaking about model development in collaboration with the hiring team is the difficulty of translating ideas like ‘good fit,’ ‘right fit,’ ‘ideal person,’ ‘fit to role,’ and other such terminology into quantifiable mathematical models in an AI hiring system. These notions are not quantifiable or mathematical, and what ends up happening under this constraint is the replication of company biases, sometimes in relation to protected characteristics. Melika Soleimani et al. refer to this as “similar to me” bias. For instance, if a company has a baseball culture and a candidate is qualified for the job but has no interest in baseball, they might be overlooked in favour of a less qualified candidate with an extensive history of liking and participating in baseball, because that candidate is a better ‘fit’ for the company. Baseball here can also act as a proxy for disability.
Discrimination based on customisation concerns company policy around interview guidelines and flexibility: for instance, whether the candidate is given enough information about the interview process to request accommodations if necessary, how long the interview link remains valid, and whether the interview can be retaken if the candidate realises that they need disability-related accommodations.
4. Structural barriers
Sheard identifies two major structural barriers that AI hiring systems introduce: (1) the need for digital resources and digital literacy, even for roles that do not require robust digital skills; and (2) the standardisation of test results, which can force people out of job markets when a poor result, possibly caused by the system’s failure to provide reasonable adjustments, follows them. She supplements the latter with the example of a large retail chain implementing a standardised test across all of its Australian stores. The vendor providing the test may also supply its product to other companies, so when a disabled job seeker interviews with the retail chain and fails for whatever reason (an actual lack of skill or improper handling of reasonable adjustments), the result is carried over to the other organisations the vendor serves. The job seeker can only redo the assessment if their graduate role changes or after a year has passed since their previous test.
On the prior point about digital literacy, Sheard shares some concerning statistics about digital divides along the axes of race, ethnicity, age, and disability within OECD countries: 25% of individuals with disabilities or chronic health conditions in the UK feel left behind by technology, and 23% of individuals in Australia feel ‘excluded’ or ‘highly excluded’ from digital/electronic built environments. Older job seekers may also be discriminated against, as they tend to have lower digital literacy.
5. Failure to provide reasonable adjustments
Before the need for reasonable adjustments even arises, some digital accessibility issues can be resolved simply by complying with WCAG 2.2 AA or AAA standards. When AI interviewing platforms are not WCAG AA or AAA compliant, many disabled people are immediately discriminated against.
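Dedicated auditing tools (such as axe or WAVE) are the usual way to test this, but even a handful of simple automated checks can catch obvious failures. The sketch below is loosely inspired by a few WCAG success criteria (text alternatives, form labels, page language); the HTML snippet is hypothetical, and this is nowhere near a full WCAG 2.2 audit.

```python
# Minimal sketch of a few WCAG-inspired checks: missing alt text, unlabelled
# form fields, missing page language. Not a substitute for a full WCAG 2.2 audit.
from bs4 import BeautifulSoup

html = """
<html><body>
  <img src="company-logo.png">
  <label for="name">Full name</label><input id="name" type="text">
  <input id="cv-upload" type="file">
</body></html>
"""  # hypothetical interview-platform page

soup = BeautifulSoup(html, "html.parser")
issues = []

if not soup.html or not soup.html.get("lang"):
    issues.append("Page is missing a lang attribute (WCAG 3.1.1 Language of Page).")

for img in soup.find_all("img"):
    if not img.get("alt"):
        issues.append(f"Image {img.get('src')} has no alt text (WCAG 1.1.1 Non-text Content).")

labelled_ids = {label.get("for") for label in soup.find_all("label")}
for field in soup.find_all("input"):
    if field.get("type") != "hidden" and field.get("id") not in labelled_ids and not field.get("aria-label"):
        issues.append(f"Form field {field.get('id')} has no associated label (WCAG 1.3.1 / 3.3.2).")

for issue in issues:
    print(issue)
```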
Employers often state that job seekers can request reasonable accommodations and that these will be provided when asked for. But research shows that people with disabilities need security and trust to disclose their disability, which, for a myriad of socio-cultural reasons, they hesitate to do.
6. Intentional discrimination
AI hiring systems can give companies legal cover for historical bias through proxy discrimination, as vendors do not need to be transparent about their AI training data due to intellectual property rights. While Sheard is extremely grim in this section, I want to add that education on inclusive practices is often not widely available, so people may not understand the discrimination they are perpetuating.
Some ways to mitigate AI hiring system biases and the Accessible way forward
Apart from the mitigation strategies outlined throughout this blog post, two other ways to remedy discrimination are rigorous Equality, Diversity and Inclusion (EDI) training for the AI hiring system’s design and development team, and, as disability justice scholars have consistently suggested, including disabled people in those design and development decisions so that accessibility is better incorporated.
There needs to be a fundamental shift in the way accessibility is incorporated into company structures. Accessibility is often treated as an add-on rather than a foundational part of the structure, which is both discriminatory and, in the long run, not cost-effective.
Thinking about accessibility not as an add-on but as a fundamental part of the construction of the AI hiring systems is vital, not only for the recruitment of new talent but also to retain employees, as people can become disabled at any point in their lives due to a myriad of reasons.
This is especially pertinent because all the points outlined above about accessibility also apply to individuals already employed, as those managing the back end of the AI hiring tool might also have disabilities. If the back end of an AI hiring tool is not WCAG 2.2 AA or AAA compliant, it creates a structural barrier that prevents employees with disabilities from engaging in hiring-related activities.
This process of accessibility needs to be constant and consistent. Disability is as varied as the number of people in the world, and no two people experience it in the same way, even if they have the same condition or the same combination of chronic conditions/disabilities. Something that is accessible to one person might not be accessible to another. The system needs to keep the most marginalised in mind while remaining aware that this understanding shifts over time. We need to consider accessibility alongside modularity, ensuring systems are flexible enough to be available to as many people as possible.
