(RESPOND TO ONE OF THE QUESTIONS AT THE END OF THE ARTICLE & RESPONSE I ATTACH AT THE END)
Introduction
In 2026, companies across diverse sectors implemented AI-powered recruitment software to help them sort through large numbers of resumes. In theory, this would increase the process's objectivity while ensuring faster and more effective identification of potential applicants. Unfortunately, in most cases the Artificial Intelligence (AI) tools employed turned out to be discriminatory, which led to significant controversy in the industry. As Kahneman explains in Thinking, Fast and Slow, systems designed to simplify decision-making often rely on System 1 thinking: fast, automatic judgments that can unintentionally reinforce prior biases rather than eliminate them. One widely publicized case involved a U.S.-based tech organization whose AI-powered hiring tool automatically discriminated against women applying for developer positions three times more often than against men. The algorithm had been trained on a database collected over the course of ten years, containing biased information from historically male-dominated hiring processes. Other discriminatory practices included favoring certain skin tones and rejecting candidates with particular accents or educational backgrounds.
Identified Decision-Making Biases:
Automation Bias is the over-reliance on automated processes in decision-making while underplaying the importance of human judgment. In this case, Human Resource (HR) professionals fully trusted the AI systems and rejected candidates based solely on the algorithm's recommendation, without taking human factors into account. Organizations justified removing human input from the hiring process because they believed AI was inherently objective, and so they simply followed the system's recommendations. This reduces the role of critical thinking and interpretation, making the hiring process more rigid and blind to individual differences among candidates. It also reflects the distinction between System 1 and System 2 thinking: reliance on automation reduces the engagement of System 2, the slower, more deliberate mode, resulting in unchecked acceptance of outputs.
The Illusion of Validity is false confidence in the effectiveness of decision-making tools without concrete evidence of their reliability. People often overestimate their ability to predict outcomes even when their data is unreliable or incomplete, and this overestimation results in overconfidence. Kahneman illustrates the illusion of validity with his army camp example: he and his colleagues were confident that their observations were accurate enough to predict leadership ability, although their data did not correctly predict which soldiers would become leaders. Even after learning that their data was inaccurate, they remained confident in their predictions. Organizations likewise believed their AI hiring processes were effective because the outputs appeared consistent and accurate, when in fact the AI merely replicated the patterns of previous hiring decisions, selecting similar candidates while failing to find the best candidates in the current pool. This created a sense of false objectivity. Because the outputs and data looked structured, they appeared trustworthy and credible, which increased confidence in the system's effectiveness. Yet the output was a mere repetition of existing patterns, with embedded biases that appeared fair and efficient.
Confirmation Bias further fueled this problematic approach to hiring through AI. Many companies were convinced of the efficacy of their previous hiring processes, trusting the quality of their staff; hence, anything that confirmed their beliefs was immediately taken as proof of the AI hiring algorithm's effectiveness. Research shows that the employees who oversee AI-based hiring tend to mirror these biases rather than correct them. Hiring staff reinforced the AI system's recommendations, causing the confirmation-bias cycle to continue unchecked.
The Representativeness Heuristic is the psychological habit of assessing the probability or appropriateness of something based on its similarity to an existing prototype rather than on actual probabilities. This heuristic led to the formation of prototypes of ideal employees based on successful candidates of the past, who were often male and had particular qualities, education, and experience. Candidates who did not match the prototype but possessed comparable skill sets and prospects were automatically filtered out of the process. In effect, mere similarity to previous hires was incorrectly treated as an indicator of future success. A famous example of this heuristic is the recruiting engine Amazon built in 2014 to accelerate the identification of top applicants; Amazon later discovered that it was not rating resumes in a gender-neutral manner.
Implications of biased decision-making
Several trends in biased decision-making have emerged from these cases:
- Over-Reliance on AI results (Automation Bias)
- Unquestioning Acceptance of AI Algorithms' Input Data (Illusion of Validity)
- Strengthening Existing Beliefs through Decision-Making (Confirmation Bias)
- Application of Simple Mental Models to Complex Choices (Representativeness Heuristic)
Instead of addressing and mitigating human bias, the AI systems escalated it, scaling it from the individual actions of individual recruiters to decisions affecting thousands of candidates. In simpler terms, where one recruiter could discriminate against only so many applicants, AI inadvertently discriminated against a vast number of applicants across the entire system. Kahneman argues that when biases are embedded in systems rather than individuals, their impact becomes even more amplified, turning small errors into major systemic consequences.
Examples of Real-World Decision-Making Bias:
AI Frenzy and Investments – The fast growth of AI companies like Tesla and OpenAI has sparked massive investment in AI technology and generated unrealistic expectations about its benefits. Decisions regarding the technology's prospects have been largely influenced by biases such as optimism bias, the availability heuristic, herd mentality, and the Dunning-Kruger effect.
Hospitals' Failure to Plan for Staff Shortages – Although there were indications of gradual improvement, many hospitals failed to consider possible staffing problems after the pandemic. This failure is attributed to the planning fallacy, optimism bias, and the availability heuristic. As Kahneman describes, the planning fallacy causes decision-makers to underestimate the risks and timelines of the plans they have committed to by focusing on best-case scenarios rather than realistic outcomes, which also reflects optimism bias. Carrying on based on previous staffing trends despite the changes caused by the pandemic is a classic example of the availability heuristic.
Unnecessary Recruitment Followed by Massive Layoffs in Tech Companies – Tech companies' rapid recruitment during the pandemic-driven boom, and the mass layoffs that followed not long after, were influenced by herd mentality, recency bias, and overconfidence.
Conclusion
As these examples show, technology does not eliminate bias but amplifies it. The belief that technological advances would provide organizations with quality information and help them make correct decisions resulted in the repetition of past mistakes. Proper managerial decision-making calls for critical thinking and constant fact-checking through System 2 thinking. Where biases such as optimism, status quo bias, and satisficing go unmitigated, organizational risks may go unnoticed until they cause significant damage. Decision-making mistakes do not happen randomly; they occur predictably and systematically because of the fallibility of System 1 thinking in the adoption of new technology. The more widely AI is used, the greater the risks involved, because where an individual's bias might affect only a few dozen people, AI biases can affect hundreds of thousands. Thus, it is crucial for organizations to identify and mitigate these kinds of managerial decision-making biases.
Questions – Please Answer One of the Following:
- Do you think AI can ever be completely unbiased in the hiring process?
- What are some ways that we can identify bias in the use of AI?
- How can companies reduce bias in their AI hiring systems?
- What are the ethical risks of relying heavily on AI in hiring decisions?
- How can managers ensure they are activating System 2 thinking with AI systems?
References:
AI Empire Media. (2026). The real AI startups are failing in 2026. Medium.
Ruiz, N. (2026, February 19). In 2026, AI frustration is the new customer service crisis. Forbes Business Council.
Gold, A. (2026, April 13). The work AI boom is outrunning oversight. Axios.
Barrabi, T. (2026, April 9). Google's AI overviews spew out millions of false answers per hour, bombshell study.
Helpful Professor. (n.d.). 22 heuristics examples (the types of heuristics).
Gadinis, S. (2025, June 25). Beyond the corporate culture wars: How companies are revolutionizing decision-making on social issues. Harvard Law School Forum on Corporate Governance.
Number Analytics. (n.d.). Heuristics in decision making.
Grant Thornton. (2026, April 13). A widening AI proof gap is emerging, but well-governed AI is showing results.
Dastin, J. (2018, October 10). Amazon scraps secret AI recruiting tool that showed bias against women. Reuters.
Telford, T. (2025, November 25). Why you shouldn't count on humans to prevent AI hiring bias.
Pilat, D., & Krastev, S. (2021). Illusion of validity. The Decision Lab.
Kahneman, D. (2011). Thinking, fast and slow. Farrar, Straus and Giroux.
2. RESPOND TO THIS RESPONSE (PREBI B.)
What are the ethical risks of relying heavily on AI in hiring decisions?
One of the many risks of heavy reliance on automated hiring processes is scaled discrimination. Unlike individual decision-makers, AI can discriminate on a mass scale, affecting thousands of applicants at once and thus creating an unfair, structurally discriminatory process. Another ethical concern is the lack of accountability: when organizations shift responsibility to machines, it creates gray areas around questions of who is answerable. Courts increasingly hold organizations accountable even in cases involving AI-powered recruiting.
In addition, with increased automation, managers fail to engage System 2 thinking and rely excessively on machine outputs. In essence, AI does not necessarily eliminate bias but changes how it manifests, and in hiring decisions, critical evaluation and transparency are essential. With System 2 safeguards removed, AI simply packages biased opinions in technical language.