AI in HR needs human judgement, not just blind automation, says ILO chief economist

ILO Chief Economist Janine Berg says AI tools in HR are increasingly automating critical people decisions, often without HR teams fully understanding how outcomes are generated.

As organisations increasingly turn to artificial intelligence to manage hiring, scheduling, performance, and workforce decisions, questions are growing around whether these systems are truly improving HR outcomes or simply automating flawed processes at scale.


According to Janine Berg, managing people at work has always been complex, but the rise of online applications and generative AI has made recruitment even more overwhelming for employers. The growing volume of applicants has pushed organisations towards AI-assisted hiring systems in an attempt to simplify decision-making.


However, Berg argues that this rush toward automation is outpacing organisations’ understanding of whether these systems actually work.


In a recent working paper by the International Labour Organization, Berg and co-author Hannah Johnston examined AI applications across key HR functions including recruitment, compensation, scheduling, and performance management. Their research evaluated these systems based on three factors: the objective of the system, the quality and suitability of data used, and how the algorithms are programmed.


One major challenge, Berg notes, is that HR decisions involve complex human behaviours that are difficult to reduce into measurable variables. While AI works well for straightforward tasks such as calculating routes or processing structured data, people management involves “messy” human realities that are far harder to quantify.


For example, one organisation attempted to use AI to identify candidates with a “growth mindset” by measuring how frequently applicants used words like “growth”, “development”, or “learning” during interviews. Berg questioned whether such systems were actually measuring meaningful human qualities or merely counting keywords.
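The keyword-counting approach Berg questions can be sketched in a few lines. This is a hypothetical illustration of the general technique described above, not the organisation's actual system; the keyword list and function names are assumptions:

```python
# Hypothetical sketch of keyword-based "growth mindset" scoring.
# It counts how often certain words appear in an interview transcript,
# illustrating that the score rewards word choice, not the underlying trait.
import re
from collections import Counter

MINDSET_KEYWORDS = {"growth", "development", "learning"}  # assumed keyword list


def keyword_score(transcript: str) -> int:
    """Count occurrences of the target keywords in a transcript."""
    words = re.findall(r"[a-z]+", transcript.lower())
    counts = Counter(words)
    return sum(counts[k] for k in MINDSET_KEYWORDS)


# Two candidates describing similar attitudes in different vocabulary:
a = "I focus on growth, learning new skills, and my own development."
b = "I constantly push myself to improve and pick up new skills."
print(keyword_score(a))  # 3 -- rewarded for using the keywords
print(keyword_score(b))  # 0 -- similar sentiment, different words
```

As the sketch shows, a candidate who happens to use the "right" words scores highly while another expressing the same attitude scores zero, which is exactly the gap between counting keywords and measuring a meaningful human quality.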


The paper also highlights growing concerns around the quality and relevance of data feeding AI systems. Many organisations rely on historical company data or third-party datasets that may not accurately represent their workforce realities. In some cases, companies have introduced unconventional assessment tools, including games designed to evaluate risk-taking behaviour or employee monitoring systems tracking keystrokes and screen activity.


But Berg cautions that more data does not necessarily mean better insights.


“Data on screentime shouldn’t be equated with how well a person does their job,” the paper noted, warning against confusing digital activity with actual productivity or performance.


The research further points to concerns around algorithmic opacity and unintended bias. In one example cited in the paper, researchers found that a gender-neutral STEM job advertisement was disproportionately shown to men because the advertising algorithm learned it was cheaper to target male audiences, despite no intentional gender targeting.


According to Berg, many AI tools used in HR today are highly automated and often deployed through “off-the-shelf” systems that HR teams may not fully understand. As a result, organisations risk outsourcing critical people decisions to systems without fully understanding how those outcomes are generated.


Instead of blindly automating HR processes, the paper recommends deeper involvement of HR professionals and employees in designing, implementing, and monitoring AI systems.


The study points to examples where collaborative approaches delivered stronger outcomes. One multinational organisation reportedly spent two years refining an AI recruitment platform alongside HR teams before adopting a hybrid human-AI model with explainable decision-making. 


In another case, a telecommunications company co-designed a workforce scheduling system with field technicians, resulting in a 10% productivity improvement and a significant reduction in mental health-related absences.


For Berg, the future of AI in HR is not simply about adopting more technology. It is about ensuring systems are transparent, meaningful, and aligned with the realities of how people actually work.


The report concludes that meaningful stakeholder involvement may slow down implementation timelines, but it remains essential if organisations want AI systems that genuinely improve workplace outcomes rather than amplify existing flaws.
