“People analytics” – traceable to 1911’s The Principles of Scientific Management, which sought to apply engineering methods to managing people – has exploded with advances in computing power, statistical methods and artificial intelligence (AI).
Managers now use algorithms to measure productivity and to make decisions about hiring, compensation, promotion and training opportunities, and firms are using the same technology to identify and close pay gaps across gender, race and other demographic categories. The promise extends to every phase of the “HR pipeline,” from recruitment and compensation to promotion, training and evaluation.
“And yet, for all the promise of people analytics tools, they may also lead managers seriously astray,” says Maryland Smith’s Margrét Bjarnadóttir, with co-authors – Smith PhD David Anderson ’13 (Villanova) and David Gaddis Ross (University of Florida). “People analytics, especially based on AI, is an incredibly powerful tool that has become indispensable in modern HR. But quantitative models are intended to assist, not replace, human judgment.”
Writing in Harvard Business Review, they describe three key points to keep in mind when applying AI and other analytics tools to the HR pipeline:
1. Data models are likely to perform well for individuals in majority demographic groups but worse for less well-represented groups.
2. There is no such thing as a truly “race-blind” or “gender-blind” model, and explicitly omitting race or gender from a model can even make things worse.
3. If demographic categories aren’t evenly distributed in your organization (and in most organizations they aren’t), even carefully built models will not lead to equal outcomes across groups, as the sketch after this list illustrates.
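To make the third point concrete, here is a minimal Python sketch; the score distributions, group sizes and cutoff are invented for illustration and do not come from the authors. Even when a single cutoff is applied evenhandedly to everyone, groups whose score distributions differ will be selected at different rates.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "fit scores" from a model: group A is larger and, in this
# made-up organization, skews slightly higher on the scored attributes.
group_a = rng.normal(0.55, 0.15, size=900)  # majority group
group_b = rng.normal(0.50, 0.15, size=100)  # minority group

threshold = 0.60  # one company-wide cutoff for getting an interview
print("selection rate A:", (group_a > threshold).mean())
print("selection rate B:", (group_b > threshold).mean())
# Equal treatment (the same threshold for everyone) still produces unequal
# outcomes whenever the underlying score distributions differ.
```

Running this prints a noticeably higher selection rate for the majority group, even though the model and threshold are identical for both.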
Accordingly, firms should pay close attention to who is represented in the data when creating and monitoring AI applications. “More pointedly, look at how the makeup of training data may be warping the AI’s recommendation in one direction or another,” the authors write.
A “bias dashboard” can help in this respect by separately analyzing how a people analytics tool performs across demographic groups. In the case of hiring, the dashboard might summarize the tool’s accuracy and the types of mistakes it makes, as well as the fraction of each group who landed an interview and, eventually, a job.
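One way such a dashboard could be built is sketched below in Python; the DataFrame layout and column names (`group`, `actual`, `predicted`) are hypothetical stand-ins, not a specification from the article.

```python
import pandas as pd

def bias_dashboard(df: pd.DataFrame) -> pd.DataFrame:
    """Summarize a hiring tool's behavior separately for each demographic group.

    Assumes hypothetical columns:
      group     -- demographic category of the candidate
      actual    -- 1 if the candidate was ultimately hired, else 0
      predicted -- 1 if the tool recommended an interview, else 0
    """
    rows = []
    for name, g in df.groupby("group"):
        tp = ((g["predicted"] == 1) & (g["actual"] == 1)).sum()
        fp = ((g["predicted"] == 1) & (g["actual"] == 0)).sum()
        fn = ((g["predicted"] == 0) & (g["actual"] == 1)).sum()
        tn = ((g["predicted"] == 0) & (g["actual"] == 0)).sum()
        rows.append({
            "group": name,
            "n": len(g),
            "accuracy": (tp + tn) / len(g),               # how often the tool is right
            "false_positive_rate": fp / max(fp + tn, 1),  # wrongly recommended
            "false_negative_rate": fn / max(fn + tp, 1),  # wrongly passed over
            "interview_rate": (g["predicted"] == 1).mean(),  # fraction recommended
            "hire_rate": (g["actual"] == 1).mean(),          # fraction hired
        })
    return pd.DataFrame(rows)
```

Reviewing these rows side by side makes it easy to spot a group for which the tool is less accurate or makes a different mix of mistakes.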
Managers also can explicitly test for bias. One way is to exclude a particular demographic variable, such as gender, when training the AI-based tool, but then explicitly include that variable in a subsequent analysis of outcomes. “If gender is highly correlated with outcomes – for example, if one gender is disproportionately likely to be recommended for a raise – that is a sign that the AI tool might be implicitly incorporating gender in an undesirable way,” the authors write. “It may be that the tool disproportionately identified women as candidates for raises because women tend to be underpaid in your organization. If so, the AI tool is helping you solve an important problem. But it could also be that the AI tool is reinforcing an existing bias. Further investigation will be required to determine the underlying cause.”
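A test along these lines might look like the following sketch. The data are entirely synthetic and the column names invented; the simulated pay gap exists only to show how the audit surfaces a gender-correlated recommendation pattern.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Synthetic employee records; every column name here is hypothetical.
gender = rng.choice(["F", "M"], size=n)
tenure = rng.integers(0, 20, size=n)
perf = rng.normal(3.5, 0.5, size=n)

# Expected pay (in $000s) given tenure and performance, with a simulated
# gap: women are paid about $4k less for the same profile.
expected = 50 + 2 * tenure + 5 * perf
salary = expected - np.where(gender == "F", 4, 0) + rng.normal(0, 2, n)

# Historical raises went mostly to employees paid well below expectation.
got_raise = ((expected - salary) + rng.normal(0, 1, n) > 2).astype(int)

df = pd.DataFrame({"gender": gender, "tenure": tenure, "perf": perf,
                   "salary": salary, "got_raise": got_raise})

# Step 1: train the raise model WITHOUT gender among its features.
features = ["tenure", "perf", "salary"]
model = LogisticRegression(max_iter=1000).fit(df[features], df["got_raise"])
df["recommended"] = model.predict(df[features])

# Step 2: explicitly bring gender back in to audit the recommendations.
print(df.groupby("gender")["recommended"].mean())
# A large gap between groups signals the tool is implicitly picking up
# gender -- here because women tend to be underpaid in the synthetic data.
```

In this toy setup the audit flags women as disproportionately recommended for raises, mirroring the authors’ example: the pattern could reflect a real pay gap the tool is helping to close, or a bias it is reinforcing, and only further investigation can tell which.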
Broadly, the authors write, to get the most out of people analytics tools, managers should consistently monitor how the application is working in real time, what explicit and implicit criteria are being used to make decisions and train the tool, and whether outcomes affect different groups differently in unintended ways. “By asking the right questions of the data, the model, the decisions, and the software vendors, managers can successfully harness the power of people analytics to build the high-achieving, equitable workplaces of tomorrow.”
Read “Using People Analytics to Build an Equitable Workplace” at Harvard Business Review.