Maryland Smith Research / March 14, 2025

Flipping the Deepfake Narrative

Smith Researchers Harness the Stigmatized Technology for High-Stakes Decision Making

Professors Siva Viswanathan and Balaji Padmanabhan, along with PhD student Yizhi Liu, at UMD’s Smith School developed a patent-pending deepfake method to detect and mitigate bias in decision-making. Their study shows how AI-generated facial images can improve fairness in hiring, healthcare and criminal justice.

Deepfake technology, long synonymous with misinformation and digital deception, has raised concerns about the erosion of truth in an AI-driven world. But a study from the University of Maryland’s Robert H. Smith School of Business flips the narrative in a groundbreaking way, introducing a patent-pending deepfake method that leverages AI-generated facial images to detect, measure and ultimately mitigate bias in high-stakes decision-making.

The method is described in “Deception to Perception: The Surprising Benefits of Deepfakes for Detecting, Measuring, and Mitigating Bias,” by Dean's Professor of Information Systems Siva Viswanathan with co-researchers Balaji Padmanabhan, director of Smith’s Center for Artificial Intelligence in Business, and information systems PhD student Yizhi Liu.

Viswanathan puts the finding in the context of scientific history. “With deepfakes, we approach this as another opportunity to repurpose a harm-inducing phenomenon for societal good,” he says. “A good example is the repurposing of the toxic mold that was killing livestock through cattle feed in the 1920s into warfarin, now used to treat or prevent blood clots and reduce stroke or heart attack risk.”

“Beyond ‘mitigating bias for fairness,’ we’re utilizing deepfake technology to help eliminate distorted results and wrong conclusions in areas such as hiring, criminal justice and healthcare, like when a physician assesses a patient’s pain level,” says Padmanabhan.

And pain assessment, Viswanathan adds, is an especially compelling application. “Research has long suggested that racial and age-based disparities affect medical professionals' evaluations of patient pain levels.”

The researchers generated deepfake images that retain the key facial action units used to compute the Prkachin and Solomon Pain Intensity (PSPI) score for assessing pain. Using those images, they tested whether subtle changes, such as altering a subject’s perceived race or age, would lead to different pain assessments.
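For readers interested in the mechanics, the PSPI score the researchers reference is conventionally computed as a sum of facial action-unit (AU) intensities. The sketch below shows that standard formulation with made-up AU values; it is an illustration only, not code from the study.

```python
# Standard Prkachin-Solomon Pain Intensity (PSPI) formulation, computed from
# FACS action-unit (AU) intensities. In practice the AU values would come from
# a human coder or an automated AU detector; the exact pipeline used in the
# study is not described in this article.

def pspi_score(au4: float, au6: float, au7: float,
               au9: float, au10: float, au43: float) -> float:
    """PSPI = AU4 + max(AU6, AU7) + max(AU9, AU10) + AU43.

    AU4 (brow lowering), AU6/AU7 (orbital tightening) and AU9/AU10
    (levator contraction) are intensity-coded 0-5; AU43 (eye closure)
    is binary 0/1, so the score ranges from 0 to 16.
    """
    return au4 + max(au6, au7) + max(au9, au10) + au43


# Example with hypothetical values: moderate brow lowering and cheek raising,
# slight upper-lip raise, eyes open.
print(pspi_score(au4=2, au6=3, au7=1, au9=0, au10=1, au43=0))  # prints 6
```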

“The results were striking,” Viswanathan says. “White patients were consistently rated as experiencing more pain than Black patients, and older individuals were perceived as suffering more than younger ones, despite the images being otherwise identical.”

So, beyond merely diagnosing bias, this research “takes an unprecedented step toward correcting it,” Viswanathan adds. “By integrating deepfake-enhanced datasets into AI training models, we show that machine learning systems can be recalibrated to produce a blueprint for reducing decision-compromising bias—not only in AI-assisted medical diagnostics, but also in criminal justice risk assessments and corporate hiring algorithms.”
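As an illustration of what “deepfake-enhanced datasets” could look like in practice, here is a minimal sketch of counterfactual data augmentation. It assumes a hypothetical generate_counterfactual face-editing function and is not the authors’ patent-pending pipeline; it only conveys the general idea of pairing each face with demographic-swapped variants that keep the same ground-truth label.

```python
from dataclasses import dataclass
from typing import Callable, List, Sequence


@dataclass
class Example:
    image: bytes        # encoded face image
    pain_label: float   # ground-truth pain intensity (e.g., a PSPI score)


def augment_with_counterfactuals(
    data: List[Example],
    generate_counterfactual: Callable[[bytes, str], bytes],
    attributes: Sequence[str] = ("perceived_race", "perceived_age"),
) -> List[Example]:
    """Add demographic-swapped deepfake variants that keep the original
    pain label, so a model trained on the result sees the same expression
    across different perceived demographics."""
    augmented = list(data)
    for ex in data:
        for attr in attributes:
            # generate_counterfactual is a hypothetical stand-in for the
            # face-editing model that alters only the named attribute.
            fake = generate_counterfactual(ex.image, attr)
            augmented.append(Example(image=fake, pain_label=ex.pain_label))
    return augmented
```

Training on the augmented set pushes a model toward giving the same assessment to the same expression regardless of perceived race or age, which is the recalibration the researchers describe.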

Regarding hiring in particular, the researchers say their work underscores how deepfake technology is emerging as critical to ensuring AI-driven decisions are more transparent, accountable, equitable and ultimately productive. In the corporate world, human bias can subconsciously seep into candidate vetting, with applicants’ faces featuring prominently in submitted video applications and preliminary online video screenings, says Padmanabhan. “So, a company hiring, say, a software developer or project manager, wants to control for that—to mitigate bias from interfering in making the ‘best business decision.’”

Guarding against such bias is also critical for legal practitioners hearing witness testimony through online interactions. That need, Padmanabhan adds, is a particular impetus for the researchers’ ongoing work on an AI system that incorporates audio along with video and images to create the relevant deepfakes for AI training models.

The work is a UMD Invention of the Year Award finalist, a rare recognition for business school research, Viswanathan says. And as AI continues to shape society, he adds, “this research signals a pivotal shift: Deepfakes are not just a threat to truth—they might just be the key to uncovering it.”

Read the study: “Deception to Perception: The Surprising Benefits of Deepfakes for Detecting, Measuring, and Mitigating Bias.”

Media Contact

Greg Muraski
Media Relations Manager
301-405-5283  
301-892-0973 Mobile
gmuraski@umd.edu 
