Shadows and Light: The Thrilling Odyssey of Trusted AI
IIT Jodhpur’s Richa Singh on the paradox of AI applications
March 4, 2025 | Keerthivas S, Asian College of Journalism
Richa Singh, a Professor of Computer Science and Engineering from IIT Jodhpur, spoke about the positive and negative aspects of Artificial Intelligence using case studies from her group’s research. (Photo: IMSc Media)
On a pleasant Sunday evening, the Music Academy in the heart of Chennai hosted the 8th edition of its annual public event “Science at the Sabha”, organised by The Institute of Mathematical Sciences (IMSc), Chennai. The venue was alive with activity as members of the general public streamed in to hear a four-part science discourse.
The audience in the auditorium was a diverse mix of people. There was the chatter of excited school kids, probably having their first taste of a science talk, accompanied by their parents. There were also academics scattered among members of the general public.
At 4:00 pm the lights were dimmed and the stage lit up, marking the start of the event. The first couple of hours rolled by as Annagiri Sumana gave a fascinating talk on nest relocation in ants, followed by UK Anandavardhanan’s talk on congruent numbers, which drew enthusiastic responses from young members of the audience.
After a break for High Tea, there was a buzz among the audience as the lights dimmed and a burst of brilliant blue filled the stage before the third talk began. Richa Singh, a Professor of Computer Science and Engineering at the Indian Institute of Technology (IIT) Jodhpur, strode onto the stage to deliver her talk on “Shadows and Light: The Thrilling Odyssey of Trusted AI”.
She first introduced foundation models in machine learning, which are of two types -
Discriminative models - These models classify data into different categories.
Generative models - These models generate new content (language, audio, visual, multimodal) based on similar training data.
Richa spoke in detail about Generative AI (GenAI), such as ChatGPT and other Large Language Models (LLMs), which generates new content by learning patterns from large amounts of pre-existing data. Different types of data, ranging from text and audio to images and videos, can be created using these models, marking a significant leap in the capabilities of AI.
She then described the GenAI paradox: the same technology that delivers striking benefits also carries serious risks.
She pointed out the positives of GenAI to be -
Creativity and Productivity - Enhancing creativity by constantly churning out new ideas and increasing productivity.
Democratizing access - Making information and technology more accessible to people, especially the marginalized or disadvantaged.
Social good applications
Richa discussed a few case studies to demonstrate the beneficial aspects of AI.
When her group began its research on face recognition for people with facial injuries involving up to 50-70% disfigurement, existing models fared very poorly. Richa’s group therefore developed models with much better accuracy at matching images of people with facial injuries against their earlier, uninjured photographs.
Her group put this model to use in identifying victims of the 2023 Odisha train collision, which claimed hundreds of lives. Using the face-recognition AI models developed by her research group, over 120 unclaimed bodies, often with facial injuries, were matched with photos in a database of identification documents within 18 hours.
In another example, Richa highlighted how her research group assisted law enforcement by using an AI model to generate a realistic image of a suspect from a police sketch. These examples showcase the power of AI in helping government agencies.
Richa elaborated on the negative aspects of AI including -
Bias and fairness concerns - Bias can find its way into GenAI via skewed real-world training data that mirrors existing societal inequities and discrimination.
Potential for misuse - Usage of AI with malintent to harm people. For example, the creation of deepfakes that could victimize people or spread of misinformation among the public.
Opacity and explainability issues - The lack of transparency about how an AI model arrives at the output it generates.
Illustrating the difficulty of recognising deepfakes, Richa played a few audio recordings and asked the audience to tell the fake voices from the real ones. Using case studies from her group’s work on deepfake detection, she highlighted how AI tools can be misused to spread misinformation and victimize vulnerable sections of the public.
She discussed how AI models developed in other countries perform poorly on Indian data because of biases in model training. One example was the low accuracy of deepfake detection for Indian voices: these models do not recognize the diversity of Indian accents, as their training data come predominantly from Western countries.
Richa also spoke about how negation prompts to GenAI fail to give the desired results. These prompts describe elements one wants excluded from the output but, as her group discovered, they often produce unexpected results. The group is currently working on improving the performance of AI models on negation prompts.
Richa concluded her talk with the simple yet powerful quote “GenAI is not a magic wand: handle with care”, urging the audience to harness the power of GenAI judiciously.
Edited by Bharti Dharapuram
--
Richa's talk is available to watch online on the Matscience YouTube channel.
Keerthivas S can be contacted at skeerthivas [at] gmail [dot] com.