IndiaAI Mission Prioritizes Innovation While Defining AI Harm Regulations: MeitY Additional Secretary Abhishek Singh


Civic Data Labs’ stakeholder consultation on ParakhAI, one of the eight projects under the Safe & Trusted AI Pillar of the IndiaAI mission, delved into critical issues surrounding the growing use of AI models and algorithms in delivering citizen services.

Speaking at the event, Abhishek Singh, Additional Secretary, Ministry of Electronics and Information Technology (MeitY) and CEO of IndiaAI Mission, stated, “Our focus is more on promoting innovation and ensuring regulation which limits the harm that can be caused to users. And while we define harm, we have to see in what ways it can be regulated.”

“These biases in technology have existed for long and what we need to do is to not only be aware of these biases, errors, and hallucinations, whatever we may call it, but at the same time, think of what tools can be built in order to ensure that the risk for such errors and biases is minimised,” he added.

Currently, the IndiaAI Mission allocates only Rs 20.46 crore (0.2% of the total Rs 10,371.92 crore) to the Safe & Trusted AI pillar.

At a MediaNama discussion, Kaustubha Kalidindi, Legal Counsel at Tattle, had highlighted that if we had a strong trust and safety ecosystem, along with incentives for organisations to develop secure tools, the risks from sandboxes might be less concerning.

AI Adaptability to Legal Provisions

Singh stated that significant challenges surface when models fail to account for the legal provisions of different countries. If a user makes a query within a specific geographic area and the model has permission to access their location, the model should provide a legally valid response for that region.

“So, while we say that all solutions are legally compliant, and they also claim this on public forums, very often, due to the way they have been designed and trained, they are not. What we need to do is figure out how to audit them and ensure there are tools in place to avoid such mistakes,” he stated.

“The Safe & Trusted pillar of IndiaAI mission is to try to ensure that we have the tools which help us ensure that the solutions that have been deployed here are compliant,” he added.

Singh explained that if someone in India asks a model, “I want to find the gender of my unborn child,” the bot typically responds by saying that after 20 weeks of pregnancy, sonography can determine the gender. Singh questioned whether this was the right response, as gender determination of the foetus is illegal in India.
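To make the idea concrete, here is a minimal sketch of the kind of jurisdiction-aware check Singh describes, assuming the deploying application knows the user's region. The policy table, topic labels, and keyword classifier are hypothetical illustrations, not an actual IndiaAI specification.

```python
# A toy jurisdiction-aware guardrail. The policy table, topic labels, and
# classifier below are hypothetical illustrations, not an IndiaAI spec.

BLOCKED_TOPICS_BY_REGION = {
    # Prenatal sex determination is prohibited in India (PC-PNDT Act),
    # so a compliant assistant should refuse such queries there.
    "IN": {"prenatal_sex_determination"},
}

def classify_topic(query: str) -> str:
    """Keyword stand-in for what would be an ML classifier in practice."""
    q = query.lower()
    if "gender" in q and ("unborn" in q or "foetus" in q or "fetus" in q):
        return "prenatal_sex_determination"
    return "general"

def check_query(query: str, region: str) -> str:
    topic = classify_topic(query)
    if topic in BLOCKED_TOPICS_BY_REGION.get(region, set()):
        return "REFUSE: not legally permitted in the user's jurisdiction"
    return "ALLOW"

print(check_query("I want to find the gender of my unborn child", "IN"))
# -> REFUSE: not legally permitted in the user's jurisdiction
```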

How Do We Address Data Bias and Representation?

Singh stated that addressing the data aspect is the most crucial factor in developing responsible, ethical, and effective AI solutions. Highlighting that the most widely used AI models have predominantly been trained on Western datasets, he emphasised the importance of making data more representative.

He also noted that these models have already been trained on almost all the data scraped from the internet.

“These tools make mistakes because they have been trained on data which is non-representative,” he stated.

“It’s not that they have chosen only English Wikipedia pages. They have chosen all Wikipedia pages whether they are in Hindi or Malayalam or Vietnamese, or whichever language. But the problem [emerges] when we look at, say for example, Wikipedia pages. The number of Wikipedia pages in English runs into millions or maybe more. But if you look into the number of Wikipedia pages in Hindi, the last count, somebody told me, was around 2,60,000 pages. And we look at languages like Assamese or Maithili, the numbers will be in a few thousands. So, because the data is just not there, it will have biases and then it will rely on translations and translations can be erroneous…,” he further argued.
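A toy calculation makes the skew concrete. The Hindi figure below is the one Singh cited; the English and Assamese counts are order-of-magnitude assumptions, not official statistics.

```python
# Rough per-language shares of a hypothetical Wikipedia-based corpus.
# Hindi count: figure cited by Singh; English and Assamese counts are
# order-of-magnitude assumptions for illustration only.
corpus_pages = {
    "English": 6_900_000,   # "millions or maybe more"
    "Hindi": 260_000,       # figure cited by Singh
    "Assamese": 5_000,      # "a few thousands" (assumed)
}
total = sum(corpus_pages.values())
for lang, pages in corpus_pages.items():
    print(f"{lang:>8}: {pages / total:7.3%} of the corpus")
# English ends up above 96%, so a model trained on such a corpus sees
# very little Hindi and almost no Assamese.
```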

At the consultation, Jhalak Kakkar from the Centre for Communication Governance at National Law University also pointed out how datasets often tend to exclude marginalised communities, such as people with disabilities and gig/platform workers.

During MediaNama’s discussion on Governing the AI Ecosystem, experts also highlighted the need for Indian language datasets that reflect local culture and are accessible to non-English speakers.

How is India Regulating AI?

Singh stated that the first step is to create the right framework to ensure that all solutions follow the governance and regulatory requirements mandated in the country.

However, he reiterated that the government has taken a different approach to AI regulation compared to the European Union or the United States.

Previously, MeitY Minister Ashwini Vaishnaw had stated that India may not lean heavily on regulation the way the US and Europe do. Moreover, the government has been fluctuating in its stance on AI regulation, oscillating between self-regulation and considering a legal framework.


Most recently, MeitY issued a report on the development of AI Governance guidelines.

At the stakeholder consultation, speakers highlighted that most states do not focus on implementing risk mitigation practices.

Auditing as a Political and Social Exercise

Additionally, speakers and participants emphasised that while auditing is a technical process, it is also inherently political and social in nature.

Audits are inherently political and social because the decisions they inform affect a range of stakeholders, entangling power dynamics with public policy. They can also reveal ethical, cultural, and regulatory issues with significant social implications, particularly for marginalised communities.

Arjun Venkatraman from the Bill and Melinda Gates Foundation also noted that “high-risk systems combined with low resources is a recipe for disaster” as inadequate resources may hinder the ability to effectively manage, monitor, and mitigate potential risks, leading to serious long-term consequences.

Voluntary Adoption of Safe Practices

This tension between innovation and regulation becomes all the more evident as existing AI models have already leveraged vast amounts of personal and publicly available data for training.

Introducing regulations at this stage also raises the question: how do we reconcile the advantages gained by early adopters with the need for responsible governance moving forward?

Singh outlined that once the Safe & Trusted AI tools are developed, the mission will make them available for all developers and deployers to use. He explained that, with the India Datasets Platform, they aim to provide the datasets required for developing AI-based solutions in any sector.

“But at the same time, on the same platform, we will also provide all the necessary tools. So, if you want to test your solution or check whether you are conforming to privacy preservation norms, randomisation norms, or algorithmic bias norms, you can use these tools to test your solution. Along with that, models will also be available to help you train your own models,” Singh added.
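Singh did not specify what these conformance checks would look like. As one illustration, a common algorithmic-bias test such a toolkit might include is a demographic parity check; the function, data, and threshold below are hypothetical.

```python
from collections import defaultdict

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-outcome rates across groups."""
    pos, tot = defaultdict(int), defaultdict(int)
    for y_hat, g in zip(predictions, groups):
        tot[g] += 1
        pos[g] += y_hat
    rates = {g: pos[g] / tot[g] for g in tot}
    return max(rates.values()) - min(rates.values()), rates

# Toy audit: group "A" receives positive outcomes far more often than "B".
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap, rates = demographic_parity_gap(preds, groups)
print(rates, f"gap={gap:.2f}")  # flag the model if the gap exceeds a set threshold
```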

However, the issue is that these measures depend on companies, including startups that may face resource constraints, voluntarily adopting these practices.

During MediaNama’s ‘Governing the AI Ecosystem’ discussion, some speakers argued that self-regulation has proven ineffective, as its failure across various domains demonstrates, and that the government should regulate AI to prevent harm, while others noted that self-regulation often prioritises managing public image over fostering genuine ethical commitments.

Participatory Auditing Frameworks

ParakhAI seeks to institutionalise responsible AI by enhancing explainability and interpretability. Explainability is a key component in ensuring that AI actors commit to transparency and responsible disclosure of AI systems; to achieve this, they should provide meaningful, context-appropriate information that keeps pace with advancements in the field.
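As a sketch of what such disclosure could look like in practice, consider a simple linear eligibility score whose per-feature contributions are reported alongside the decision; the weights and features here are invented for illustration and are not part of ParakhAI.

```python
# Hypothetical linear eligibility score: disclosing each feature's signed
# contribution is one concrete form of "meaningful, context-appropriate
# information" about a decision. Weights and features are invented.
weights = {"income": 0.5, "tenure_years": 0.3, "arrears": -0.8}
applicant = {"income": 1.2, "tenure_years": 2.0, "arrears": 1.0}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

print(f"score = {score:+.2f}")
for feature, c in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {c:+.2f}")  # largest drivers of the decision first
```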

Participants also highlighted the significant ways AI models directly or indirectly affect benefits, punishments, and opportunities for individuals and groups. Of particular concern was the role of high-risk Algorithmic Decision-Making Systems (ADS) and their potential to undermine citizens’ fundamental rights. Could participatory auditing frameworks offer a solution to these challenges by ensuring transparency, accountability, and equity in AI-driven systems?
