“How can you anticipate and mitigate some of the harmful consequences of technology?” asked Madhulika Srikumar, the program lead at Partnership on AI, during a session of the Observer Research Foundation’s CyFy event held on October 21. The session focused on the roles of multiple stakeholders in curbing dark patterns and, more broadly, on curbing harmful impacts of technology while retaining space for innovation.
According to Srikumar, regulating AI research is important because AI can also help detect dark patterns. Dark patterns embedded in privacy policies, terms and conditions, or user interfaces use discreet tricks to get you to consent to things you otherwise might not. To address this, governments are looking to sandboxes, while the private sector is expected to set up internal processes.
Besides Srikumar, the panelists included Deepika Raman of India’s IT Ministry (MEITY), Venkatesh Krishnamoorthy of BSA | The Software Alliance, Regina Mihindukulasuriya of The Print, and moderator Sunaina Kumar, a Senior Fellow at ORF.
How can tech research be regulated?
“Research often travels faster than even products do. So it’s even harder to kind of inform regulators to get them to regulate research,” Srikumar said on top-down regulation of research. She also said that research should be open and free of any regulatory capture. Flagging that AI research was being commercialised today at speeds not seen earlier, Srikumar suggested that AI researchers add a line in their proposals and papers on the potential misuse of such technology and the guardrails needed to prevent the same.
The government view on such regulation
“…Innovations don’t work in silos. It’s not a health care innovation oftentimes it’s health insurance so it needs to work alongside the financial sector as well. So what really it means for multiple parties to come together and work with respect to the government,” Raman said, citing healthcare as an example of the consultations taken up by the government on technology regulation. As part of her work with the International Innovation Corps at MEITY, Raman said her team frequently interacts with multiple stakeholders and takes that input back to the ministry to decide on interventions.
However, she also said that the government was exploring sandboxes as a way to promote innovation. “There’s a lot of cognisance about the fact that innovation has sort of superseded regulation and there’s also an understanding that we don’t want regulatory hindrances towards innovation either,” she said, pointing to sandboxes by the Securities and Exchange Board of India (SEBI), Insurance Regulatory and Development Authority of India (IRDAI), and National Health Authority (NHA).
What does the private sector think of this?
On dark patterns, Krishnamoorthy recommended that the private sector set up processes to identify dark patterns, assess and flag their risks, and build awareness about them. He also recommended that diverse teams from across functions, such as policy, legal, and design, collaborate on thinking about AI and dark pattern regulation. “As many teams come together, what we are seeing consistently is many more questions being asked within the companies, many more solutions coming up,” Krishnamoorthy said.
Partially concurring with him, Raman and Srikumar raised questions about what such diverse participation would look like and who would be involved.
“Are we bringing diversity just in terms of gender, class, caste? Or are we talking about the discipline in which we’re trained? Are we having anthropologists in the conversation of (data as) property versus (data as) rights? Or is it only technologists? So I think what’s very helpful is to bring multidisciplinary teams and representative teams to think about these conversations,” Raman said.
Srikumar said that interdisciplinary expertise must be part of such a conversation. “You can consider social sciences as another sounding board, perhaps think through just the history of technology in different societies because many of these versions of these challenges have been kind of observed in the past as well,” she said.
Other conversations on dark patterns and AI regulation – PrivacyNama 2021
AI regulation and dark patterns were also on the cards at PrivacyNama 2021, a global conference on privacy regulations held by MediaNama on October 6 and 7.
“I’m probably going to say something, which is really provocative. But I think consent is the biggest dark pattern of them all,” Beni Chugh of Dvara Research said in a session focused on bodies and data protection. She further elaborated: “I mean consent as it exists, it is not free, it is not a contract amongst equals it…it does not give you complete information, it can be changed unilaterally, if we were to apply the lens of contracts to consent, it would fail tremendously, right? …So I don’t think that consent really needs dark patterns.”
Further into the discussion, Jhalak Kakkar of the Centre for Communication Governance (CCG-NLUD) raised concerns around the use of AI – its bias and lack of diversity. “AI systems are being developed in the western world and very often just being translated fairly blindly into other contexts, right. They are being translated from high resource context to low resource context,” she said.
Also read:
- #PrivacyNama2021: How to think about consent in biometric data collection
- #NAMAprivacy: Challenges with consent; the Right to Privacy judgment
- #PrivacyNama2021: Taking stock of biometric data collection and regulation
I cover health and education technology for MediaNama. Reach me at anushka@medianama.com