MeitY shows deep concern over deepfakes, DoCA comes out with dark patterns guidelines, MIB gets busy with a new broadcasting bill and CERT-In is now exempt from the Right to Information Act. Covering these issues, here is this month's techticker.
Flavor of the month for regulators? Deepfakes
A few weeks ago, a deepfake video of Indian actress Rashmika Mandanna went viral. Since then, other Indian actresses like Katrina Kaif, Kajol and, most recently, Alia Bhatt have been targets of similar deepfake videos. Even a video of Prime Minister Modi went viral as a deepfake (edit: it later turned out not to be one). These viral videos have prompted the government to step in. Ashwini Vaishnaw, Minister for Electronics and Information Technology, and Rajeev Chandrasekhar, Minister of State for Electronics and Information Technology, have both stated that the government is looking to bring in regulations on deepfakes immediately.
The 2010s had us saying ‘everything is a construct’. 2020 brought us ‘everything is cake’. In 2023, it's probably going to be ‘hey, that’s a deepfake.’ Sherlock Holmes said, “when you have eliminated the impossible, whatever remains, however improbable, must be the truth.” While a great rule of thumb for detective work, deepfakes defy this logic. Maybe we can go with ‘if it seems highly unlikely, it probably is’ as one (very unscientific) way of identifying deepfakes. As we grapple with what’s real and what’s not, deepfakes only add to our misery. Identifying and preventing their spread has everyone’s attention. So what’s all the noise about?
‘Deepfake’ is a blend of the words ‘deep learning’ and ‘fake’. Deep learning is a subset of machine learning in which AI systems learn complex patterns, typically from large data sets. The most common use-case for deepfakes is creating realistic-looking fake videos or images, that is, synthetic media. Synthetic media is already finding use in movies and TV shows, and education too is likely to benefit, where simulations can perhaps aid understanding. As with most AI technology though, synthetic media is a double-edged sword (check out more details on this theme from an AI safety and cybersecurity colloquium we hosted).

And it is the negatives of this technology that have become talking points most recently. Deepfake imagery provides fodder for misinformation and fake news, and has become a potent tool for political manipulation. For instance, in 2022 a deepfake video of Ukrainian president Volodymyr Zelenskyy asking his troops to surrender was released. Deepfakes of Russian president Vladimir Putin and former US president Donald Trump have also made their way around the internet. Naturally, deepfakes of political leaders have the potential to strike at the stability of any country. They can also influence election outcomes, thereby affecting how democracies function. And while people in the public spotlight fall prey to deepfakes, the harm is not limited to them. Given that deepfake AI is often used to create pornographic imagery, women and other marginalized identities are particularly vulnerable. Fundamentally (and perhaps reductively), deepfakes have the potential to erode trust in the digital ecosystem, with implications well beyond it.
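For readers curious about the mechanics: many face-swap deepfakes rest on a shared-encoder, twin-decoder autoencoder design, where one network compresses faces from either person into a common latent representation and a separate decoder per identity reconstructs faces from it. The toy sketch below (our illustration, not any specific tool's code; the layer sizes and random weights are arbitrary assumptions) only traces the data flow, with no actual training:

```python
import numpy as np

# Conceptual sketch of the shared-encoder / twin-decoder idea behind many
# face-swap deepfakes. Weights are random and untrained -- this only
# illustrates how data flows through such a model, not a working deepfake.

rng = np.random.default_rng(0)

def layer(x, w):
    # One fully-connected layer with a ReLU non-linearity.
    return np.maximum(0, x @ w)

# A 64x64 grayscale face, flattened to a 4096-dimensional vector.
face_a = rng.random(4096)

# Shared encoder: compresses any face to a 256-dimensional latent code
# capturing pose and expression.
w_enc = rng.random((4096, 256)) * 0.01
latent = layer(face_a, w_enc)

# Two decoders, one trained per identity (untrained here).
w_dec_a = rng.random((256, 4096)) * 0.01
w_dec_b = rng.random((256, 4096)) * 0.01

# The swap: feed person A's latent code through person B's decoder,
# so B's face is rendered with A's pose and expression.
swapped = layer(latent, w_dec_b)

print(latent.shape, swapped.shape)  # (256,) (4096,)
```

In a real system both decoders are trained for many iterations on thousands of images of each person, which is why deepfake quality tracks the amount of publicly available footage of the target.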
Which brings us to where we are. So far, India does not have any formal regulation of AI and, in particular, of deepfakes. While MeitY has been discussing the Digital India Bill, which is likely to regulate AI through the prism of user harm, deepfakes have not been central to the regulatory conversation for some time. In February 2023, MeitY published an advisory directing social media intermediaries to weed out deepfakes (we did a deep-dive on deepfakes then as well, check it out here). Since then, however, conversations on regulating deepfakes took a backseat, only to come roaring back into the public sphere now. Currently, MeitY is holding conversations with social media companies and internet intermediaries to understand and crystallize what the regulation should look like.
According to Minister Vaishnaw, these conversations have focused on four pillars - a) detecting deepfakes; b) preventing and reducing their virality; c) strengthening reporting mechanisms; and d) spreading awareness about the technology. It will be interesting to see whether MeitY locates its power to regulate deepfakes within the content moderation framework of the Information Technology Act, 2000 and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules 2021), or whether it will bring out a standalone regulation. In fact, just last month it was reported that the Central Government is considering enforcing user traceability requirements against WhatsApp over the circulation of multiple deepfake videos of politicians on its platform.
●Dark Patterns guidelines out: The Department of Consumer Affairs (DoCA) has notified the ‘Guidelines for Prevention and Regulation of Dark Patterns, 2023’. Dark patterns are deceptive practices in user interfaces and user experiences that nudge customers into purchases they never intended to make.
You have probably faced this - while shopping online, the price you saw when adding a product to your cart turns out lower than the amount you are eventually billed, items you never purchased get added to your bill, or you can’t get rid of that pesky travel insurance you never intended to buy with your flight ticket. The list goes on, but the DoCA has managed to crystallize 13 of these practices into its guidelines.
The guidelines follow a draft that was first published in September. The draft went through several rounds of consultations and a hackathon that DoCA organized with the Indian Institute of Technology (BHU), Varanasi. The notified guidelines identify a total of 13 dark pattern practices, whereas the September draft had 10; the additions are likely a result of the consultations. The specified list now recognizes false urgency, basket sneaking, confirm shaming, forced action, subscription trap, interface interference, bait and switch, drip pricing, disguised advertisement, nagging, trick question, SaaS billing and rogue malwares (check out our deep dive into what these practices mean here).
●Same but also different: The Ministry of Information and Broadcasting (MIB) published the draft Broadcasting Services (Regulation) Bill, 2023 on 10th November, 2023, and it is open for public comments till 9th December, 2023. The Bill marks a move to replace the Cable Television Networks (Regulation) Act, 1995 and provide a consolidated framework to regulate broadcasting services in India across cable, radio, satellite, terrestrial and internet broadcasting networks, including OTT broadcasting services (read our key takeaways from the Bill here). The Bill aims to bring parity between traditional broadcasting media and OTT platforms, subjecting both to a programme code and an advertisement code. It also requires OTTs to set up a ‘content evaluation committee’ that will function similarly to the censor board, and creates a three-tier regulatory mechanism similar to the one under the IT Rules 2021. First impressions suggest that a) this is MIB’s attempt to regulate OTTs more closely; and b) there is significant overlap with Part III of the IT Rules 2021. If the Bill becomes law, we are likely to see substantial conflicts between its functioning and that of the IT Rules.
●CERT-In exempt from the Right to Information Act: The Central Government recently exempted the Indian Computer Emergency Response Team (CERT-In) from the application of the Right to Information Act (RTI Act). CERT-In handles the country’s cyber security coordination and is the nodal agency for responding to cyber threats. It now joins 26 other intelligence and security agencies, such as the Intelligence Bureau and the Enforcement Directorate, that are exempt from the RTI Act. Given recent concerns over ransomware and malware attacks on public institutions such as the All India Institute of Medical Sciences, cybersecurity has been a hot-button issue. The distancing of CERT-In from public scrutiny through the RTI Act has drawn criticism, with critics noting that it will now be more difficult to hold the agency accountable.
●Rajeev Chandrasekhar invites the world to GPAI: The Global Partnership on Artificial Intelligence (GPAI) summit is happening in Delhi this December. Minister Chandrasekhar, following the UK's AI Safety Summit where the Bletchley Declaration was adopted, has invited global leaders to participate in the GPAI summit to work towards creating a global framework on AI safety. He mentioned that the GPAI summit will be used as a forum to continue deliberations on AI risks and to create appropriate guardrails for the technology. If the Delhi Declaration under India's G20 presidency is anything to go by, can we also expect a global consensus on governing AI following the GPAI summit?
Image credit: X (formerly known as Twitter)