A video clip of Jerry Seinfeld’s “cameo” in Quentin Tarantino’s Pulp Fiction has over 1.4 million views on YouTube. A confused and terrified Seinfeld takes aim and misses, while Samuel L. Jackson and John Travolta stand side by side pointing their guns at the comedian as the sitcom’s recognizable theme plays. The clip isn’t real. Seinfeld was never part of the film.
The convincing – or misleading – clip was put together with the help of artificial intelligence. Such videos, called deepfakes, are all over the Internet, and have in equal parts fooled, amused, and alarmed the world. This forgery on steroids is hard to spot, unless it is something obvious, like Tom Cruise recounting his meeting with former Soviet president Mikhail Gorbachev. What’s more, creating these digital effects requires no special schooling; the tools are widely available.
Besides the fun and games, deepfakes have become central to more nefarious activities, including online mis-/disinformation. In May 2022, scammers circulated a deepfake video of Tesla CEO Elon Musk endorsing a cryptocurrency. Policymakers across the world are keeping a close watch. That’s where today’s edition of Tech Ticker comes in.
We look at how the Indian government plans to address deepfake-related harms. Later, we delve into updates from telecom, competition law, cyberlaw and foreign policy.
Spotlight – IT Ministry gets real about fakes
In last month’s edition, we covered the IT Ministry’s proposal to moderate ‘false’ or ‘fake’ information online by amending the 2021 Intermediary Rules. This month, the Ministry continued its crusade against this content category, albeit through an advisory rather than a regulatory amendment, targeted at a specific type of false information: deepfake imagery.
In recent years, regulation of deepfakes has become a particularly tricky content moderation issue across the globe, owing to the widespread misuse of this manipulated media technique in perpetuating online harms. These include generating revenge pornography, manipulating domestic and international politics through morphed footage, and committing fraud, among others. So concerning are the real and potential threats posed by deepfakes that jurisdictions like the EU and UK have already developed comprehensive policy proposals (here and here).
The IT Ministry’s advisory appears to be the Indian government’s initial attempt to weed out deepfakes. Reportedly, the advisory was issued directly to the Chief Compliance Officers of ‘social media intermediaries’ (SMIs) like Facebook, WhatsApp, Instagram and Twitter, a deviation from the Ministry’s past practice of publishing content moderation advisories on its website. The advisory requires SMIs to adopt ‘reasonable and practicable’ measures to take down deepfake imagery from their platforms within 24 hours of receiving user complaints.
The IT Ministry’s advisory raises certain questions: what exactly is deepfake imagery? What is the legal obligation to take down deepfakes? And most importantly, what are the possible approaches to regulating deepfakes? We take a stab at answering these questions below.
A deep-dive into deepfakes
Deepfakes are a type of synthetic media in which a person in an image or video is swapped with the likeness of another individual. The underlying technology can also be used to create videos where any kind of speech is overlaid on a person’s face. Remember that massively viral video of world leaders from Trump to Putin singing John Lennon’s Imagine? Deepfake. The term itself is derived from ‘deep learning’, a type of artificial intelligence. More specifically, this manipulated media is typically created using ‘generative adversarial networks’ (GANs), in which two machine learning models are trained against each other: a generator, which produces synthetic images or videos after processing a database of training data, and a discriminator, which tries to tell the generator’s output apart from real examples. With each round, the generator gets better at fooling the discriminator, until its output looks convincingly real. The goal of GANs is to provide machines with something akin to human imagination. To learn more about generative artificial intelligence and its policy and ethical implications, check out this piece authored by Ikigai’s Aman Taneja and Vijayant Singh.
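For the technically curious, the adversarial game described above can be sketched in a few dozen lines of Python. The toy below is purely illustrative: a two-parameter ‘generator’ learns to mimic a simple bell-curve data distribution by playing against a logistic-regression ‘discriminator’. Real GANs use deep neural networks trained on images and video; every distribution, parameter and learning rate here is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples from a normal distribution N(4, 1.25).
# Generator: G(z) = a*z + b, with z ~ N(0, 1).
# Discriminator: D(x) = sigmoid(w*x + c), its guess that x is real.
a, b = 1.0, 0.0        # generator parameters
w, c = 0.0, 0.0        # discriminator parameters
lr, batch = 0.05, 64

def fake_batch(n):
    z = rng.normal(0.0, 1.0, n)
    return a * z + b, z

start_gap = abs(fake_batch(1000)[0].mean() - 4.0)

for step in range(2000):
    # --- discriminator update: push D(real) -> 1 and D(fake) -> 0 ---
    xr = rng.normal(4.0, 1.25, batch)
    xf, _ = fake_batch(batch)
    dr, df = sigmoid(w * xr + c), sigmoid(w * xf + c)
    grad_w = (-(1 - dr) * xr + df * xf).mean()
    grad_c = (-(1 - dr) + df).mean()
    w -= lr * grad_w
    c -= lr * grad_c

    # --- generator update: push D(fake) -> 1 (fool the discriminator) ---
    xf, z = fake_batch(batch)
    df = sigmoid(w * xf + c)
    # d(-log D(xf))/dxf = -(1 - D(xf)) * w, chained through xf = a*z + b
    grad_a = (-(1 - df) * w * z).mean()
    grad_b = (-(1 - df) * w).mean()
    a -= lr * grad_a
    b -= lr * grad_b

end_gap = abs(fake_batch(1000)[0].mean() - 4.0)
print(f"mean gap to real data: before={start_gap:.2f}, after={end_gap:.2f}")
```

After training, the generator’s samples sit much closer to the real distribution than they started, which is the whole point of the adversarial setup: neither model is ever shown ‘the answer’, only its opponent’s verdicts.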
Despite its immense potential for misuse, deepfake technology also has positive applications. For instance, it could transform advertising by allowing brands to create hyper-personalized content for target audiences while significantly lowering the cost of video campaigns. Leading FMCG companies like Cadbury and PepsiCo are already leveraging the technology to drive targeted advertising campaigns. You probably saw Shah Rukh Khan endorsing multiple local stores in one such ad campaign in 2021. The technology can also catalyze creativity and innovation in the film and entertainment industry by reducing post-production and re-shoot costs, and even bringing beloved stars back to life on celluloid (ethically, of course)!
Intermediary impostor syndrome – what is the actual obligation for SMIs?
Under its blocking and takedown powers, the government can mandate SMIs to disable unlawful content through reasoned written orders. In addition, the 2021 Intermediary Rules cast a due diligence obligation on all intermediaries to take ‘reasonable and practicable measures’ to disable access to sexually explicit and impersonating content (including artificially morphed images) within 24 hours of being notified through user complaints. Although deepfake imagery has not been called out as a standalone content category under these rules, it could qualify as ‘artificially morphed images.’ This provision can be read in consonance with the recent advisory.
Approaches to deepfakes regulation
It is unclear if the advisory, read with intermediaries’ due diligence requirements, accounts for the risks or harms arising out of deepfake content. In other words, it is not certain whether an SMI must take down all deepfake imagery flagged by users, or only content that results in harm.
In contrast, other jurisdictions are considering risk-based approaches to deepfakes regulation. The EU Parliament, in its report ‘Tackling deepfakes in European policy’, categorized deepfakes as a ‘dual-use technology’, capable of numerous malicious as well as beneficial applications. The report suggested a range of policy measures geared towards mitigating harms, including creating legal obligations on deepfake technology providers to appropriately label content, establishing takedown procedures, limiting the decision-making power of online platforms, institutionalizing support for victims, and investing in media literacy, among others. The UK is also contemplating a harms-focused approach to the deepfakes problem via planned amendments to its Online Safety Bill, focusing on criminalizing non-consensual sexually explicit and manipulated media created through deepfake technology.
On the technological front, as part of its responsible AI work, Intel has developed a real-time deepfake detection system called ‘FakeCatcher’, which reportedly detects manipulated videos with 96% accuracy. Much like deepfake technology itself, FakeCatcher is powered by deep-learning techniques. It is just one of many solutions being developed by the private sector.
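Detection systems like FakeCatcher reportedly look for subtle physiological cues, such as the faint colour changes blood flow produces in a real face, that deepfakes struggle to reproduce. The toy sketch below is loosely inspired by that idea and is not Intel’s actual pipeline: it scores a per-frame ‘skin tone’ signal by how much of its spectral power falls in the human heart-rate band. The signals here are synthetic stand-ins, and every value is fabricated for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
fps, seconds = 30, 10
t = np.arange(fps * seconds) / fps  # timestamps for 10 seconds of video

def pulse_band_score(signal, fps, lo=0.7, hi=3.0):
    """Fraction of spectral power in the human heart-rate band (~42-180 bpm)."""
    signal = signal - signal.mean()
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= lo) & (freqs <= hi)
    return power[band].sum() / power[1:].sum()  # skip the DC bin

# Synthetic stand-ins for a per-frame face-region colour signal:
# a "real" face carries a faint ~1.2 Hz (72 bpm) pulse; a "fake" one does not.
real_like = 0.5 * np.sin(2 * np.pi * 1.2 * t) + rng.normal(0, 0.5, t.size)
fake_like = rng.normal(0, 0.7, t.size)

print("real-like score:", pulse_band_score(real_like, fps))
print("fake-like score:", pulse_band_score(fake_like, fps))
```

The ‘real’ signal scores markedly higher because its power concentrates at the pulse frequency, while the ‘fake’ signal spreads power evenly across the spectrum. Production systems pair signal cues like this with trained deep-learning classifiers rather than a fixed threshold.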
In case you missed it
- Digital India Bill consultations: In an interview, MoS IT Rajeev Chandrasekhar announced that the IT Ministry will commence extensive stakeholder consultations on the much-anticipated ‘Digital India Bill’ in March 2023. The Bill will replace the over two-decade-old IT Act, and may introduce comprehensive provisions on online harms, cybercrimes, and non-personal data, among others. The first round of consultations will take place in New Delhi.
- Committee on a Digital Competition Act: In recent years, there has been significant global attention on competition in digital markets. In India, the Standing Committee on Finance highlighted and elaborated upon these concerns in its ‘Big Tech Report’ and recommended enacting a Digital Competition Act to impose ex-ante regulations on larger players. Against this backdrop, on 6 February 2023, the corporate affairs ministry constituted a 16-member inter-ministerial committee to examine the need for a Digital Competition Act (DCA) and to frame a draft DCA within a period of three months.
- First Digital Economy Working Group (DEWG) meeting in Lucknow: As part of India’s Presidency of the G20, the IT Ministry hosted the first DEWG meeting in Lucknow from 13 to 15 February 2023. The DEWG was formed in 2017 during Germany’s G20 presidency, with the objective of implementing a secure, interconnected, and inclusive digital economy. The three-day event saw workshops and discussions on themes relevant to the DEWG’s agenda, including implementing digital IDs as part of digital public infrastructure (DPI), cyber security solutions for MSMEs, using DPI for sustainable development goals, and the use of geo-spatial technologies for infrastructure and product development in the digital economy. The side events saw the participation of G20 members, industry players and international organizations.
What’s more, as Knowledge Partner for the second priority area, ‘Cyber Security in the Digital Economy’, Ikigai Law had a front-row seat to all the action. It was an immense pleasure to support the IT Ministry and develop the deliverables under the cyber security priority. Rutuja Pol and Tanea Bandyopadhyay from Ikigai were also part of the Working Group’s deliberations held in Lucknow on 14 and 15 February 2023.
- TRAI’s consultation paper on DCIP authorization: Through a recent consultation paper, TRAI has proposed introducing a new category of infrastructure provider, the Digital Connectivity Infrastructure Provider (DCIP), within the ‘unified licensing’ framework. A DCIP could provide both active and passive telecom infrastructure (like dark fiber, feeder cables and antennas) to licensed entities. The consultation aims to address high infrastructure costs and inefficient resource utilization. Stakeholders can submit written comments on the paper until 9 March 2023, and counter-comments by 23 March 2023.
- IT Ministry bans digital-lending apps: On 5 February 2023, the IT Ministry issued orders under Section 69A of the IT Act to block certain digital lending apps (DLAs). The primary reasons for these orders include suboptimal consumer practices like borrower harassment, misuse of users’ data, and the influence of Chinese investors and companies on these apps. Later in the month, the IT Ministry revoked its blocking orders for certain DLAs like LazyPay and Kissht. We’ve unpacked this block-unblock saga in this month’s edition of FinTales – don’t miss it!
- DCGI show-cause notices to e-pharmacies: The Drugs Controller General of India (DCGI) – India’s apex drug regulator – issued show-cause notices to top e-pharmacy companies, asking them to explain why coercive action should not be taken against them for selling drugs without a license. Reportedly, the notices state that distribution of drugs without the requisite licenses can pose serious public health risks, such as indiscriminate drug use and self-medication. Tata 1mg, Amazon and Flipkart are among the over 20 e-pharmacies that have received these notices.
- EU-India Trade and Technology Council (TTC) formed: The TTC was created on 6 February 2023. It will focus on fostering the mutual interests of the EU and India in digital governance – including regulation of emerging tech like AI, quantum computing and cloud systems; resilient value chains; clean energy technologies and more. The TTC will also promote cooperation between EU and Indian incubators and startups.
That’s it from us, folks! See you next month. If you enjoyed reading Tech Ticker, please do share it with your friends and professional networks.