Our key takeaways from the Colloquium on AI Safety and Cybersecurity


Ikigai Law, in collaboration with Microsoft and NASSCOM, organized a Colloquium on AI Safety and Cybersecurity on 6th June 2023 in New Delhi.

The event brought together government officials, academics, technical experts, lawyers and policy professionals to examine issues at the intersection of AI safety and cybersecurity from a 360° lens.

The event comprised a 30-minute fireside chat, a 45-minute panel discussion and a 20-minute question-and-answer session with the audience.

The fireside chat was on the theme of “Roadmap ahead for India in AI – for the government and private sector.” We were privileged to have with us Mr. Abhishek Singh, President & CEO, NeGD, Ministry of Electronics and Information Technology, Government of India, and Mr. Antony Cook, Vice President and Deputy General Counsel, Customer and Partner Solutions, Microsoft, for the chat, which was moderated by Ikigai Law founder Mr. Anirudh Rastogi. Mr. Singh provided valuable insights on the Government’s vision and focus areas for AI in the coming few years. Mr. Cook spoke about the role of the private sector, particularly Microsoft, in using AI in cybersecurity and the challenges that come with it.

The panel discussion had a mix of technical experts, academics, industry representatives and former government officials who delved into issues at the intersection of AI and cybersecurity, including AI being a double-edged sword as both a potential solution and a threat to cybersecurity; the potential for government to deploy AI in cybersecurity; and organizational preparedness to deploy AI in a safe and responsible manner, among others. We were privileged to have the following as speakers for the panel discussion:

  • Ms. Geeta Gurnani, CTO & Technical Sales Leader, India/South Asia, IBM Technology;
  • Mr. Debayan Gupta, Assistant Professor of Computer Science, Ashoka University;
  • Mr. Nitendra Rajput, Senior Vice President & Head – AI Garage at Mastercard;
  • Mr. Prashant Deo, Senior Information Security Consultant at TCS;
  • Mr. Ankit Bose, Head of NASSCOM AI; and
  • Mr. Atul Tripathi, Ex AI & Big Data Consultant, National Security Coordination Secretariat (NSCS) (Prime Minister’s Office).

The discussion was moderated by Ms. Nehaa Chaudhari, Partner at Ikigai Law.

The floor was also opened to the audience, which comprised people working in the tech policy landscape, including policy officials from leading technology companies and start-ups, and members of civil society, including think tanks. A myriad of important questions were put to the panellists – on the best approach for India to regulate AI, the dangers of replicability of AI software, the digital skilling required to use AI effectively, and the safety protocols needed for public-facing AI technologies.

Our key takeaways from the various discussions at the event are as follows:

1. AI is both a solution and a threat, but the benefits outweigh the risks: The speakers acknowledged that while the benefits of AI in all areas, particularly cybersecurity, are immense, the risks cannot be ignored. The panel spoke about how AI-based tools can enhance cybersecurity by strengthening security systems and by helping detect abnormal activity to predict attacks. Several examples were discussed, such as the deployment of AI-based tools in the Russia-Ukraine war to mitigate damage from cybersecurity threats and attacks, and the use of synthetic data to reduce the training time for security models. Some speakers also pointed to the dangers associated with AI. The risks discussed included the ability of AI to generate synthetic media about politicians and proliferate misinformation, with massive ramifications for political discourse, and the heightened risks arising from the replicability of AI software, among others. The speakers acknowledged that the technology has strengthened over the years from a needle to a sword and can do both good and harm. However, there was consensus that while AI, like any other technology, carries risks, the benefits it promises far outweigh the risks anticipated.

2. AI needs to be for all: The government’s vision is to make AI accessible to all and develop it as an inclusive technology. The government seeks to democratize the use of AI and digitally empower people. Compute infrastructure should not be concentrated in the hands of a few entities but should be made more equitable. One suggestion was that the APIs for AI-based technologies should be made open source to make them accessible to all, provided they do not involve proprietary data.

3. The approach towards AI needs to be multi-stakeholder driven: The panelists agreed that a collaborative approach to the use and governance of AI needs to be taken by industry, government and technical experts. One suggestion was a three-pronged approach consisting of (i) an industry benchmark, (ii) a governance framework and (iii) agile product development. The panel discussed the importance of having a framework comprising all three elements: a benchmark for the industry to follow while deploying AI-based tools, complemented by a government framework with strong guardrails that keeps pace with innovations in AI technology. The panelists also emphasized the importance of creating public-private partnerships in AI between industry and government, similar to the approach taken for digital public infrastructure.

4. Government’s roadmap for AI: The government’s strategy for AI rests on four limbs: (i) a Data Management Office for data governance, which seeks to make non-personal data accessible to everyone; (ii) utilizing the power of AI to solve 10+ societal problems, including in healthcare and agriculture; (iii) a focus on skilling through STEM talent development, preparing the workforce for an AI-driven job transition through upskilling and digital reskilling programs, and creating jobs in the AI space; and (iv) responsible use of AI, to ensure that AI is developed and deployed in a manner that respects privacy, fairness, transparency and accountability. The speakers discussed the need for capacity building and regular audits by the government. The panelists also discussed one of the biggest challenges the government currently faces, i.e., the availability of trusted data sets, particularly in local languages, to train AI algorithms. In light of this, the government has started initiatives such as “Bhasha Daan” to crowdsource multi-lingual datasets by asking people to contribute data in their local languages.

5. There should be self-regulation along with hard limits set by the government: The panelists agreed that self-regulation could be a suitable option for AI but should be supported by hard limits set by the government, such as purpose limitations. AI regulation needs to be rooted in innovation and based on global consensus. China’s new draft regulations for generative AI were also discussed, and the panel agreed that the approach taken by China cannot be adopted in India. India’s approach should be based on openness, yet have sufficient safeguards in place to avert the risks associated with the technology. The panel discussed the need to rethink India’s fundamental rights in relation to emerging technologies such as AI. The discussion revolved around how our current understanding of fundamental rights, such as the right to free speech, needs to evolve given the capability of AI technology to influence public discourse.

6. Critical role of the private sector in responsible deployment of AI: The speakers agreed that commitment is required from senior leadership for a company to use AI responsibly. One suggestion was that companies should have an ethics board to govern the use of AI. The panelists also stressed the principles of transparency, accountability, fairness and security while deploying AI, as well as the importance of identifying risks early on. Another suggestion was to have a human in the loop to identify errors in highly capable AI models that interact with critical infrastructure.

We would like to thank the Microsoft CELA team and NASSCOM AI for their significant contributions in bringing this event together. We also thank all the speakers who took the time to share their invaluable insights with us on issues surrounding AI and cybersecurity.

This post was authored by Pallavi Sondhi, Senior Associate, with inputs from Rutuja Pol, Principal Associate, and Aman Taneja, Principal Associate.

For more on the topic, please reach out to us at contact@ikigailaw.com.
