Responsible AI at Scale: Governance, Integrity, and Cyber Readiness for a Changing World

Panel Discussion

Feb 18, 2026
10:30 am
to
11:30 am
West Wing Room 4 B, Bharat Mandapam, New Delhi

Maj Vineet Kumar, Founder and Global President, CyberPeace, Context Setting and Moderation

Maj Vineet Kumar opened the session by framing responsible AI as a question not only of governance, but also of cyber readiness, societal trust, and human safety. He positioned the discussion within the broader objectives of the IndiaAI Impact Summit 2026, namely to ensure that AI adoption strengthens economic development while remaining anchored in social good. He highlighted CyberPeace’s role at the intersection of policy, capacity building, and community engagement, and underlined that the purpose of the panel was to bring government, industry, academia, and civil society into one integrated conversation.

Throughout the session, he guided the discussion toward practical realities, including how AI is changing the nature of cyber threats, how institutions are coping with these shifts, and why inclusion, safety, and trust must remain central to AI at scale. Toward the conclusion, he invited each panelist to reflect on one priority action for improving cyber readiness of AI systems across sectors.

Dr Subi Chaturvedi, Global SVP and Chief Corporate Affairs and Public Policy Officer, InMobi

Dr Subi Chaturvedi spoke from the perspective of industry, policy, and social impact, and anchored her remarks in the lived realities of women and children in the digital age. Drawing from personal experience, she highlighted how online abuse, grooming, and cyberbullying often remain hidden within households and communities, and stressed that child protection in the digital age is not only a legal or technical issue, but also a social and educational responsibility. She emphasised the importance of teaching children about privacy, consent, and digital boundaries from a very early age, and illustrated how awareness can empower even young children to assert their rights and protect their personal space. She also shared a personal incident involving a child’s misuse of social media to demonstrate how lack of cyber hygiene and supervision can quickly escalate into reputational, legal, and emotional consequences for families.

From an industry and policy lens, she argued that responsible AI frameworks cannot simply be copied from Western models, especially when countries like India must solve simultaneously for access, scale, and inclusion. While privacy by design remains essential, she noted that platforms in the Global South often serve as the first point of contact with the internet for millions of users, which places a special responsibility on companies to embed trust, safety, and integrity into their products by default. She spoke about the need to counter misinformation and disinformation through a combination of technology, institutional frameworks, and public awareness, and underlined that integrity must remain at the centre of every conversation on responsible AI.

Dr Chaturvedi also focused strongly on participation and representation in global governance processes. She shared her experience from the United Nations Internet Governance Forum, where she advocated for stronger and more structured participation of young people and women in decision-making bodies, including dedicated representation within the multi-stakeholder advisory processes that guide the United Nations Secretary General. She noted that sustained engagement in standards bodies and global policy forums is essential, since this is where rules, norms, and technical standards are actually shaped.

She concluded by calling on young professionals, especially young women, to actively participate in consultations, standards organisations, and international forums, stressing that meaningful inclusion is not only about being invited to the table, but also about having the confidence and capacity to shape the agenda and build the table itself.

Ms Anna Sytnik, Associate Professor, St Petersburg State University

Ms Anna Sytnik brought in the academic and nonprofit perspective, focusing on the challenge of speed in AI development. She observed that while large technology companies can transform quickly, universities, research institutions, and smaller organisations often struggle to keep pace, even when they are willing to do so.

She argued that the central challenge is how to scale impact without losing quality, legitimacy, and trust. According to her, one effective approach is to focus on three pillars: knowledge, community, and continuous updates. She shared examples of building educational ecosystems that include textbooks, research clubs, summer schools, and collaborative programs to support learners and institutions in the age of AI.

She also stressed the importance of partnerships, both with technology companies and across borders, since meaningful engagement with those who build AI systems is essential to understand real world challenges and to design relevant academic and social responses.

Ms Carly Ramsey, Director and Head of Public Policy APJC, Cloudflare

Ms Carly Ramsey spoke from the perspective of a global digital infrastructure company that operates between users, customers, and the internet. She explained that Cloudflare has observed a sharp rise in cyber attacks, including a significant increase in large-scale DDoS attacks, and that AI is accelerating both the volume and sophistication of these threats.

She noted that while AI is helping defenders absorb and mitigate attacks, it is also empowering attackers to scale and automate malicious activity. This makes it essential to build security into AI systems from the very beginning, rather than attempting to add it later. She cautioned against repeating the mistakes of the early internet, which was not designed with security principles at its core.

She also highlighted emerging concerns around agentic AI, where autonomous systems may operate with access to critical assets, raising complex questions of accountability, authentication, and control. Finally, she called for global collaboration on standards and policy, warning that fragmented national approaches would create serious challenges for both security and innovation.

Mr Jay Bavisi, Founder, Group President and Chairman, EC-Council

Mr Jay Bavisi placed the current AI moment in historical context by comparing it to the early days of cybersecurity. He noted that while many countries already have AI frameworks, the more serious gap lies in human capability to implement, defend, and govern these systems. He pointed out that the world already faces a significant shortage of cybersecurity professionals, even before factoring in the additional demands created by AI.

He explained that AI will replace some jobs, but will also create new roles that require higher level skills and new forms of expertise. In his view, the central challenge is preparing human beings to work with, manage, and govern AI responsibly. He emphasised that India, with its strong technical talent base, has a unique opportunity to build a globally relevant workforce that can support AI adoption, defence, and governance across the world.

He concluded by stating that the impact of AI, whether positive or negative, ultimately depends on human decisions, training systems, and institutional readiness.

Mr Beenu Arora, Co-Founder and Chief Executive Officer, Cyble

Mr Beenu Arora provided a frontline perspective from the field of threat intelligence. He described how AI has industrialised cybercrime, making attacks more scalable, more targeted, and more sophisticated. He shared examples of deepfake-based fraud, voice-cloning scams, and highly contextual phishing campaigns crafted using data gathered by AI agents from across the internet.

He explained that organisations are now facing attacks that combine multiple vulnerabilities and techniques, with AI being used for reconnaissance, vulnerability discovery, and social engineering. He also noted that AI is increasingly becoming a cognitive layer within enterprises, yet investments in protecting this layer remain insufficient.

He warned that financial losses are no longer limited to small scale scams, but now include major corporate losses caused by impersonation and manipulation in digital meetings and communications. He called for stronger public-private collaboration and more transparent disclosure frameworks to help both government and industry respond more effectively to these risks.

Lt Gen Rajesh Pant, Former National Cyber Security Coordinator, Government of India

Lt Gen Rajesh Pant offered a strategic and national security perspective on responsible AI at scale. He observed that the next two years will be critical in shaping India’s AI trajectory over the next two decades, particularly in the context of the national vision for a developed India. He noted that India already possesses key advantages, including digital public infrastructure, democratic institutions, scale, and strong technical talent.

He explained that AI is now being used both for attack and for defence, creating a situation where AI increasingly confronts AI in cyberspace. He outlined a five-pillar approach to AI governance, focusing on safety, security by design, integrity of data and information, accountability through transparency and oversight, and inclusiveness, especially in a multilingual and diverse society like India.

He concluded by stressing that responsible AI at scale must be built through a balanced approach that combines innovation with strong governance, institutional readiness, and social inclusion.

Key Recommendations from the Discussion

  1. Integrate AI safety and cybersecurity by design into all major AI deployments.
    Government and industry should mandate that AI systems incorporate security, safety, and integrity controls from the design stage, rather than treating them as add-ons. This includes clear accountability mechanisms, auditability, and safeguards for autonomous or agentic systems.

  2. Invest at scale in workforce development for AI adoption, defence, and governance.
    A national and sector-wide push is needed to build structured, measurable, and certified skill pathways for professionals who will design, deploy, secure, and regulate AI systems. This should cover government, industry, academia, and civil society.

  3. Strengthen child safety and gender-sensitive protections in digital and AI policies.
    Policies and platforms should explicitly address online abuse, grooming, cyberbullying, and technology-mediated violence. This includes digital literacy from an early age, support systems for victims, and stronger platform responsibilities for prevention and response.

  4. Enhance public-private collaboration and transparent incident reporting.
    Given the scale and speed of AI-enabled threats, closer cooperation between government, industry, and research institutions is essential. Transparent and responsible disclosure frameworks should be strengthened to improve collective situational awareness and response.

  5. Promote inclusion and representation in standards and governance forums.
    India should actively support greater participation of youth, women, and experts from diverse sectors in international standards bodies and policy platforms, including United Nations related processes, where AI norms and technical standards are being shaped.

  6. Leverage India’s digital public infrastructure to build inclusive AI models.
    India should use its existing digital platforms and public infrastructure to pilot and scale AI use cases that are secure, inclusive, multilingual, and accessible, particularly for first time users and underserved communities.

  7. Adopt a holistic governance framework focused on safety, security, integrity, accountability, and inclusion.
    The panel recommended a balanced and comprehensive approach to AI governance that aligns innovation with trust, protects societal interests, and ensures that the benefits of AI are widely shared.

Conclusion

Moderated by Maj Vineet Kumar, Founder and Global President, CyberPeace, the panel demonstrated that responsible AI is not only a technological or regulatory challenge, but also a social, institutional, and human capital challenge. Each panelist, from their respective domain, reinforced the importance of safety, trust, inclusion, and preparedness as AI scales across society. The discussion clearly positioned India as having both the responsibility and the opportunity to shape globally relevant, inclusive, and trustworthy models of AI governance and cyber readiness.

Speakers
Lt Gen (Dr.) Rajesh Pant (Retd), PVSM, AVSM, VSM
Dr Subi Chaturvedi

You're invited! Join hands with the CyberPeace movement and register for our upcoming event.

Agenda
Registration begins at 09:00 AM
10:00 AM to 10:10 AM
Welcome Address and Opening Remarks
Lt Gen (Dr.) Rajesh Pant PVSM, AVSM, VSM (Retd)
Ex National Cyber Security Coordinator
Prime Minister’s Office, Government of India
10:10 AM to 10:20 AM
Address
Prof. Rajan Bose
Director IIT Delhi
10:20 AM to 10:25 AM
Industry Address
Dr. Subi Chaturvedi
Global Senior Vice President & Chief Corporate Affairs & Public Policy Officer
InMobi Group
10:25 AM to 10:30 AM
Address
Professor Sanjay Jha
Director of Research and Innovation, School of Computer Science and Engineering
UNSW, Sydney
10:30 AM to 10:35 AM
Address
Ms. Pooja Kinger
Homeland Security Investigations
US Embassy
10:35 AM to 10:40 AM
Government Address
Dr. Gaurav Gupta
Additional Director / Scientist 'E'
Ministry of Electronics & Information Technology (MeitY), Government of India
10:40 AM to 10:45 AM
Survivor Video
10:45 AM to 11:45 AM
PANEL 1
Emerging Technologies and Vulnerable Populations: A Security-by-Design Approach
Mr. Samiran Gupta
Vice President, Stakeholder Engagement and Managing Director, Asia Pacific
Internet Corporation for Assigned Names and Numbers
Professor Sanjay Jha
Director of Research and Innovation
School of Computer Science and Engineering UNSW, Sydney
Prof Anjali Kaushik
Professor, Former Dean, and Chair, CoE on Digital Economy and Cyber Security (DECCS),
Management Development Institute, Gurgaon
Dr. Shruti Mantri
Associate Director
Institute of Data Sciences, Indian School of Business, Hyderabad
Moderator
Maj Gen (Dr) Ripin Bakshi AVSM, VSM (Retd)
Senior Fellow
Center for Land Warfare Studies (CLAWS)
11:45 AM to 12:00 PM
Tea / Coffee Break
12:00 PM to 12:15 PM
Paper Presentation, GD Goenka
12:15 PM to 12:25 PM
Launch of Report and Unveiling of the Digital Forensics Magazine
12:25 PM to 12:35 PM
Debriefing of the Report: Fact-Checking India: Identifying the Spread of Fake News and Policy Recommendations for Combating Misinformation
Dr. Shruti Mantri
Associate Director
Institute of Data Sciences, Indian School of Business, Hyderabad
12:35 PM to 12:45 PM
Key Highlights of the Study: Unmasking the Digital Deception: Advancements in Tackling Misinformation, Deepfakes & AI Generated Fakes
Prof Anjali Kaushik
Professor, Former Dean, and Chair, CoE on Digital Economy and Cyber Security (DECCS),
Management Development Institute, Gurgaon
12:45 PM to 1:00 PM
Keynote Address: The Cornerstones of Trust and Safety in Digital Environments
Smt. Rekha Sharma
Member of Parliament
Rajya Sabha
1:00 PM to 2:00 PM
Networking and Lunch
2:00 PM to 3:15 PM
PANEL 2
Risk Mitigation in Digital Environments: Elevating User Grievance Redressal Mechanisms and Trust-Building in the Age of Emerging Technologies
Dr. Pavan Duggal
Advocate
Supreme Court of India
Mr. Bhajan Poonia
CTO
OLX India
Dr. Rakesh Maheshwari
Former Sr. Director and Group Coordinator, Cyber Laws and Data Governance,
Ministry of Electronics and Information Technology, Government of India
Mr. Sudhir Sharma
Sr Manager, Product Management, GTM Support Operations
Google Singapore
Dr. Aparajita Bhatt
Associate Professor of Law & Director, Center for Cyber Laws
National Law University, Delhi
Moderator
Mr. Pradyot Chandra Haldar
President, Policy Perspective Foundation (PPF)
Former Director, Intelligence Bureau, Government of India
3:15 PM to 3:30 PM
Tea Break
3:30 PM to 4:30 PM
Awards and Honors
CyberPeace Honors
eRaksha Winners
CyberPeace Corps Volunteers
4:30 PM to 5:00 PM
Valedictory session
Mr. Suresh Yadhav
Senior Director (AI), Trade, Oceans and Natural Resources Directorate, Commonwealth Secretariat
Major Vineet Kumar
Founder and Global President CyberPeace
Agenda
Mitigating AI Risks & AI Safety Roundtable
OFFICIAL PRE-SUMMIT EVENT OF THE AI IMPACT SUMMIT 2026
18:00–18:05
Welcome & Context Setting
Opening remarks
18:05–18:25
Key Address
18:25–18:30
Announcement
Announcement of the Global iSAFE Hackathon - Secure AI for Everyone (SAFE)
18:30–19:45
Mapping the Challenges
Roundtable focused on identifying top risks:
• Agentic AI Security
• Mitigating the Risk of AI
Discussing the Solutions
Exchange of perspectives from participants on:
• Watermarking and provenance systems
• AI safety engineering
• Red-teaming and adversarial testing
• Cross-sector collaboration and data transparency
• Privacy implications of autonomous AI systems
• How consent, data trails, and decision accountability evolve when AI acts on behalf of humans
• Technical interventions: sandboxing, explainability logs, data-use transparency
• Speakers from GDC, Academia and Startups
19:40–19:55
Collaborative Actions & Redress Frameworks
19:55–20:00
Summary & Commitments