IIT Madras hosted a two-day Conclave on ‘Safe and Trusted AI’ on December 10 and 11, 2025. The event, organized by the Centre for Responsible AI (CeRAI) at the Wadhwani School of Data Science and Artificial Intelligence (WSAI), focused on establishing AI safety, developing governance frameworks, and fostering open collaboration for AI resources.
Leaders from government, industry, academia, and global AI institutions participated. Discussions centered on creating an AI safety commons and recommending AI governance guidelines specifically for the Global South. The conclave also served as preparatory work for the India AI Impact Summit 2026.
Key Objectives of the Conclave
The IIT Madras Conclave addressed the urgent need for safe and reliable artificial intelligence systems. Discussions aimed to strengthen global capacity in responsible AI, with a primary goal of ensuring trustworthiness, reliability, and safety in AI development and deployment.
A central topic was the establishment of an AI safety commons: a shared pool of essential resources including datasets, AI tools, performance benchmarks, and governance protocols. Sharing these assets helps promote consistent safety standards across different AI projects and developers.
Participants also focused on developing specific AI governance guidelines. These frameworks aim to suit the unique contexts and needs of countries in the Global South. The goal is to provide clear pathways for responsible AI development worldwide.
Global Context and India’s Role
The two-day event followed several international initiatives. These included the UK AI Safety Summit, the AI Seoul Summit, the France AI Action Summit, and the Global AI Summit on Africa. The IIT Madras conclave contributed to this ongoing global dialogue.
The discussions were a critical part of preparations for the upcoming India AI Impact Summit 2026, which will mark a historic first: the first global AI summit hosted in the Global South. The insights gained from the IIT Madras conclave are expected to inform the Safe and Trusted AI Working Group’s contributions to this major event.
An official press release stated that these efforts aim to strengthen India’s leadership. The country seeks to shape trusted, responsible, and future-ready AI systems. This positions India as a key player in global AI governance.
Leadership Insights and Participation
TRB Rajaa, Minister for Industries, Investment Promotions, and Commerce, Government of Tamil Nadu, inaugurated the conclave. Senior leaders from various sectors attended. These included representatives from government bodies, industry leaders, academic institutions, civil society organizations, and global AI institutions.
Speakers at the event highlighted the growing role of artificial intelligence in decision-making across many sectors. They emphasized the need for practical mechanisms to translate responsible AI principles into everyday practice.
V Kamakoti, Director of IIT Madras, underscored the necessity of such conferences. He stated, “AI has reached a stage where it is becoming more and more pervasive into everyone’s life.” Kamakoti noted that AI is now tasked with assisting in intricate activities, such as providing suggestions for health and education, and pointed to the widespread use of AI co-pilots in decision-making.
Looking Forward to India AI Impact Summit 2026
The conclave’s activities included keynote speeches, panel discussions, and closed-door sessions of a dedicated working group on safe and trusted AI. These sessions gathered insights that will directly contribute to the agenda and outcomes of the India AI Impact Summit 2026.
The IIT Madras event reinforces India’s commitment to advancing AI responsibly. It also highlights the nation’s efforts to ensure that AI technologies benefit all, especially within the Global South. The upcoming India AI Impact Summit 2026 will build upon these foundational discussions.