Nearly 60% of Indian businesses confident in scaling AI responsibly have mature frameworks: Nasscom report

Jan 22, 2026

New Delhi [India], January 22: Responsible Artificial Intelligence (AI) is rapidly becoming a business imperative for Indian enterprises, moving beyond ethical intent to a strategic priority linked with trust, governance, and long-term value creation, according to Nasscom's State of Responsible AI in India 2025 report.
The report, unveiled at the Responsible Intelligence Confluence in New Delhi, reveals that nearly 60 per cent of organisations confident in scaling AI responsibly have already established mature Responsible AI (RAI) frameworks, highlighting a strong correlation between AI capability and responsible governance practices.
Based on a survey of 574 senior executives from large enterprises, startups, and small and medium enterprises (SMEs) conducted between October and November 2025, the study indicated a clear year-on-year improvement since 2023.
Around 30 per cent of Indian businesses now report mature RAI practices, while 45 per cent are actively implementing formal frameworks, signalling steady ecosystem-wide progress.
Large enterprises continue to lead in Responsible AI maturity, with 46 per cent reporting advanced frameworks, compared to 20 per cent among SMEs and 16 per cent among startups.
Despite the gap, Nasscom noted growing willingness among smaller firms to adopt and comply with responsible AI norms, reflecting increasing awareness and regulatory readiness across the ecosystem.
From an industry perspective, Banking, Financial Services and Insurance (BFSI) leads with 35 per cent maturity, followed by Technology, Media and Telecommunications (TMT) at 31 per cent, and healthcare at 18 per cent. Nearly half of businesses across these sectors are strengthening their RAI frameworks.
Sangeeta Gupta, Senior Vice President and Chief Strategy Officer at Nasscom, said responsible AI is now foundational to trust and accountability as AI systems become embedded in critical sectors such as finance, healthcare, and public services. She emphasised that businesses must move beyond compliance-led approaches and embed responsibility across the AI lifecycle to build sustainable and inclusive innovation.
The report highlighted workforce enablement as a major focus area, with nearly 90 per cent of organisations investing in AI sensitisation and training. Companies expressed the highest confidence in meeting data protection obligations.
Accountability structures are also evolving. While 48 per cent of organisations place responsibility for AI governance with the C-suite or board, 26 per cent now assign it to departmental heads. AI ethics boards and committees are also gaining traction, particularly among mature organisations, 65 per cent of which have established such bodies.
Despite progress, significant challenges persist. The most frequently reported AI risks include hallucinations (56 per cent), privacy violations (36 per cent), lack of explainability (35 per cent), and unintended bias or discrimination (29 per cent). Key barriers to effective RAI implementation include lack of high-quality data (43 per cent), regulatory uncertainty (20 per cent), and shortage of skilled personnel (15 per cent).
While regulatory uncertainty is a major concern for large enterprises and startups, SMEs cite high implementation costs as a critical constraint.
As AI systems grow more autonomous, the report noted that businesses with higher RAI maturity feel better prepared for emerging technologies such as Agentic AI. Nearly half of mature organisations believe their existing frameworks can address these risks; however, industry leaders caution that substantial updates to current frameworks will be required to manage the novel risks posed by autonomous systems.