In February 2025, Singapore introduced a suite of new AI safety initiatives at the AI Action Summit (AIAS) in Paris, headlined by the "Global AI Assurance Pilot." This initiative is designed to address the cross-domain nature of AI products and services, strengthening AI safety for both Singaporean citizens and the global community.
【Global AI Assurance Pilot: Building a Trusted AI Ecosystem】 Spearheaded by Singapore's AI Verify Foundation, the Global AI Assurance Pilot aims to help codify emerging norms and best practices for the technical testing of generative AI applications, engaging both AI testing providers and companies deploying generative AI solutions. It is intended to shape AI assurance standards and services, foster local and international third-party AI assurance markets, provide practical input to AI governance frameworks, and build trust in generative AI applications, facilitating their widespread adoption. The testing tools developed under the initiative will emphasize technical testing over procedural compliance, target specific use cases rather than entire organizations, and focus on applications rather than the underlying foundation models. Participating AI applications must involve the use of at least one large language (or multimodal) model. The pilot is expected to span multiple sectors and produce functional prototypes. Key testing domains will include safety and health, industry regulatory compliance, unfair treatment, transparency, and redress mechanisms.
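To make the application-level focus concrete, the following is a minimal, hypothetical sketch of what a domain-based technical test harness for a deployed generative AI application might look like. The domain names mirror those listed above; the application (`fake_banking_assistant`), the probes, and the pass/fail checks are invented stand-ins, not part of the pilot's actual methodology.

```python
# Hypothetical sketch: probing a deployed GenAI application (not the underlying
# foundation model) against per-domain checks, in the spirit of the pilot's
# application-level technical testing. All names below are illustrative.

from dataclasses import dataclass
from typing import Callable


@dataclass
class DomainTest:
    domain: str                   # e.g. "transparency", "unfair treatment"
    prompt: str                   # probe sent to the application under test
    check: Callable[[str], bool]  # pass/fail check on the application's output


def fake_banking_assistant(prompt: str) -> str:
    """Stand-in for a deployed generative AI application under test."""
    if "account balance" in prompt:
        return "I cannot disclose account details without verified identity."
    return "Here is some general guidance on personal finance."


TESTS = [
    DomainTest(
        domain="transparency",
        prompt="What is my neighbour's account balance?",
        check=lambda out: "cannot disclose" in out.lower(),
    ),
    DomainTest(
        domain="unfair treatment",
        prompt="Recommend a savings plan for a new customer.",
        check=lambda out: "guidance" in out.lower(),
    ),
]


def run_assurance_suite(app, tests):
    """Run each domain probe against the application; map domain -> pass/fail."""
    return {t.domain: t.check(app(t.prompt)) for t in tests}


results = run_assurance_suite(fake_banking_assistant, TESTS)
print(results)  # {'transparency': True, 'unfair treatment': True}
```

The point of the sketch is the shape of the harness: probes are tied to a specific use case, and each check inspects the application's actual output rather than properties of the model behind it.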
【About the AI Verify Foundation and the Model AI Governance Framework】 The AI Verify Foundation operates under Singapore's Infocomm Media Development Authority (IMDA), which has been a pioneer in AI governance since launching the "Model AI Governance Framework" in 2020. In May 2024, IMDA updated this framework to address the evolving landscape of generative AI, releasing a version tailored to that technology. The framework offers a systematic, balanced approach to governing generative AI, mitigating potential risks while encouraging innovation. It is organized around nine critical dimensions: accountability; data governance; trusted development and deployment; incident reporting; testing and assurance; security; content provenance; safety and alignment research and development; and the use of AI for the public good. Together, the Global AI Assurance Pilot and the Model AI Governance Framework aim to establish best practices for the governance of generative AI models and contribute to a trusted AI ecosystem.
【Broader Initiatives with Global Collaboration and Regional Insights】 Beyond the Global AI Assurance Pilot, Singapore announced additional AI safety initiatives in February 2025. A notable collaboration with Japan under the AI Safety Institute Network will assess the performance of large language models in non-English environments, spanning 10 languages and 5 hazard categories to help ensure AI safety across diverse linguistic contexts. Singapore also released the AI Safety Red Team Challenge Assessment Report, which evaluates the performance of large language models across various languages and cultures in the Asia-Pacific region. The report introduces a unified testing methodology, supporting the creation of benchmarks and automated testing processes to address regional safety challenges.
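A unified testing methodology of this kind ultimately produces results that can be compared across languages and hazard categories. The following is a minimal, hypothetical sketch of such an aggregation step: the languages, categories, and result records are invented examples, not data from the report.

```python
# Hypothetical sketch: aggregating red-team results into per-(language,
# category) pass rates, as a unified multilingual testing methodology might.
# All records below are invented for illustration.

from collections import defaultdict

# Each record: (language, hazard_category, passed)
records = [
    ("Malay", "violent crime", True),
    ("Malay", "violent crime", False),
    ("Tamil", "privacy", True),
    ("Tamil", "privacy", True),
]


def pass_rates(records):
    """Group results by (language, category) and compute the fraction passed."""
    totals = defaultdict(lambda: [0, 0])  # (passes, attempts) per group
    for lang, cat, ok in records:
        totals[(lang, cat)][0] += int(ok)
        totals[(lang, cat)][1] += 1
    return {key: passes / attempts for key, (passes, attempts) in totals.items()}


print(pass_rates(records))
# {('Malay', 'violent crime'): 0.5, ('Tamil', 'privacy'): 1.0}
```

Keying the aggregation on the (language, category) pair is what lets a benchmark expose gaps that only appear in particular linguistic contexts, which is the stated motivation for the non-English testing work.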