
Latest News

Korea Conducts Safety Evaluation of AI Models to Enhance Security in Korean-Language Application Environments

The Ministry of Science and ICT (MSIT), together with the AI Safety Institute (AISI) and the Telecommunications Technology Association (TTA), announced the results on December 30, 2025. This is the first time Korea has evaluated AI safety using a dataset built entirely by domestic research teams. Conducted under the framework of Korea’s Basic Act on Artificial Intelligence, the evaluation is intended to help companies strengthen the safety of their AI models. The model under evaluation was Kakao’s Kanana Essence 1.5, which handled a range of risk scenarios with solid technical stability and was also benchmarked against overseas models of similar size.

The key tool was the AssurAI benchmark dataset, created specifically for the Korean-language environment and jointly developed by KAIST, TTA, and other experts. Existing safety benchmarks are mostly English-based and often miss Korean linguistic nuance and cultural context; AssurAI addresses this gap. The dataset is multimodal, spanning text, images, video, and audio, and contains 11,480 high-quality examples. Risks are grouped into six main categories, subdivided into 35 specific risk factors: harmful and violent content, interpersonal harm, sensitive and adult content, misinformation and manipulation, illegal and unethical activities, and socioeconomic and cognitive risks. To ensure quality, AssurAI combined expert-guided seed data with large-scale crowdsourcing; each item was labeled by three independent annotators, and a multidisciplinary team conducted repeated red-teaming reviews.
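The dataset structure described above can be illustrated with a minimal sketch. This is a hypothetical schema for illustration only, not the released AssurAI format: the field names, the `risk_factor` value, and the Korean example prompt are assumptions; the six category names, the four modalities, and the three-annotator labeling come from the description above.

```python
from collections import Counter
from dataclasses import dataclass

# The six top-level risk categories described for AssurAI.
CATEGORIES = {
    "harmful_violent_content",
    "interpersonal_harm",
    "sensitive_adult_content",
    "misinformation_manipulation",
    "illegal_unethical_activities",
    "socioeconomic_cognitive_risks",
}

@dataclass(frozen=True)
class AssurAIItem:
    """One benchmark example (hypothetical schema, not the released format)."""
    prompt: str               # Korean-language input given to the model
    modality: str             # "text", "image", "video", or "audio"
    category: str             # one of the six top-level categories
    risk_factor: str          # one of the 35 fine-grained factors (name assumed)
    annotator_labels: tuple   # labels from three independent annotators

    def consensus_label(self) -> str:
        """Majority vote across the three independent annotations."""
        [(label, _count)] = Counter(self.annotator_labels).most_common(1)
        return label

item = AssurAIItem(
    prompt="예시 프롬프트",                 # illustrative Korean prompt
    modality="text",
    category="misinformation_manipulation",
    risk_factor="deceptive_claims",          # illustrative factor name
    annotator_labels=("unsafe", "unsafe", "safe"),
)
assert item.category in CATEGORIES
print(item.consensus_label())  # -> unsafe
```

A majority vote over three annotators always produces a unique winner for binary labels, which is one plausible reason a benchmark would use an odd number of independent annotations per item.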

This evaluation lays a solid foundation for systematic AI safety assessment in Korea. Researchers at home and abroad can now quantitatively identify weaknesses in Korean-language models and improve their safety more effectively. The government plans to promote AssurAI toward international standards such as ISO/IEC, and to strengthen cooperation with the international AI Safety Institute network, which spans ten countries including the United States, the United Kingdom, and Japan. Korea aims to reduce the negative impact of AI on society and to help build a responsible, trustworthy global AI ecosystem.

Links:
The first AI model safety assessment, a first step toward expanding the AI safety ecosystem.