
Latest News

Test Results of the September 2025 Open-Source Models

The Artificial Intelligence Evaluation Center (AIEC) has been established to promote localized AI evaluation and third-party certification in Taiwan, thereby strengthening the development of trusted AI within the industry. The Center will periodically publish benchmark evaluation results for language models. In addition to adopting indicators based on the Chinese Language and Social Studies sections of the national high school entrance examination, AIEC also incorporates evaluation criteria reflecting Taiwanese values, aligning with global trends in AI sovereignty. These benchmarks serve as key references for developing locally adapted models or fine-tuning international models.

Language Model Benchmarks / Small Models (13B and below): please refer to the relevant files below.
Language Model Benchmarks / Large Models (above 13B): please refer to the relevant files below.

Downloads:
Test Results of the September 2025 Open-Source Models