
Background

The Administration for Digital Industries, under the Ministry of Digital Affairs, has partnered with the National Institute of Cyber Security (NICS) and the Industrial Technology Research Institute (ITRI) to establish the Artificial Intelligence Evaluation Center (AIEC), with the development of Taiwan's domestic AI industry as its blueprint.


Promoting AI Evaluation

With the rapid development of the field of artificial intelligence (AI), AI technologies are increasingly being applied to a wide range of products. To foster AI development and ensure the proper application of AI technologies, Taiwan needs an evaluation system for AI products and systems. Accordingly, AIEC will draft "Evaluation Systems for AI Products and Systems" and "Guidelines for Evaluating AI Products and Systems" to provide evaluation services, thereby enhancing the effectiveness and safety of domestic AI products.


Our Goals

AIEC aims to establish a domestic evaluation system for AI products and systems and to provide AI evaluation services. We have surveyed international AI standards and frameworks, including those published by NIST, ISO, and the European Parliament, and plan to establish the following ten evaluation items (a brief measurement sketch follows the list):

  • Safety: AI systems should not, under defined conditions, lead to a state in which human life, health, property, or the environment is endangered.
  • Explainability: A representation of the mechanisms underlying AI systems’ operation and the meaning of AI systems’ output in the context of their designed functional purposes.
  • Resiliency: The ability of AI systems to maintain their functions and structure in the face of internal and external change, and to degrade safely and gracefully when necessary.
  • Fairness: Concern for equality and equity, addressed by managing issues such as harmful bias and discrimination.
  • Accuracy: Closeness of results of observations, computations, or estimates to the true values or the values accepted as being true.
  • Transparency: The extent to which information about an AI system and its outputs is available to individuals interacting with such a system.
  • Accountability: The assurance that those responsible for an AI system can trace and explain its decisions and actions and bear the corresponding responsibilities and consequences.
  • Reliability: Ability of AI systems to perform as required, without failure, for a given time interval, under given conditions.
  • Privacy: The norms and practices that help to safeguard human autonomy, identity, and dignity in AI systems.
  • Security: AI systems should maintain confidentiality, integrity, and availability through protection mechanisms that prevent unauthorized access and misuse.
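
As a purely illustrative sketch, and not part of the AIEC evaluation items themselves, the following Python code shows how two of the items above, accuracy and fairness, might be quantified for a binary classifier. The specific metrics (overall accuracy and a demographic parity difference across a hypothetical sensitive attribute) and the sample data are assumptions chosen for illustration.

```python
# Illustrative sketch only: quantifying two of the evaluation items above
# (accuracy and fairness) for a binary classifier. The metric choices and
# the sample data are assumptions, not AIEC-prescribed methods.
from collections import defaultdict


def accuracy(y_true, y_pred):
    """Closeness of predictions to the labels accepted as true (fraction correct)."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)


def demographic_parity_difference(y_pred, groups):
    """Largest gap in positive-prediction rates between groups; values near 0
    suggest similar treatment of the groups on this particular metric."""
    by_group = defaultdict(list)
    for pred, group in zip(y_pred, groups):
        by_group[group].append(pred)
    rates = [sum(preds) / len(preds) for preds in by_group.values()]
    return max(rates) - min(rates)


if __name__ == "__main__":
    y_true = [1, 0, 1, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
    groups = ["A", "A", "A", "A", "B", "B", "B", "B"]  # hypothetical sensitive attribute
    print("Accuracy:", accuracy(y_true, y_pred))            # 0.75
    print("Demographic parity difference:",
          demographic_parity_difference(y_pred, groups))    # 0.0
```

In practice, an evaluation would cover more of the ten items and use the metrics agreed on in the forthcoming AIEC evaluation systems and guidelines.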


AI Evaluation System

To provide AI product evaluation services, the AIEC will establish a national AI evaluation system. The system consists of the following three entities, whose tasks and responsibilities are listed below (a brief illustrative sketch of the resulting report flow follows the list):

  • AIEC: Develops the AI evaluation system and evaluation methods.
  • AI Certification Body: Executes the AI evaluation system; evaluates the test reports produced by testing laboratories and the relevant documents submitted by applicants, and issues an evaluation report.
  • AI Testing Laboratory: Conducts product testing and issues test reports.
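
The following minimal sketch is an assumption-laden illustration of the report flow implied by these responsibilities: a testing laboratory issues a test report, and a certification body reviews it together with the applicant's documents before issuing an evaluation report. All class names, field names, and thresholds are hypothetical and are not taken from AIEC documents.

```python
# Hypothetical sketch of the report flow among the three entities; all names,
# fields, and thresholds are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class TestReport:
    product: str
    laboratory: str
    results: Dict[str, float]  # e.g. {"accuracy": 0.93}


@dataclass
class EvaluationReport:
    product: str
    certification_body: str
    passed: bool
    notes: List[str] = field(default_factory=list)


def evaluate(test_report: TestReport, applicant_documents: List[str],
             thresholds: Dict[str, float]) -> EvaluationReport:
    """Certification-body step: check the laboratory's results against agreed
    thresholds and confirm supporting documents, then issue an evaluation report."""
    notes, passed = [], True
    for item, threshold in thresholds.items():
        value = test_report.results.get(item)
        if value is None:
            notes.append(f"missing result for {item}")
            passed = False
        elif value < threshold:
            notes.append(f"{item} = {value} is below the threshold {threshold}")
            passed = False
    if not applicant_documents:
        notes.append("no supporting documents submitted")
        passed = False
    return EvaluationReport(test_report.product, "Hypothetical Certification Body",
                            passed, notes)


if __name__ == "__main__":
    report = TestReport("example-ai-product", "Hypothetical Testing Laboratory",
                        {"accuracy": 0.93})
    evaluation = evaluate(report, ["system description", "risk assessment"],
                          thresholds={"accuracy": 0.90})
    print(evaluation.passed, evaluation.notes)  # True []
```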


AIEC Structure

The AIEC comprises two main bodies: the Institutional Promotion Committee, responsible for organizational and strategic planning, and the Technical Advisory Group, which provides expert guidance on technical matters.