The partners of the MISSION KI project (National Initiative for Artificial Intelligence and Data Economy) presented a voluntary quality standard for low-risk AI systems at the AI Quality & Governance Day in Berlin. The initiative, in which the VDE (Association for Electrical, Electronic and Information Technologies) plays a major role, has also launched a portal for the structured quality assessment of such AI systems.

A voluntary standard for more transparency and trust

As artificial intelligence systems become ever more embedded in everyday life, regulatory requirements and public expectations regarding transparency, fairness and reliability are rising. Providers and operators of AI solutions in particular face growing pressure to demonstrate quality early on and to meet the requirements of the EU AI Act systematically.

This is exactly where the new MISSION KI standard comes in: it defines central principles of trustworthy artificial intelligence, such as non-discrimination, traceability and reliability, and translates them into six clearly defined quality dimensions. These form the basis of a structured assessment process that companies can complete on their own.

Expertise from standardization, testing and research bundled

The standard was developed together with partners from standardization, testing, consulting and research. In addition to the VDE, PwC Germany, the TÜV AI Lab, the AI Quality and Testing Hub and the Fraunhofer Institute for Intelligent Analysis and Information Systems were also involved.

In particular, the VDE contributed its experience in product testing and standardization, a contribution that was crucial for translating abstract trustworthiness requirements into practical, implementable measures.

Nora Dörr, project manager at MISSION KI, stresses the importance of the standard especially for smaller companies: start-ups and small and medium-sized enterprises often lack the resources to build comprehensive quality assurance processes for artificial intelligence. The new standard gives them clear guidelines and makes the quality of their systems visible and verifiable.

Digital testing portal offers step-by-step assessment

A digital testing portal was published to accompany the standard. It guides companies through a systematic analysis of their AI system, starting with a protection needs analysis that determines which requirements are particularly relevant in the respective operational context.

The quality requirements are then checked against specific criteria and documented in a test report, which shows both the degree to which each quality dimension is fulfilled and concrete potential for improvement.
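To make this flow more concrete, the following is a minimal sketch of how such an assessment could be modelled in code. It is not the portal's actual implementation: the dimension names, criteria and identifiers (Criterion, DimensionResult, protection_needs_analysis, build_test_report) are assumptions made for illustration, since the article only describes the steps at a high level.

```python
# Conceptual sketch only: the article does not disclose the portal's data model,
# criteria catalogue or the names of the six quality dimensions, so every
# identifier and value below is a hypothetical stand-in.
from dataclasses import dataclass, field

@dataclass
class Criterion:
    name: str
    fulfilled: bool
    improvement: str = ""    # suggested improvement if the criterion is not met

@dataclass
class DimensionResult:
    dimension: str           # one of the six quality dimensions (names assumed)
    criteria: list = field(default_factory=list)

    @property
    def fulfillment(self) -> float:
        """Degree of fulfillment: share of satisfied criteria in this dimension."""
        if not self.criteria:
            return 0.0
        return sum(c.fulfilled for c in self.criteria) / len(self.criteria)

def protection_needs_analysis(context: dict) -> list:
    """Hypothetical first step: keep only the requirements relevant to the use case."""
    return [requirement for requirement, relevant in context.items() if relevant]

def build_test_report(results) -> str:
    """Summarize per-dimension fulfillment and collect concrete improvement potential."""
    lines = []
    for result in results:
        lines.append(f"{result.dimension}: {result.fulfillment:.0%} of criteria fulfilled")
        lines.extend(f"  improve: {c.improvement}"
                     for c in result.criteria if not c.fulfilled and c.improvement)
    return "\n".join(lines)

# Example run with invented values
relevant = protection_needs_analysis({"transparency": True, "robustness": True, "privacy": False})
print("relevant requirements:", relevant)
print(build_test_report([
    DimensionResult("transparency", [Criterion("model card available", True),
                                     Criterion("decision logic explained", False,
                                               "document how outputs are derived")]),
    DimensionResult("robustness", [Criterion("tested on edge cases", True)]),
]))
```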

Added value for providers, investors and public procurement

The new quality standard offers different target groups clear advantages.

  • AI providers receive a structured proof of quality that is suitable for dialogue with customers, investors and regulatory authorities.
  • Start-ups and small and medium-sized companies can systematically document quality measures and demonstrate these in tenders and procurement processes.
  • Public-sector clients gain a more uniform basis for evaluating AI offerings in a comparable and transparent manner.

At the same time, the standard helps companies take the requirements of the EU AI Act into account at an early stage and embed quality assurance processes for the long term. The quality standard and the testing portal are now available to companies that want to make their low-risk AI systems secure, transparent and verifiable.


