Singapore – Aiming to address the security and safety challenges of using large language models, Singapore has announced the launch of a new testing toolkit designed for this purpose: AI Verify Project Moonshot.
Released as an open beta, the initiative aims to provide assessments of a model's or application's quality and safety that even non-technical users can readily understand. The toolkit was created through Singapore's collaboration with DataRobot, IBM, Singtel, and Temasek, ensuring that it is useful and aligned with industry needs.
It also stands as a pioneering open-source tool that integrates red-teaming, benchmarking, and baseline testing in a user-friendly platform, according to Josephine Teo, Singapore's Minister for Communications and Information. This underscores Singapore's commitment to leveraging the global open-source community's potential to address AI risks.
Moreover, the project plays a crucial role in shaping global testing standards through a collaboration between two prominent AI testing organisations, the AI Verify Foundation (AIVF) and MLCommons. The collaboration commenced with the signing of a memorandum of intent to create a common safety benchmark suite.
Meanwhile, the AIVF, which seeks to harness collective expertise for the responsible use of AI, recently celebrated its first anniversary at ATxSG. The foundation has doubled its membership to over 120 organisations, with Amazon Web Services (AWS) and Dell joining as new premier members.
It has also extended its scope from AI testing tooling to developing trust-enhancing AI safety products. These include the Model AI Governance Framework for Generative AI, the mapping of AI Verify to ISO/IEC 42001, and the integration of AI Verify with MAS' Veritas toolkit.