The companies on Tuesday released the “Generative AI Application Security Testing and Validation Standard” and the “Large Language Model Security Testing Method” during a side event at the United Nations Science and Technology Conference in Geneva, Switzerland, according to a statement from the event organiser, the non-profit World Digital Technology Academy (WDTA).
Ant is an affiliate of Alibaba Group Holding, owner of the South China Morning Post.
The new GenAI standard was written by researchers from Nvidia, Facebook owner Meta Platforms and others, and reviewed by companies including Amazon.com, Google, Microsoft, Ant, Baidu and Tencent.
It provides a framework for testing and validating the security of GenAI applications, according to a copy of the document published on the WDTA website.
The LLM guideline, penned by 17 Ant employees and reviewed by Nvidia, Microsoft, Meta and others, outlines a range of attack methodologies for testing an LLM’s resistance to hacks, according to the official copy.
The WDTA, established last April under the UN framework, aims to “expedite the establishment of norms and standards in the digital domain”.
As GenAI develops rapidly and is increasingly adopted by businesses and individual users, tech companies have called for efforts to keep the technology safe. OpenAI chief executive Sam Altman, upon resuming his role in November after a brief ousting, said “investing in full-stack safety efforts” would be one of the company’s priorities.
International standards and regulations on AI predate the recent surge in GenAI’s popularity.
In 2021, Unesco, the UN’s educational, scientific and cultural agency, introduced a “Recommendation on the Ethics of Artificial Intelligence”, which has been adopted by 193 member states.
Between 2022 and 2023, the International Organisation for Standardisation, a Geneva-based non-governmental group that develops standards covering a wide range of areas from workplace safety to IT security, published AI-related guidelines on system management, risk management and systems using machine learning.