Singapore seeks expanded governance framework for generative AI
Share- Nishadil
- January 17, 2024
- 0 Comments
- 3 minutes read
- 35 Views
Singapore has released a draft governance framework on generative artificial intelligence (GenAI) that it says is necessary to address emerging issues, including incident reporting and content provenance. The proposed model builds on the country's existing AI governance framework, which was and last .
GenAI has significant potential to be transformative "above and beyond" what traditional AI can achieve, but it also comes with risks, said the AI Verify Foundation and Infocomm Media Development Authority (IMDA) in a joint statement. There is growing are necessary to create an environment in which GenAI can be used safely and confidently, the Singapore government agencies said.
"The use and impact of AI is not limited to individual countries," they said. "This proposed framework aims to facilitate international conversations among policymakers, industry, and the research community, to enable trusted development globally." The draft document encompasses proposals from a IMDA had released last June, which identified six risks associated with GenAI, including hallucinations, , and embedded , and a framework on how these can be addressed.
The proposed GenAI governance framework also draws insights from previous initiatives, including a and testing conducted via an . The draft GenAI governance model covers nine key areas that Singapore believes play key roles in supporting a trusted AI ecosystem. These revolve around the principles that AI powered decisions should be explainable, transparent, and fair.
The framework also offers practical suggestions that AI model developers and policymakers can apply as initial steps, IMDA and AI Verify said. One of the nine components looks at content provenance: There needs to be transparency around where and how content is generated, so consumers can determine how to treat online content.
Because it can be created so easily, AI generated content such as , the Singapore agencies said. Noting that other governments are looking at technical solutions such as digital watermarking and cryptographic provenance to address the issue, they said these aim to label and provide additional information, and are used to flag content created with or modified by AI.
Policies should be "carefully designed" to facilitate the practical use of these tools in the right context, according to the draft framework. For instance, it may not be feasible for all content created or edited to include these technologies in the near future and provenance information also can be removed.
Threat actors can find other ways to circumvent the tools. The draft framework suggests working with publishers, including social media platforms and media outlets, to support the embedding and display of digital watermarks and other provenance details. These also should be properly and securely implemented to mitigate the risks of circumvention.
Another key component focuses on security where , such as prompt attacks infected through the model architecture. This allows threat actors to exfiltrate sensitive data or model weights, according to the draft framework. It recommends that concepts that are applied to a systems development lifecycle.
These will need to look at, for instance, how the ability to inject natural language as input may create challenges when implementing the appropriate security controls. The probabilistic nature of GenAI also may bring new challenges to traditional evaluation techniques, which are used for system refinement and risk mitigation in the development lifecycle.
The framework calls for the development of new security safeguards, which may include input moderation tools to detect unsafe prompts as well as digital forensics tools for GenAI, used to investigate and analyze digital data to reconstruct a cybersecurity incident. "A careful balance needs to be struck between protecting users and driving innovation," the Singapore government agencies said of the draft government framework.
"There have been various international discussions pulling in the related and pertinent topics of accountability, copyright, and misinformation, among others. These issues are interconnected and need to be viewed in a practical and holistic manner. No single intervention will be a silver bullet." With AI governance still a nascent space, building international consensus also is key, they said, pointing to Singapore's efforts to collaborate with governments such as the .
Singapore is accepting feedback on until March 15..