NIST Releases Draft Guidance on AI Safety and Standards

TL;DR

  • NIST’s draft publications address generative AI risks, including securing training data and making AI-generated content transparent and safe.
  • NIST’s draft global engagement plan frames international cooperation on AI standards as essential to responsible AI deployment.
  • The NIST GenAI initiative will evaluate generative AI capabilities, with a focus on promoting the ethical use of AI in digital content creation.

The U.S. National Institute of Standards and Technology (NIST) has taken a proactive step in addressing the challenges of artificial intelligence (AI) by releasing four draft publications designed to ensure the safety, security, and trustworthiness of AI systems. The drafts are open to public comment until June 2, 2024. The release responds to the October 2023 AI Executive Order, which emphasizes mitigating the risks of AI technologies while promoting responsible innovation and maintaining U.S. technological leadership.

Alleviating generative AI risks

A main area of concern in NIST’s draft publications is the security risks arising from generative AI technologies. The Generative AI Profile, a companion to NIST’s AI Risk Management Framework (AI RMF), identifies 12 risks, ranging from easier access to sensitive information to the propagation of hate speech and malicious content. Addressing these risks has been a key focus for NIST, which lists more than 400 risk management actions that organizations can consider. The framework gives developers a structure to follow and a way to align these actions with their own goals and priorities.

Minimizing training data risks

Another key topic in the drafts is securing the data used to train AI systems. The draft publication on Secure Software Development Practices for Generative AI and Dual-Use Foundation Models, which builds on NIST’s existing secure-development guidance, aims to safeguard the integrity of AI systems amid concerns about malicious training data. NIST recommends practices for securing code and offers remedies for data problems across collection and use, making AI systems more resilient against potential threats.

Encouraging transparency in AI-created content

In response to the rapid growth of synthetic digital content, NIST outlines mitigation measures in its draft document on Reducing Risks Posed by Synthetic Content. Through digital watermarking and metadata recording, NIST aims to make altered media traceable and identifiable, which should help prevent harms such as the distribution of non-consensual intimate images and child sexual abuse material.
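As a simple illustration of the watermarking idea, the sketch below embeds a provenance tag into the least significant bits of pixel values, so the tag can later be recovered to identify machine-generated media. This is a hypothetical toy example, not a scheme specified by NIST; the function names, the tag format, and the LSB approach are all illustrative assumptions.

```python
# Hypothetical sketch of LSB watermarking for provenance tagging.
# Not NIST's specified scheme; names and parameters are illustrative only.

def embed_tag(pixels: list, tag: str) -> list:
    """Embed the bits of an ASCII tag into the LSBs of 8-bit pixel values."""
    bits = [(byte >> i) & 1 for byte in tag.encode("ascii") for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("image too small to hold the tag")
    marked = pixels.copy()
    for idx, bit in enumerate(bits):
        marked[idx] = (marked[idx] & ~1) | bit  # overwrite the LSB only
    return marked

def extract_tag(pixels: list, tag_len: int) -> str:
    """Recover a tag_len-character ASCII tag from the pixel LSBs."""
    chars = []
    for c in range(tag_len):
        byte = sum((pixels[c * 8 + i] & 1) << i for i in range(8))
        chars.append(chr(byte))
    return "".join(chars)

# Example: mark a toy grayscale image, then read the tag back.
image = [120, 121, 119, 200, 201, 199, 50, 51] * 8  # 64 toy 8-bit pixels
marked = embed_tag(image, "AI:gen")
assert extract_tag(marked, 6) == "AI:gen"  # provenance tag is recoverable
```

Because only the least significant bit of each pixel changes, the marked image is visually indistinguishable from the original; real provenance systems combine such embedding with signed metadata so the tag itself cannot be trivially forged.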

Driving global engagement on AI standards

Recognizing that international cooperation is key to establishing AI-related standards, NIST has produced a draft Plan for Global Engagement on AI Standards. The plan aims to foster coordination among international allies, standards-developing organizations, and the private sector to accelerate the development of AI technology standards. By prioritizing content provenance and testing methods, NIST seeks to build a strong regime that ensures the safe and ethically sound operation of AI technologies around the globe.

Initiating NIST GenAI

Moreover, the institute has launched NIST GenAI, a program that assesses and measures the capabilities of generative AI tools. Through NIST GenAI, the U.S. AI Safety Institute at NIST will issue challenge problems and run pilot evaluations aimed at distinguishing AI-generated content from human-produced content. By design, the initiative’s main goal is to promote information integrity and provide guidance on the ethical dimensions of content creation in the AI era.

NIST’s announcement of these draft reports and the launch of NIST GenAI mark an active, development-oriented effort to address the AI-related problems challenging society while keeping innovation secure. By soliciting input from key stakeholders, including companies that develop or deploy AI technologies, NIST lets the community shape its AI safety and standards guidance. Through active involvement in this process, stakeholders can help establish best practices and industry-standard approaches, ultimately leading to a safer and more trustworthy AI ecosystem.

Chris Murithi

Chris is a versatile fintech analyst with a deep understanding of blockchain domains. As much as technology fascinates him, he finds the intersection of both technology and finance mind-blowing. His particular interest in digital wallets and blockchain aids his audience.
