Rezilion Finds Generative AI Projects Present High Security Risk

Rezilion, a software supply chain security platform, released a new report, “Expl[AI]ning the Risk: Exploring the Large Language Models (LLM) Open-Source Security Landscape,” finding that the world’s most popular generative artificial intelligence (AI) projects present a high security risk to organizations.

Generative AI has surged in popularity, empowering us to create, interact with, and consume content like never before. With the remarkable advancements in LLMs, such as GPT (Generative Pre-trained Transformer), machines now possess the ability to generate human-like text, images, and even code.

The number of open-source projects that integrate these technologies is growing exponentially. For example, in the seven months since OpenAI debuted ChatGPT, more than 30,000 open-source projects using the GPT-3.5 family of LLMs have appeared on GitHub.

Despite the demand for these technologies, GPT and LLM projects present various security risks to the organizations using them, including trust boundary risks, data management risks, inherent model risks and general security concerns.

“Generative AI is increasingly everywhere, but it’s immature, and extremely prone to risk,” said Yotam Perkal, director of Vulnerability Research at Rezilion. “On top of their inherent security issues, individuals and organizations provide these AI models with excessive access and authorization without proper security guardrails. Through our research, we aimed to convey that the open-source projects that utilize insecure generative AI and LLMs have poor security postures as well. These factors result in an environment with significant risk for organizations.”

Rezilion’s research team investigated the security posture of the 50 most popular generative AI projects on GitHub. The research uses the Open Source Security Foundation (OpenSSF) Scorecard to objectively evaluate the LLM open-source ecosystem, highlighting the lack of maturity, gaps in basic security best practices, and potential security risks in many LLM-based projects.

Key findings reveal that these new and popular projects are:

  • Extremely popular, with an average of 15,909 stars
  • Extremely immature, with an average age of 3.77 months
  • Very poor in security posture, with an average Scorecard score of 4.60 out of 10, which is low by any standard

For example, the most popular GPT-based project on GitHub, Auto-GPT, has more than 138,000 stars, is less than three months old, and has a Scorecard score of only 3.7.
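
For readers who want to reproduce this kind of lookup, below is a minimal sketch that queries the public OpenSSF Scorecard service for a project’s aggregate and per-check scores. The endpoint (api.securityscorecards.dev) and the JSON field names (“score”, “checks”, “name”) are assumptions based on the publicly documented Scorecard API and should be verified against its current documentation; this is not tooling from Rezilion’s report.

    # Minimal sketch: look up a repo's OpenSSF Scorecard results via the
    # public API. Endpoint and field names are assumptions -- verify against
    # the current Scorecard API docs before relying on them.
    import json
    import urllib.request

    def scorecard_results(repo: str) -> dict:
        """Fetch Scorecard results for a repo given as 'github.com/owner/name'."""
        url = f"https://api.securityscorecards.dev/projects/{repo}"
        with urllib.request.urlopen(url) as resp:
            return json.load(resp)

    if __name__ == "__main__":
        # Auto-GPT is the example project cited in the report.
        result = scorecard_results("github.com/Significant-Gravitas/Auto-GPT")
        print(f"Aggregate score: {result.get('score')} / 10")
        for check in result.get("checks", []):
            print(f"  {check.get('name')}: {check.get('score')}")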

Rezilion recommends the following best practices for the secure deployment and operation of generative AI systems:

  • Educate teams on the risks associated with adopting any new technology
  • Evaluate and monitor security risks related to LLMs and the open-source ecosystem (one automated check is sketched below)
  • Implement robust security practices, conduct thorough risk assessments, and foster a culture of security awareness
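
As one hedged illustration of the “evaluate and monitor” recommendation, the sketch below gates a build on a minimum Scorecard score for each open-source dependency. The threshold of 5.0 is an arbitrary illustrative policy, and the API shape is the same assumption as in the earlier sketch; neither comes from Rezilion’s report.

    # Minimal sketch of a dependency gate: exit non-zero if any dependency's
    # aggregate OpenSSF Scorecard score falls below a chosen threshold.
    import json
    import sys
    import urllib.request

    MIN_SCORE = 5.0  # illustrative policy threshold, not a recommendation from the report

    def passes_policy(repo: str) -> bool:
        """Return True if the repo's aggregate Scorecard score meets the threshold."""
        url = f"https://api.securityscorecards.dev/projects/{repo}"
        with urllib.request.urlopen(url) as resp:
            score = json.load(resp).get("score") or 0.0
        print(f"{repo}: Scorecard score {score}")
        return score >= MIN_SCORE

    if __name__ == "__main__":
        # Usage: python gate.py github.com/owner/name [github.com/owner/name ...]
        results = [passes_policy(r) for r in sys.argv[1:]]
        if not all(results):
            sys.exit(1)  # non-zero exit blocks the CI pipeline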

Organizations already dedicate an alarming amount of time to security, especially when it comes to software.

Rezilion’s automated software supply chain security platform helps customers to manage their software vulnerabilities efficiently and effectively. Maintaining a detailed and current database on the latest software vulnerabilities and the strategies to mitigate them remains paramount to customers’ success in navigating this complex security landscape.

Rezilion provides these same OpenSSF Scorecard insights as part of its product offering, helping customers make more informed decisions about adopting and managing any open-source project.

To download the full report, please visit: https://info.rezilion.com/explaining-the-risk-exploring-the-large-language-models-open-source-security-landscape.

For more information about how Rezilion’s automated software supply chain security platform helps customers manage software vulnerabilities efficiently and effectively, visit www.Rezilion.com.