Legit Security Releases First-Ever AI Discovery Capabilities

Legit Security, an application security posture management (ASPM) platform for secure application delivery, has rolled out new AI discovery capabilities. With this functionality, Legit aims to help bridge the gap between security and development by allowing CISOs and AppSec teams to understand where and when AI-generated code is used, and to take action to ensure proper security controls are in place without slowing software delivery.

“There’s still a huge disconnect between what CISOs and their teams believe to be true and what is actually happening on the ground in development,” said Gary McGraw, the co-founder of the Berryville Institute of Machine Learning (BIML) and author of Software Security. “This belief gap is particularly acute when it comes to understanding how, when and why AI technology is used by developers. In our recent BIML publication ‘An Architectural Risk Analysis of Large Language Models’ we identified 81 LLM risks, including a critical top ten – none of which can be mitigated without a thorough understanding of where AI is used to deliver code.”

Legit’s platform helps security leaders such as CISOs, product security leaders and security architects gain improved visibility into risks across the development pipeline, from infrastructure to the application layer. With a view of the full development lifecycle, customers can ensure deployed code is traceable, secure and compliant. The AI code discovery capabilities bolster the platform by closing a significant visibility gap, allowing security teams to take preventive action, decrease legal exposure and promote compliance.

“AI offers huge potential to enable developers and organizations to deliver and innovate faster, but it is important to understand whether such decisions introduce risk,” said Liav Caspi, the co-founder and CTO of Legit Security. “Our aim is to ensure nothing stops developers from delivering, while giving security teams the confidence that they have visibility into, and control over, the usage of AI and LLMs. We have already helped some of our customers see where and how AI is used, which was new information for the team.”

The AI code discovery capabilities include:

  • Discovery of AI-generated code: Legit provides a full view of the development environment, including code produced by AI coding tools (e.g., GitHub Copilot).
  • Full visibility of the dev environment: By gaining a full view of the application environment, including repositories using LLMs, MLOps services and code generation tools, Legit’s platform offers the context necessary to understand and manage an application’s security posture.
  • Security policy enforcement: Legit Security detects LLM and GenAI development and enforces organizational security policies such as ensuring all AI-generated code gets reviewed by a human.
  • Real-time notifications of GenAI code: Legit can immediately notify security teams when users install AI code generation tools, providing improved transparency and accountability.
  • Protect against releasing vulnerable code: Legit’s platform provides guardrails to prevent the deployment of vulnerable code to production, including that delivered via AI tools.
  • Alert on LLM risks: Legit scans the code of LLM applications for security risks such as prompt injection and insecure output handling.
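To make the discovery idea above concrete, here is a minimal sketch of one way LLM usage in a repository could be detected, using imports of well-known LLM SDKs as a proxy signal. This is an illustrative assumption, not Legit’s actual implementation; the function name and the SDK list are hypothetical.

```python
import re
from pathlib import Path

# Hypothetical markers: imports of well-known LLM SDKs are treated as a
# signal that a repository contains LLM-consuming code (assumption made
# for illustration only).
LLM_IMPORT_PATTERN = re.compile(
    r"^\s*(?:import|from)\s+(openai|anthropic|langchain)\b", re.MULTILINE
)

def find_llm_usage(repo_root: str) -> dict[str, list[str]]:
    """Map each Python file under repo_root to the LLM SDKs it imports."""
    hits: dict[str, list[str]] = {}
    for path in Path(repo_root).rglob("*.py"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        sdks = sorted(set(LLM_IMPORT_PATTERN.findall(text)))
        if sdks:
            hits[str(path)] = sdks
    return hits
```

A real product would go well beyond static import matching (e.g., detecting MLOps services, IDE plugin installs and AI-generated commits), but a repository-wide scan of this shape is one plausible building block for the "full visibility" capability described above.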
