AI Risk Analysis: Discovering Model Dependencies and Managing OSS Exposure

As AI-assisted development accelerates, so do the risks that come with it. Whether models are imported from public registries like Hugging Face or generated and modified in-house, they introduce new supply chain, license, and security risks that traditional AppSec tools often miss.

Hopper’s AI Inventory gives you full visibility into AI-generated code and model dependencies across your codebase. It helps you identify risky behavior, manage compliance, and track OSS exposure introduced by AI tooling.

What Is the AI Inventory?

Hopper scans for AI models embedded in your codebase, whether pickled, downloaded, or custom-trained, and analyzes their metadata, behavior, and usage. It surfaces:

  • Insecure deserialization (e.g., Pickle; see the sketch below)
  • Malicious or unverified models
  • Remote access behavior
  • Sensitive data exposure
  • Licensing conflicts
  • Usage links in source code

This extends OSS risk management into the AI layer of your stack.
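To make the Pickle risk concrete, here is a minimal, self-contained sketch (not Hopper's detection logic) of why a pickled model file can execute arbitrary code the moment it is loaded:

```python
import pickle
import pickletools


# A pickle payload is not just data: unpickling can invoke arbitrary
# callables via __reduce__. This benign stand-in runs a shell command
# when the bytes are loaded -- a real payload could do anything.
class NotJustData:
    def __reduce__(self):
        import os
        return (os.system, ("echo 'code ran during unpickling'",))


payload = pickle.dumps(NotJustData())

# Disassembling the payload shows a GLOBAL/REDUCE pair, i.e. a function
# call baked into the "model" file -- the kind of signal a scanner flags.
pickletools.dis(payload)

# pickle.loads(payload)  # uncommenting this would execute the command above
```

This is why a "Pickle" flag on a model is treated as a deserialization risk rather than a formatting detail: loading the file is equivalent to running code from it.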

Fields in the AI Inventory View

Each model entry includes:

  • Project – Where the model resides
  • Model ID – Unique identifier for tracking
  • Risk – Classification as Malicious, Remote, or Safe
  • Pickle – Flags insecure deserialization
  • Model Source – Public registry, internal source, etc.
  • Download Count – Known pulls or usage volume
  • Sensitive Data – Flags embedded secrets or private data
  • License – Open-source license type
  • Usage – Direct link to usage in your source code

Filtering for AI Model Risk

Use filters to focus on what matters most:

  • Show Crown Jewels Projects only
  • Show Projects with Malicious Models only
  • Show Remote Projects only
  • Model Source
  • Integration
  • Download Count (more than, less than)
  • Tags, including custom and crown jewel tags
  • Tag search by keyword

These filters help security and compliance teams quickly identify high-risk models.

Exporting AI Inventory Data

Filtered AI Inventory results can also be exported as CSV or JSON. The exported data supports compliance reviews, engineering audits, and automated tooling.
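As a sketch of the automated-tooling use case, the snippet below reads a JSON export and flags entries that a CI or compliance gate might block. The file name, field names (model_id, risk, pickle, license), and the license denylist are illustrative assumptions based on the columns listed above, not Hopper's documented export schema:

```python
import json

# Example policy only: licenses your organization might choose to block.
DENYLISTED_LICENSES = {"AGPL-3.0", "SSPL-1.0"}


def flag_risky_models(export_path: str) -> list[dict]:
    """Return exported AI Inventory entries that warrant review or blocking."""
    with open(export_path) as f:
        models = json.load(f)  # assumed: a JSON array of model entries

    flagged = []
    for model in models:
        reasons = []
        if model.get("risk") == "Malicious":
            reasons.append("malicious model")
        if model.get("pickle"):
            reasons.append("insecure Pickle serialization")
        if model.get("license") in DENYLISTED_LICENSES:
            reasons.append(f"denylisted license {model['license']}")
        if reasons:
            flagged.append({"model_id": model.get("model_id"), "reasons": reasons})
    return flagged


if __name__ == "__main__":
    # Hypothetical export file name; adjust to wherever you save the export.
    for entry in flag_risky_models("ai_inventory_export.json"):
        print(entry["model_id"], "->", ", ".join(entry["reasons"]))
```

A script like this can run in CI so that newly introduced models with malicious classifications, Pickle serialization, or disallowed licenses fail the build before they reach production.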

Why It Matters

AI-generated components introduce unreviewed logic, unexpected behavior, and licensing ambiguity. Hopper brings model visibility into the security workflow so you can:

  • Track model provenance and risk
  • Enforce policy on AI usage and license type
  • Detect insecure patterns like Pickle deserialization
  • Prioritize exposure based on real application context