OWASP LLM Top 10 Coverage

Security scanning built for the AI era

Oculum detects AI-specific vulnerabilities like package hallucination, prompt injection, and model supply chain attacks — issues traditional scanners miss entirely.

No credit card required · 100 free scans/month

$ oculum scan .

Scanning 247 files...

[CRITICAL] Hallucinated package 'flask-caching-utils'

requirements.txt:12 — Package does not exist on PyPI

[HIGH] User input in system prompt

api/chat.py:34 — Enables prompt injection attacks

[MEDIUM] API key in LLM context

services/openai.ts:18 — sk-proj-... exposed to model

Found 1 critical, 1 high, 1 medium issue

Scan completed in 3.2s

43 vulnerability categories
161+ hallucinated packages
93.9% benchmark accuracy
40+ secret patterns

AI is writing your code. Who's checking for security?

LLM-generated code introduces new vulnerability classes that traditional security tools weren't designed to catch.

AI suggests packages that don't exist

Attackers register hallucinated package names with malware. Your dependency install becomes a supply chain attack.

Prompts leak secrets to language models

API keys interpolated into prompts end up in LLM context, logs, and potentially training data.
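
For instance, a pattern like the following (hypothetical code, for illustration) is what the "API key in LLM context" class of finding describes:

# Vulnerable: the secret is interpolated into the prompt, so it is sent
# to the model provider and can surface in logs or retained context.
import os

api_key = os.environ["BILLING_API_KEY"]  # hypothetical variable name
prompt = f"Use key {api_key} to query the billing API and summarize usage."

# Safer: make the authenticated call in your own code, then pass only
# non-sensitive results into the prompt.
import requests

usage = requests.get(
    "https://billing.example.com/usage",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {api_key}"},
).json()
prompt = f"Summarize this usage report: {usage}"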

LLM output runs without validation

Prompt injection leads to code execution when AI output flows into eval(), SQL queries, or shell commands.
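
A minimal illustration (hypothetical code) of the unsafe flow and a safer alternative:

# Vulnerable: model output is executed as SQL, so a prompt injection in
# the user's message can become arbitrary SQL against your database.
import sqlite3

def run_model_sql(model_reply: str) -> None:
    conn = sqlite3.connect("app.db")
    conn.execute(model_reply)  # untrusted model text executed directly

# Safer: treat model output as untrusted input. Constrain it to an
# allowlist and parameterize everything user-derived.
ALLOWED_COLUMNS = {"name", "email", "created_at"}

def run_constrained_query(user_id: int, column: str) -> list:
    if column not in ALLOWED_COLUMNS:  # reject anything unexpected
        raise ValueError(f"unexpected column: {column}")
    conn = sqlite3.connect("app.db")
    query = f"SELECT {column} FROM users WHERE id = ?"
    return conn.execute(query, (user_id,)).fetchall()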

Traditional scanners miss AI patterns

SAST tools built before the AI coding era don't understand prompt context, model APIs, or hallucination risks.

Detection Categories

What Oculum Detects

Purpose-built detectors for AI-specific vulnerabilities across 7 categories aligned with the OWASP LLM Top 10.

Package Hallucination

161+ packages

AI models suggest packages that don't exist. Attackers register these names with malicious code.

What we catch:

  • Database of 161+ known hallucinated package names
  • Detects typosquatting variations
  • Cross-references PyPI, npm, and other registries (sketched below)
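
The heart of the registry check is small. This sketch uses PyPI's public JSON API, which returns 404 for names that were never registered; it illustrates the idea, not our actual detector, and the requirements parsing is deliberately naive:

# Sketch: flag requirements.txt entries that do not resolve on PyPI.
import requests

def exists_on_pypi(package: str) -> bool:
    resp = requests.get(f"https://pypi.org/pypi/{package}/json", timeout=10)
    return resp.status_code == 200

with open("requirements.txt") as reqs:
    for raw in reqs:
        name = raw.split("==")[0].split(">=")[0].strip()
        if name and not name.startswith("#") and not exists_on_pypi(name):
            print(f"[CRITICAL] Potentially hallucinated package '{name}'")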

Example finding:

[CRITICAL]

Potentially hallucinated package 'flask-caching-utils'

This package name matches known AI hallucination patterns. No package with this name exists on PyPI.

requirements.txt:12

CWE: CWE-829

Getting Started

Three steps to secure your AI code

Start scanning in under a minute with the CLI. No configuration required.

1. Install

One command to get started:

npm install -g @oculum/cli

2. Scan

Run against your codebase:

oculum scan .

3. Fix

Review and resolve findings:

oculum fix 1

Scan Modes

Choose the right balance of speed and depth

Quick
Pattern matching only, instant feedback
~1,000 files/sec · Local development

Validated
AI validation reduces false positives
~100 files/sec · CI/CD pipelines

Deep
Full semantic analysis of high-risk code
~10 files/sec · Security audits

Integrations

Works where you do

Integrate Oculum into your existing workflow. CLI for local dev, GitHub Actions for CI, VS Code for real-time feedback.

CLI

Scan from your terminal during development

  • Instant local scanning
  • Multiple output formats
  • Baseline management
# Install globally
npm i -g @oculum/cli

# Authenticate
oculum auth login

# Scan your project
oculum scan .

GitHub Actions

Automated scanning on every PR

  • PR comments with findings
  • SARIF output
  • Incremental scanning
- uses: oculum-dev/scan-action@v1
  with:
    api-key: ${{ secrets.OCULUM_API_KEY }}
    mode: validated

VS Code

Real-time scanning as you code

  • Inline diagnostics
  • Quick fixes
  • Scan on save
# Install from marketplace
Search: "Oculum Security"

# Or via CLI
code --install-extension \
  oculum.oculum

Standards & Accuracy

Built on security standards

Oculum follows industry-standard vulnerability classifications and provides actionable, verified findings.

OWASP LLM Top 10

Comprehensive coverage of the OWASP Top 10 risks for LLM applications, from prompt injection to model supply chain attacks.

CWE Mapped Findings

Every finding links to relevant CWE identifiers for standardized vulnerability classification and remediation guidance.

SARIF Output

Export findings in SARIF format for integration with GitHub Security, Azure DevOps, and other security platforms.

43 Vulnerability Categories
Across 7 detection domains

18 Detection Rules
Pattern + semantic analysis

93.9% Benchmark Accuracy
On a test corpus of 500+ files

<5% False Positive Rate
With AI validation enabled

Pricing

Simple, credit-based pricing

Credits are consumed during AI-validated scans. Quick scans are always free.

1 credit ≈ 100K tokens ≈ ~50 files or ~5,000 lines of code
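
For example, a validated scan of a 500-file project uses roughly 10 credits, so Starter's 100 monthly credits cover about ten full scans at that size; incremental diff scans of only changed files cost proportionally less.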

Free

Try it out

$0/mo

5 credits/month

  • Quick scans (unlimited)
  • Validated scans
  • CLI access
  • Community support

Starter

~250 PR scans/month

$9/mo

100 credits/month

  • Everything in Free
  • Private repositories
  • Diff/incremental scanning
  • Email support

Pro (Popular)

For professionals

$19/mo

250 credits/month

  • Everything in Starter
  • GitHub Action + SARIF
  • Priority support (24h)
  • Usage analytics

Max

For power users

$60/mo

1,000 credits/month

  • Everything in Pro
  • Team management
  • Multiple API keys
  • Dedicated support (4h)

Need more? Contact us for custom Max plans.

Early Access

Join the Beta Waitlist

Get early access to Oculum and help shape the future of AI code security. Limited spots available.

No spam, just updates on launch and early access invites

FAQ

Common questions

How is this different from Snyk, Semgrep, or other SAST tools?

Traditional SAST tools were built before AI-assisted coding became widespread. Oculum specifically targets AI-introduced vulnerabilities: hallucinated packages, prompt injection, secrets in LLM context, and insecure output handling. We also use AI validation to reduce false positives by understanding code context, not just pattern matching.

What languages and frameworks are supported?

Currently: JavaScript, TypeScript, Python, and common configuration files (Docker, CI/CD, .env). We support popular LLM frameworks including LangChain, LlamaIndex, OpenAI SDK, Anthropic SDK, and direct API usage. More languages and framework-specific detectors are added regularly.

How accurate are the findings?

Our benchmark test corpus shows 93.9% accuracy. With AI validation enabled, false positive rate drops below 5%. Quick scans (pattern-only) may have more noise but are instant. Validated scans use Claude to understand context and filter out obvious false positives like test files, environment variable placeholders, and static content.

Is my code sent to your servers or to AI models?

Quick scans run entirely locally — no code leaves your machine. For validated scans, only files with potential findings are sent to AI for verification, and we use Claude with zero data retention. We never store your source code; only metadata about findings is persisted for your dashboard.

Can I use this in CI/CD pipelines?

Yes. Our GitHub Action scans PRs automatically and posts findings as comments. It supports incremental mode (only scan changed files), SARIF output for GitHub Security tab integration, and configurable failure thresholds. The CLI also works in any CI environment — just install and run.

Start finding AI vulnerabilities today

Install the CLI and run your first scan. No account required for quick scans.

$ npx oculum scan .
Get Started Free

No credit card required