AI agent governance, security tooling, and mechanical enforcement.

rigscore v0.5.0


Changes: v0.4.1 → v0.5.0

One addition. Check count goes from 10 to 11.

Feature | What it does
Network exposure | Advisory check — detects AI services bound to 0.0.0.0 instead of 127.0.0.1 across four layers

The problem

AI services default to binding on all interfaces more often than they should. Ollama ships with OLLAMA_HOST=0.0.0.0 in many setup guides. LM Studio and Open WebUI listen on 0.0.0.0 by default. MCP SSE servers frequently bind to all interfaces for convenience during development — and then stay that way.
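
If Ollama runs as a systemd service, a loopback-only bind can be pinned in a drop-in override (the unit name and path follow Ollama's documented Linux setup; verify against your install before applying):

```ini
# /etc/systemd/system/ollama.service.d/override.conf
# Create via `systemctl edit ollama.service`, then restart the service.
# Keeps the API loopback-only instead of the 0.0.0.0 value many guides suggest.
[Service]
Environment="OLLAMA_HOST=127.0.0.1:11434"
```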

On a laptop behind NAT, this is low risk. On a cloud VM, a shared network, or a Docker host with published ports, it means your local AI inference server is accessible to anyone on the network. Model weights, API tokens passed in headers, and prompt content become exposed.
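
On Docker hosts, the difference is one prefix in the port mapping. A minimal compose sketch (the service and image names are illustrative):

```yaml
# docker-compose.yml fragment -- publishing with an explicit loopback bind.
services:
  ollama:
    image: ollama/ollama
    ports:
      # "11434:11434" alone would publish on 0.0.0.0 (all interfaces);
      # the 127.0.0.1: prefix restricts it to the local machine.
      - "127.0.0.1:11434:11434"
```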


What it scans

The check examines four layers:

  • MCP config URLs — flags SSE and streamable-http endpoints targeting non-loopback hosts. This is the highest severity — an MCP server exposed to the network is a direct attack surface for agent hijacking.
  • Docker port bindings — flags AI service ports (Ollama, LM Studio, Open WebUI, vLLM, LocalAI, FastChat, LiteLLM) mapped without an explicit 127.0.0.1: bind address.
  • Ollama config — checks systemd overrides and .ollama/.env for OLLAMA_HOST=0.0.0.0.
  • Live listeners — runs ss or lsof to detect known AI ports currently listening on all interfaces.
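
The live-listener layer can be approximated by hand with ss. A minimal sketch, not rigscore's exact logic; the port list and the loopback test below are assumptions based on common defaults:

```shell
#!/bin/sh
# Flag known AI service ports listening on a non-loopback address.
# Ports are assumed defaults: Ollama 11434, LM Studio 1234,
# Open WebUI 8080, vLLM 8000.
AI_PORTS="11434 1234 8080 8000"

# is_loopback ADDR -> exit 0 if the bind address is loopback-only
is_loopback() {
  case "$1" in
    127.*|"[::1]"|::1|localhost) return 0 ;;
    *) return 1 ;;
  esac
}

for port in $AI_PORTS; do
  # ss output rows look like: LISTEN 0 4096 0.0.0.0:11434 0.0.0.0:*
  ss -tln 2>/dev/null | awk -v p=":$port" '$4 ~ p"$" {print $4}' |
  while read -r local; do
    addr="${local%:*}"   # strip the trailing :port
    if ! is_loopback "$addr"; then
      echo "WARN: port $port listening on $addr (all interfaces?)"
    fi
  done
done
```

A wildcard bind shows up in ss as 0.0.0.0:PORT (or [::]:PORT for IPv6), so anything that is not 127.x, ::1, or localhost gets flagged.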

Advisory, not scored

Network exposure ships as advisory — weight 0, no score impact. The detection covers a broad surface area, and we want community feedback on false-positive rates before it affects scores.

If you have services intentionally bound to all interfaces (e.g., behind a reverse proxy with auth), use config.network.safeHosts to allowlist them.
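
Assuming rigscore reads JSON configuration, the key path above suggests a shape like the following; the file layout and the hostnames shown are assumptions, not documented values:

```json
{
  "network": {
    "safeHosts": ["gpu-box.internal", "10.0.0.12"]
  }
}
```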


What’s next

The backfill releases (v0.1.0 through v0.3.0) are now on the GitHub releases page, documenting the full development history from initial release through coherence checking, SARIF output, and moat-heavy scoring.


Install

npx rigscore

No accounts, no telemetry, no network calls. MIT licensed.

github.com/Back-Road-Creative/rigscore

Configuration details reflect a production environment at time of writing. Implementation specifics vary based on tooling versions, platform updates, and organizational requirements. Validate approaches against current documentation before deployment.