ZERO TRUST/OFFLINE BY DESIGN/NO OUTBOUND NETWORK
A Fisher Sovereign Publication

The LLM Enclave

A hardened zero-trust environment for running a local coding LLM. Inference happens entirely offline through Ollama. Every file the model can touch is gated by policy, scanned for secrets, and recorded in a hash-chain audit trail.

Built by Fisher Sovereign Systems
Codename: QWEN35
Model: qwen2.5-coder
Inference: Local Ollama
Network: None
Audit: Hash-Chain
Stack
source: LLM_Enclave/CLAUDE.md

A small, intentional stack. Python for the chat loop, PowerShell for launch and teardown, and Ollama for completely local inference. No cloud APIs, no telemetry, no background uploaders.

Architecture and Trust Boundary
source: QWEN35/cli/qwen_chat.py, policies/bridge_policy.yaml

The model never touches the file system directly. All disk access flows through the workspace bridge, which checks the policy file on every operation. The chat loop, the model, and the policy enforcer are three separate trust zones inside one process.
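A minimal sketch of that choke point, assuming hypothetical names for the exception and the policy-check methods (the real bridge lives in cli/qwen_chat.py and may be structured differently):

```python
from pathlib import Path

class PolicyViolation(Exception):
    """Raised by any failed check; the model only ever sees the refusal."""

def bridge_read(path_str: str, policy) -> str:
    # Single choke point: every check must pass before bytes reach the model.
    path = Path(path_str).resolve()
    for check in (policy.check_root, policy.check_extension,
                  policy.check_deny_patterns):
        check(path)                        # each raises PolicyViolation on failure
    text = path.read_text(encoding="utf-8")
    return policy.scan_secrets(text)       # block, redact, or warn per config
```

The ordering matters: path checks run before the file is opened, and the secret scan runs before anything is returned.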

Allowed Roots
source: policies/bridge_policy.yaml, allowed_target_roots

The enclave will only ever read or write under explicitly listed root paths. Any path outside this list is rejected before the request reaches the file system. The default configuration scopes the entire enclave to one directory.
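The path check can be sketched in a few lines; the root value below is illustrative, and `Path.is_relative_to` assumes Python 3.9+:

```python
from pathlib import Path

# Illustrative root; the real list is allowed_target_roots in bridge_policy.yaml.
ALLOWED_ROOTS = [Path(r"D:\ProjectsHome\LLM_Enclave\QWEN35")]

def path_is_allowed(raw: str) -> bool:
    """Resolve first, then compare: '..' segments and symlink hops are
    neutralised before the root comparison runs."""
    resolved = Path(raw).resolve()
    return any(resolved.is_relative_to(root) for root in ALLOWED_ROOTS)
```

Resolving before comparing is the important detail: a string-prefix check on the raw path would wave through `QWEN35\..\..\secrets`.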

Permitted
Absolute paths under these roots are accepted.
Rejected
Anything outside the allowed roots fails the path check.

Allowed Extensions
source: bridge_policy.yaml

An explicit allow list of file extensions. Anything without one of these extensions is refused, even if its path is allowed. Source code, configs, and documentation pass. Binaries, archives, and executables do not.
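The check itself is a one-liner; the extension set below is an illustrative stand-in for the real policy values:

```python
from pathlib import Path

# Illustrative allow list; the real set lives in bridge_policy.yaml.
ALLOWED_EXTENSIONS = {".py", ".md", ".yaml", ".yml", ".json", ".txt", ".ps1"}

def extension_is_allowed(path: str) -> bool:
    """Allow-list, not deny-list: an unknown or missing extension fails closed."""
    return Path(path).suffix.lower() in ALLOWED_EXTENSIONS
```

Note that a file with no extension at all has an empty suffix and is therefore refused, which is the fail-closed behaviour the policy wants.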

Deny Patterns
source: bridge_policy.yaml, deny_patterns, regex matched on filename

A second layer of defense. Even if a file passes the path and extension checks, it can still be blocked by regex patterns matched against its name. These patterns target categories of files that are sensitive by definition.
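A sketch of that second layer, with illustrative patterns rather than the real deny_patterns entries:

```python
import re

# Illustrative patterns; the real list is deny_patterns in bridge_policy.yaml.
DENY_PATTERNS = [re.compile(p, re.IGNORECASE) for p in (
    r"^\.env(\..*)?$",         # .env, .env.local, ...
    r".*\.pem$",               # private key material
    r"^id_(rsa|ed25519).*$",   # SSH keys
)]

def filename_is_denied(name: str) -> bool:
    """Runs after the root and extension checks; matches on the bare filename."""
    return any(p.match(name) for p in DENY_PATTERNS)
```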

Secret Scanning
source: bridge_policy.yaml, secret_scanning

File contents are scanned for embedded secrets before they ever reach the model. The default mode is block, meaning a file containing a detected secret is refused entirely rather than silently redacted. Redact mode is supported but opt-in.

Configuration
Live values from the active policy file.
• enabled: true
• mode: block
• built-in patterns: 10 redaction rules
• custom_patterns: empty
• ignore_files: empty (no false-positive overrides)

Modes
How the scanner can respond to a hit.
• block: Refuse the file entirely. The model never sees it. (default)
• redact: Replace each match with a labelled placeholder, then pass.
• warn: Allow the file through, log the violation for review.

Redaction Pipeline
source: bridge_policy.yaml, redaction_rules, 10 rules

When the scanner runs in redact mode, these are the patterns it looks for. Each match is replaced in place with a labelled token so the original secret never reaches the model context. The patterns are real values from the policy file.
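The three modes can be sketched as a single scan function. The two rules below are illustrative stand-ins, not the 10 real redaction_rules:

```python
import re

# Illustrative rules; the policy ships 10 redaction_rules.
SECRET_RULES = [
    ("AWS_ACCESS_KEY", re.compile(r"AKIA[0-9A-Z]{16}")),
    ("GENERIC_API_KEY", re.compile(r"api[_-]?key\s*[:=]\s*\S+", re.IGNORECASE)),
]

class SecretFound(Exception):
    pass

def scan(text: str, mode: str = "block") -> str:
    for label, pattern in SECRET_RULES:
        if not pattern.search(text):
            continue
        if mode == "block":        # default: the model never sees the file
            raise SecretFound(label)
        if mode == "redact":       # opt-in: replace with a labelled token
            text = pattern.sub(f"[REDACTED:{label}]", text)
        # mode == "warn": pass through; a real bridge would log the hit here
    return text
```

In block mode the function raises before returning anything, which is what makes block strictly safer than redact: there is no partially-sanitised output to get wrong.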

Tool Surface
source: cli/qwen_chat.py, exposed via Ollama tool calling

The model is given exactly four tools. No shell, no network, no process spawning, no environment access. Every tool routes through the workspace bridge and is subject to the full policy chain before it executes.
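A closed dispatch table is one way to enforce that surface. The tool names and stub handlers below are hypothetical; the source only states that exactly four tools exist and that all of them route through the bridge:

```python
# Hypothetical tool names and stand-in handlers: the real four tools are
# defined in cli/qwen_chat.py and route through the workspace bridge.
def read_file(path): return f"read:{path}"
def write_file(path, content): return f"wrote:{path}"
def list_dir(path): return f"listed:{path}"
def search_files(pattern): return f"searched:{pattern}"

BRIDGE_TOOLS = {f.__name__: f for f in (read_file, write_file,
                                        list_dir, search_files)}

def dispatch(tool_call: dict) -> str:
    """Closed world: a tool name outside the table fails, so the model cannot
    reach a shell, a socket, or the environment by asking for one."""
    handler = BRIDGE_TOOLS.get(tool_call["name"])
    if handler is None:
        raise PermissionError(f"unknown tool: {tool_call['name']}")
    return handler(**tool_call["arguments"])
```

The point of the table is that the allow list is the dispatch mechanism itself: there is no generic "run this" path for a hallucinated tool name to fall through to.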

Logging Policy
source: bridge_policy.yaml, logging.safe_to_log, logging.never_log

Logs are useful for debugging and audit, but they are also a leak surface. The policy explicitly enumerates what may be written to disk and what may not. Source code, model output, and prompt content are never logged. Metadata is.

Safe To Log
Metadata only. No content ever appears in these.
Never Log
Categorically excluded by policy.
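The two lists above suggest a simple scrub step before any event is written. The field names here are illustrative, not the real safe_to_log entries:

```python
# Illustrative fields; the real lists are logging.safe_to_log and
# logging.never_log in bridge_policy.yaml.
SAFE_TO_LOG = {"timestamp", "operation", "path", "size_bytes", "result"}
NEVER_LOG = {"content", "prompt", "model_output"}

def scrub(event: dict) -> dict:
    """Allow-list wins: a field must be explicitly safe, not merely
    un-forbidden, before it can reach disk."""
    return {k: v for k, v in event.items()
            if k in SAFE_TO_LOG and k not in NEVER_LOG}
```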
Hash-Chain Audit Trail
source: QWEN35/runtime/config/, session logs

Every approved operation is appended to a session log whose entries form a hash chain. Each entry includes the SHA of the previous entry, so any tampering with historical events invalidates every entry that came after it. The chain is the source of truth for what the model actually did.
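The chaining scheme can be sketched as follows; the exact entry layout is an assumption, but the tamper-evidence property is the same:

```python
import hashlib
import json

def append_entry(chain: list, event: dict) -> None:
    """Each entry commits to the previous entry's hash, so editing any
    historical event invalidates everything appended after it."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev, "event": event}, sort_keys=True)
    chain.append({"prev": prev, "event": event,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(chain: list) -> bool:
    """Recompute every hash from the genesis value; any edit breaks the walk."""
    prev = "0" * 64
    for entry in chain:
        body = json.dumps({"prev": prev, "event": entry["event"]}, sort_keys=True)
        if entry["prev"] != prev:
            return False
        if entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True
```

Verification walks the chain forward from a fixed genesis value, which is why a single edited event falsifies not just its own entry but every later one.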

Operational Limits
source: bridge_policy.yaml, size and batch limits

Hard caps that bound any single bridge operation. These exist to prevent the model from accidentally pulling a large binary into context, and to keep batch operations from exceeding the local memory budget.
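Enforcing such caps is a pre-flight check before any bytes are read. The numbers below are illustrative placeholders, not the real policy values:

```python
# Illustrative caps; the real numbers live in bridge_policy.yaml.
MAX_FILE_BYTES = 512 * 1024    # ceiling for any single file read
MAX_BATCH_FILES = 20           # ceiling for files per batch operation

def check_limits(file_sizes: list) -> None:
    """Reject oversized work before it starts, using sizes from stat()
    rather than from reading the files."""
    if len(file_sizes) > MAX_BATCH_FILES:
        raise ValueError(f"batch of {len(file_sizes)} exceeds {MAX_BATCH_FILES}")
    for size in file_sizes:
        if size > MAX_FILE_BYTES:
            raise ValueError(f"file of {size} bytes exceeds {MAX_FILE_BYTES}")
```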

Repository Layout
source: D:\ProjectsHome\LLM_Enclave\QWEN35\

A small, deliberate file tree. Three top-level PowerShell scripts handle launch, setup, and shutdown. The chat CLI lives under cli/, the policy file lives under policies/, and the workspace bridge owns inbox/outbox/scratch directories that the model can address.

What This Is Not
design intent

An honest list of things the LLM Enclave deliberately does not do. The constraints are the point. A coding assistant that can do anything is a coding assistant that can leak anything.

No Cloud Inference
All model calls hit local Ollama at 127.0.0.1:11434. Pulling the model is the only network operation, and it happens during setup, not at runtime.
No Shell Access
The model cannot execute commands, spawn processes, or shell out. The four bridge tools are the entire surface area.
No Network Tools
No fetch, no curl, no socket primitives. The model has no concept of the internet at runtime.
No Implicit Trust
Every file access is checked against the policy at request time. Approval is per-call, never cached, never inferred from prior context.