A hardened zero-trust environment for running a local coding LLM. Inference happens entirely offline through Ollama. Every file the model can touch is gated by policy, scanned for secrets, and recorded in a hash-chain audit trail.
LLM_Enclave/CLAUDE.md
A small, intentional stack. Python for the chat loop, PowerShell for launch and teardown, and Ollama for completely local inference. No cloud APIs, no telemetry, no background uploaders.
QWEN35/cli/qwen_chat.py, policies/bridge_policy.yaml
The model never touches the file system directly. All disk access flows through the workspace bridge, which checks the policy file on every operation. The chat loop, the model, and the policy enforcer are three separate trust zones inside one process.
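The mediation pattern can be sketched as a decorator that forces every tool handler through the policy check before any file system call. This is an illustrative sketch, not the project's actual code; `check_policy` and `bridged` are hypothetical names standing in for the real enforcer.

```python
# Sketch of the bridge mediation pattern: the policy enforcer runs
# before the handler ever touches disk. check_policy is a hypothetical
# stand-in for the real checks (roots, extensions, deny patterns, secrets).
def check_policy(path: str, op: str) -> None:
    """Raise on any policy violation; silently return on approval."""
    ...  # path, extension, deny-pattern, and secret checks go here

def bridged(op: str):
    def wrap(handler):
        def run(path, *args, **kwargs):
            check_policy(path, op)                  # policy enforcer zone
            return handler(path, *args, **kwargs)   # file system access
        return run
    return wrap

@bridged("read")
def read_file(path: str) -> str:
    with open(path, encoding="utf-8") as f:
        return f.read()
```

The point of the decorator is that no handler can be registered as a tool without passing through the gate; the model-facing surface and the enforcement logic stay in separate functions.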
policies/bridge_policy.yaml, allowed_target_roots
The enclave will only ever read or write under explicitly listed root paths. Any path outside this list is rejected before the request reaches the file system. The default configuration scopes the entire enclave to one directory.
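A root check like this is typically a few lines of `pathlib`. The sketch below assumes a single hypothetical root; the real values come from `allowed_target_roots` in the policy file. `Path.is_relative_to` requires Python 3.9+.

```python
from pathlib import Path

# Hypothetical root; the real list lives in policies/bridge_policy.yaml
# under allowed_target_roots.
ALLOWED_TARGET_ROOTS = [Path("/enclave/workspace").resolve()]

def is_path_allowed(candidate: str) -> bool:
    """Reject any path that does not resolve under an allowed root.

    resolve() collapses '..' segments and symlinks, so traversal
    tricks like 'workspace/../../etc/passwd' are caught here.
    """
    resolved = Path(candidate).resolve()
    return any(resolved.is_relative_to(root) for root in ALLOWED_TARGET_ROOTS)
```

Resolving before comparing is the important design choice: a naive string-prefix check passes `../` traversal and symlink escapes that `resolve()` normalizes away.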
bridge_policy.yaml
An explicit allow list of file extensions. Anything without one of these extensions is refused, even if its path is allowed. Source code, configs, and documentation pass. Binaries, archives, and executables do not.
bridge_policy.yaml, deny_patterns, regex matched on filename
A second layer of defense. Even if a file passes the path and extension checks, it can still be blocked by regex patterns matched against its name. These patterns target categories of files that are sensitive by definition.
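The extension allow list and the deny patterns compose into a two-layer name check. The values below are illustrative placeholders; the real lists live in `bridge_policy.yaml`.

```python
import re
from pathlib import PurePosixPath

# Hypothetical values standing in for the policy file's lists.
ALLOWED_EXTENSIONS = {".py", ".md", ".yaml", ".json", ".txt"}
DENY_PATTERNS = [
    re.compile(r"^\.env", re.IGNORECASE),       # dotenv files
    re.compile(r"id_rsa|id_ed25519"),           # SSH private keys
    re.compile(r"credential|secret", re.IGNORECASE),
]

def passes_name_checks(filename: str) -> bool:
    """Layer 1: extension allow list. Layer 2: deny regexes on the name."""
    name = PurePosixPath(filename).name
    ext = PurePosixPath(filename).suffix.lower()
    if ext not in ALLOWED_EXTENSIONS:
        return False
    return not any(p.search(name) for p in DENY_PATTERNS)
```

Note why both layers matter: a file named `.env.md` carries an allowed `.md` extension, but the deny pattern on its name still blocks it.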
bridge_policy.yaml, secret_scanning
File contents are scanned for embedded secrets before they ever reach the model. The default mode is block, meaning a file containing a detected secret is refused entirely rather than silently redacted. Redact mode is supported but opt-in.
bridge_policy.yaml, redaction_rules, 10 rules
When the scanner runs in redact mode, these are the patterns it looks for. Each match is replaced in-place with a labelled token so the original secret never reaches the model context. The patterns are real values from the policy file.
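Both scanner modes can be sketched in one function. The three patterns below are common illustrative examples, not the ten real rules from the policy file.

```python
import re

# Hypothetical rules; the 10 real patterns live under
# redaction_rules in bridge_policy.yaml.
REDACTION_RULES = [
    ("AWS_ACCESS_KEY", re.compile(r"AKIA[0-9A-Z]{16}")),
    ("PRIVATE_KEY", re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----")),
    ("BEARER_TOKEN", re.compile(r"Bearer\s+[A-Za-z0-9._-]{20,}")),
]

class SecretFound(Exception):
    """Raised in block mode when any rule matches."""

def scan(text: str, mode: str = "block") -> str:
    """Block mode refuses the whole file; redact mode swaps each
    match for a labelled token before text reaches the model."""
    for label, pattern in REDACTION_RULES:
        if mode == "block":
            if pattern.search(text):
                raise SecretFound(label)
        else:  # redact
            text = pattern.sub(f"[REDACTED:{label}]", text)
    return text
```

Making block the default means a new secret type that slips past the redaction labels still fails loudly instead of leaking quietly.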
cli/qwen_chat.py, exposed via Ollama tool calling
The model is given exactly four tools. No shell, no network, no process spawning, no environment access. Every tool routes through the workspace bridge and is subject to the full policy chain before it executes.
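Tool schemas in the function-calling shape Ollama accepts look roughly like the sketch below. The four tool names and their parameters are hypothetical placeholders; the actual definitions live in `cli/qwen_chat.py`.

```python
# Illustrative JSON-schema tool definitions in the shape Ollama's
# tool calling accepts. The names here are assumptions, not the
# project's real tool surface.
def _tool(name: str, description: str, properties: dict, required: list) -> dict:
    return {
        "type": "function",
        "function": {
            "name": name,
            "description": description,
            "parameters": {
                "type": "object",
                "properties": properties,
                "required": required,
            },
        },
    }

TOOLS = [
    _tool("read_file", "Read a policy-approved file",
          {"path": {"type": "string"}}, ["path"]),
    _tool("write_file", "Write inside the workspace",
          {"path": {"type": "string"}, "content": {"type": "string"}},
          ["path", "content"]),
    _tool("list_files", "List a workspace directory",
          {"path": {"type": "string"}}, ["path"]),
    _tool("search_files", "Search file names under an allowed root",
          {"query": {"type": "string"}}, ["query"]),
]
# A list like this is what gets passed as the tools= argument to the
# chat call; every tool invocation the model emits is then dispatched
# through the bridge, never executed directly.
```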
bridge_policy.yaml, logging.safe_to_log, logging.never_log
Logs are useful for debugging and audit, but they are also a leak surface. The policy explicitly enumerates what may be written to disk and what may not. Source code, model output, and prompt content are never logged. Metadata is.
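An allow-list log filter in this style is a few lines. The field names below are hypothetical stand-ins for the policy's `safe_to_log` and `never_log` entries.

```python
# Hypothetical field lists mirroring logging.safe_to_log and
# logging.never_log in bridge_policy.yaml.
SAFE_TO_LOG = {"timestamp", "operation", "path", "size_bytes", "decision"}
NEVER_LOG = {"file_content", "model_output", "prompt"}

def sanitize_record(record: dict) -> dict:
    """Keep only explicitly safe metadata. Unknown fields are dropped
    too, so a new field fails closed instead of leaking by default."""
    return {k: v for k, v in record.items()
            if k in SAFE_TO_LOG and k not in NEVER_LOG}
```

Filtering on the allow list rather than the deny list is the safer default: content only reaches disk if someone explicitly declared it loggable.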
QWEN35/runtime/config/, session logs
Every approved operation is appended to a session log whose entries form a hash chain. Each entry includes the SHA of the previous entry, so any tampering with historical events invalidates every entry that came after it. The chain is the source of truth for what the model actually did.
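The hash-chain mechanics can be sketched with the standard library. This is a minimal illustration of the idea, not the enclave's actual log format.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def append_entry(chain: list, event: dict) -> list:
    """Append an event whose hash covers the previous entry's hash."""
    prev = chain[-1]["sha"] if chain else GENESIS
    payload = json.dumps(event, sort_keys=True)
    sha = hashlib.sha256((prev + payload).encode()).hexdigest()
    chain.append({"prev": prev, "event": event, "sha": sha})
    return chain

def verify_chain(chain: list) -> bool:
    """Recompute every link. Editing any historical event changes its
    hash, which breaks every subsequent entry's prev pointer."""
    prev = GENESIS
    for entry in chain:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["sha"] != expected:
            return False
        prev = entry["sha"]
    return True
```

Because each SHA folds in its predecessor, verification is a single linear pass, and a tamper anywhere surfaces as a mismatch at that entry and everything after it.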
bridge_policy.yaml, size and batch limits
Hard caps that bound any single bridge operation. These exist to prevent the model from accidentally pulling a large binary into context, and to keep batch operations from exceeding the local memory budget.
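Enforcing such caps is a pre-flight check before any bytes are read. The numbers below are illustrative; the real limits come from the policy file.

```python
# Hypothetical caps; the real numbers live in bridge_policy.yaml.
MAX_FILE_BYTES = 256 * 1024   # per-file read/write cap
MAX_BATCH_FILES = 20          # files per batch operation

def check_limits(size_bytes: int, batch_count: int = 1) -> None:
    """Fail closed before anything is pulled into model context."""
    if size_bytes > MAX_FILE_BYTES:
        raise ValueError(f"file exceeds {MAX_FILE_BYTES} byte cap")
    if batch_count > MAX_BATCH_FILES:
        raise ValueError(f"batch exceeds {MAX_BATCH_FILES} file cap")
```

Checking `size_bytes` from file metadata (rather than after reading) is what keeps an oversized binary from ever being loaded in the first place.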
D:\ProjectsHome\LLM_Enclave\QWEN35\
A small, deliberate file tree. Three top-level PowerShell scripts handle launch, setup, and shutdown. The chat CLI lives under cli/, the policy file lives under policies/, and the workspace bridge owns inbox/outbox/scratch directories that the model can address.
An honest list of things the LLM Enclave deliberately does not do. The constraints are the point. A coding assistant that can do anything is a coding assistant that can leak anything.