Founder & Technical Lead
Security-focused applied AI lab. Builder-operator across architecture, implementation, deployment, and iteration.
Continuity — Persistent context system for AI-assisted development.
- Identified the core failure mode: AI assistants produce inconsistent output because architectural decisions, constraints, and rationale are lost between sessions. Treated context as infrastructure, not convenience.
- Designed five-layer architecture (Decision Capture → Normalization & Validation → Persistent Store → Semantic Retrieval → Context Injection). Achieved 91.4% token reduction vs. naive history retrieval; sub-15ms retrieval using local SQLite + HNSW index.
- Built MCP security interception layer that validates and gates tool execution before LLM-initiated actions reach the file system or external services; an enforcement boundary, not a policy document (first sketch after this list).
- Designed human-in-the-loop correction model: corrections stored as scoped annotations with timestamps, not global overrides. Prevents contamination of unrelated future outputs and resolves conflicts deterministically (narrower scope wins within applicability window).
- Resolved a real failure mode in production: early iterations over-weighted semantic similarity, causing stale decisions to interfere with current work. Introduced explicit decision scope and applicability gating (second sketch after this list); behavior became predictable and user trust recovered.
- Security-first defaults: 100% local storage, zero telemetry, on-device vector index. Shipped to the VS Code Marketplace in October 2025; active paid users in production.
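
The interception layer's decision model, as a hedged TypeScript sketch rather than the shipped extension code; `ToolCall`, `GatePolicy`, and `gateToolCall` are illustrative names, and the allowlist and path-containment checks are representative assumptions, not the full rule set.

```typescript
// Illustrative sketch of an MCP-style tool-call gate; names and rules are assumptions.
import * as path from "path";

interface ToolCall {
  tool: string;                      // e.g. "fs.write", "http.fetch"
  args: Record<string, unknown>;
}

interface GatePolicy {
  allowedTools: Set<string>;         // explicit allowlist, deny by default
  workspaceRoot: string;             // file operations must stay inside this root
  requireApproval: Set<string>;      // tools that need human confirmation
}

type GateDecision =
  | { kind: "allow" }
  | { kind: "ask"; reason: string }  // escalate to the user before executing
  | { kind: "deny"; reason: string };

function gateToolCall(call: ToolCall, policy: GatePolicy): GateDecision {
  // Deny anything not explicitly allowlisted.
  if (!policy.allowedTools.has(call.tool)) {
    return { kind: "deny", reason: `tool ${call.tool} is not allowlisted` };
  }

  // For filesystem arguments, reject paths that resolve outside the workspace.
  const target = call.args["path"];
  if (typeof target === "string") {
    const resolved = path.resolve(policy.workspaceRoot, target);
    if (!resolved.startsWith(path.resolve(policy.workspaceRoot) + path.sep)) {
      return { kind: "deny", reason: `path escapes workspace: ${target}` };
    }
  }

  // Sensitive tools execute only after explicit human confirmation.
  if (policy.requireApproval.has(call.tool)) {
    return { kind: "ask", reason: `tool ${call.tool} requires confirmation` };
  }

  return { kind: "allow" };
}
```

Running a check like this between the model's request and the executor is what makes the boundary enforcement rather than documentation.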
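
The correction model's deterministic resolution rule (narrower scope wins within the applicability window), as a minimal TypeScript sketch; `Correction` and `resolveCorrection` are assumed names, and path-prefix scopes stand in for whatever scoping the real store uses.

```typescript
// Minimal sketch of scoped-correction resolution; field names are illustrative.
interface Correction {
  id: string;
  scope: string;          // path prefix the correction applies to, "" = global
  text: string;           // the human correction itself
  createdAt: number;      // epoch ms
  validFrom: number;      // applicability window start (epoch ms)
  validUntil: number;     // applicability window end (epoch ms)
}

// Pick the correction that applies to `filePath` at time `now`:
// narrower scope (longer matching prefix) wins; ties go to the newest entry.
function resolveCorrection(
  corrections: Correction[],
  filePath: string,
  now: number,
): Correction | undefined {
  const applicable = corrections.filter(
    (c) =>
      filePath.startsWith(c.scope) &&       // scope match; "" matches everything
      now >= c.validFrom &&
      now <= c.validUntil,                   // inside the applicability window
  );

  applicable.sort(
    (a, b) =>
      b.scope.length - a.scope.length ||     // narrower scope first
      b.createdAt - a.createdAt,             // then most recent
  );

  return applicable[0];
}
```

Because filtering and ordering depend only on stored fields, two sessions working from the same store resolve to the same correction.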
RedArchives — Tamper-evident digital evidence platform.
- Designed for environments where evidence must survive adversarial challenge, insider threats, and legal scrutiny over decades. Mission: preserve documentation of war crimes and human rights violations with cryptographic integrity that doesn't depend on trusting the platform operator.
- Built layered integrity architecture (Artifact Ingestion → Provenance Ledger → Metadata Store → Verification Layer → Presentation Layer) with explicit separation of duties between storage, verification, and presentation.
- Implemented blockchain-anchored cryptographic fingerprinting at ingestion; fingerprints cover normalized metadata as well as content, so records cannot be semantically rewritten without detection (sketched after this list). Designed for algorithm agility and re-verification paths to address long-term cryptographic decay.
- Modeled explicit adversarial threats: post-hoc tampering, chain-of-custody disputes, insider risk, temporal attacks, selective disclosure, platform trust collapse. Same integrity discipline informs how I design trust-critical AI systems (training data provenance, evaluation artifacts, feedback integrity).
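
How fingerprinting over normalized metadata can work in practice, as an illustrative TypeScript sketch; the field names, canonicalization rule, and SHA-256 choice are assumptions for the example, not RedArchives' actual ledger format.

```typescript
// Illustrative sketch: fingerprint covers content bytes AND canonical metadata.
import { createHash } from "crypto";

interface ArtifactMetadata {
  caseId: string;
  capturedAt: string;   // ISO-8601, already normalized to UTC
  location: string;
  custodian: string;
}

interface Fingerprint {
  algorithm: "sha-256"; // recorded explicitly to allow future algorithm agility
  digest: string;       // hex digest anchored to the provenance ledger
}

// Canonicalize metadata so the same facts always hash to the same bytes:
// sorted keys, no incidental key-order or whitespace differences.
function canonicalize(meta: ArtifactMetadata): string {
  const sorted = Object.keys(meta)
    .sort()
    .map((k) => [k, (meta as unknown as Record<string, string>)[k]]);
  return JSON.stringify(sorted);
}

// Because the digest binds content and normalized metadata together, rewriting
// the descriptive record (dates, custody, location) is detectable, not just
// swapping the underlying file.
function fingerprint(content: Buffer, meta: ArtifactMetadata): Fingerprint {
  const h = createHash("sha256");
  h.update(content);
  h.update(canonicalize(meta));
  return { algorithm: "sha-256", digest: h.digest("hex") };
}
```

Recording the algorithm beside the digest is what leaves room for algorithm agility and later re-verification as hash functions age.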
