The Lab · v0.2
My projects.
The deep-dive companion to niclydon.com. Where the front door is a portrait, this is the schematic — eight projects, seven agents, and the home lab they all run on. Two AMD Strix Halo machines linked over Thunderbolt 5, an M4 Mac Mini for dev work, eight always-on inference models, one ~191-table Postgres brain, zero cloud dependency for anything I actually care about.
Projects · 10 active · 15 total
- 01 ARIA: Adaptive Responsive Intelligence Assistant · Next.js · TypeScript · Swift 6 · Active
- 02 Desk: Operator console for the Nexus agent platform · Next.js 16 · React 19 · TypeScript · Active
- 03 Nexus: Unified agent platform · TypeScript · Express · PostgreSQL · Active
- 04 Forge: Home lab LLM inference gateway · FastAPI · Python · llama.cpp · Active
- 05 Homelab: Host-level infrastructure and model lifecycle contract · Python · Bash · systemd · Active
- 06 Tongs: Unrestricted permissive chat with visible thinking · Next.js 16 · React 19 · TypeScript · Active
- 07 Whittled: Whittle your photo library down to what matters · Swift 6 · SwiftUI · PhotoKit · Pre-launch
- 08 Voice Print: Media processing & person fingerprinting hub · Python · Next.js · Whisper · Active
- 09 Cairn: Daily check-in capture surface for journaling, biography, and drafts · Swift 6 · SwiftUI · PostgreSQL · Experimental
- 10 QLoRA: Personal language model fine-tuning · Python · Unsloth · QLoRA · Experimental
- 11 Broadside: Multi-brand editorial desk seeded from your own project docs · Next.js 15 · React 19 · TypeScript · Active
- 12 Jingle Family: Multi-tenant Christmas elf universe · Next.js · TypeScript · Tailwind · Active
- 13 Smithy: Image & video generation workbench · FastAPI · React · TypeScript · Experimental
- 14 Netware Blue: Retro design system inspired by 1990s Novell NetWare · TypeScript · Node.js · ESM · Active
- 15 ARIA Origin: How a chatbot became a personal operating system · Next.js · PostgreSQL · Swift · Archived
Infrastructure
The substrate that makes the projects above possible: two AMD Strix Halo machines (Furnace + Crucible) linked over Thunderbolt 5, an M4 Mac Mini for daily dev work (Anvil), and a remote KVM (Bellows). Combined: 192GB RAM, 160GB VRAM, eight always-on inference models, zero cloud dependency for personal workloads.