Project N.O.M.A.D.: Your Offline AI Survival Kit (And How to Actually Get It Running)
If you've been following the homelab and self-hosted AI scene, you've probably heard of Project N.O.M.A.D. (short for Networked Offline Machine for Autonomous Data). It's an impressive all-in-one, offline-first AI platform built by Crosstalk Solutions, designed to give you a full AI assistant, knowledge base, and document processing engine that works even when the internet doesn't. Think Wikipedia mirrors, Stack Exchange snapshots, medical references, and a fully local LLM, all in a Docker stack you run on your own hardware.
This post covers what NOMAD is, how to install it, how to get it behind a reverse proxy with authentication, and (for those of you with AMD Ryzen mini PCs) how to unlock GPU acceleration so your AI assistant actually runs at a useful speed.
What Is Project NOMAD?
At its core, NOMAD is a web application that bundles:
- A local AI assistant powered by Ollama (runs LLMs like Llama 3 locally)
- A RAG (Retrieval-Augmented Generation) engine that indexes ZIM files (offline Wikipedia, Stack Exchange, medical references, etc.)
- File embeddings for uploading and querying your own documents
- A self-updating Docker stack with a sidecar updater container
It's designed for preppers, remote deployments, ships, field teams, or anyone who wants an AI that doesn't phone home or require a cloud subscription. And it runs on modest hardware: a mini PC with 16GB of RAM and a fast SSD is plenty to get started.
Installation
Prerequisites
- A Linux machine (Ubuntu/Debian/Zorin OS)
- curl installed
- sudo access
One-Line Install
sudo apt-get update && sudo apt-get install -y curl && \
curl -fsSL https://raw.githubusercontent.com/Crosstalk-Solutions/project-nomad/refs/heads/main/install/install_nomad.sh \
-o install_nomad.sh && sudo bash install_nomad.sh
The installer will:
- Detect your OS and install Docker if not present
- Pull all required images (main app, MySQL, Redis, Dozzle log viewer, disk collector, sidecar updater)
- Configure and start the stack
The web UI will be available at http://<your-ip>:8080 once everything is healthy.
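Once the installer finishes, a quick sanity check can confirm the stack is healthy before you open a browser (a sketch, assuming the default install path of /opt/project-nomad):

```shell
# Show the state of every container in the stack (all should be running/healthy)
cd /opt/project-nomad 2>/dev/null && sudo docker compose ps

# Confirm the web UI answers on port 8080
if curl -fsS -o /dev/null http://localhost:8080; then
  echo "NOMAD UI is up"
else
  echo "NOMAD UI is not responding yet"
fi
```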
Port Conflict: MySQL on 3306
If you have a native MySQL instance already running (common on machines running Samba, lighttpd, or other services), the installer will fail with:
failed to bind host port 0.0.0.0:3306/tcp: address already in use
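Before remapping anything, it can help to confirm which process actually owns port 3306 (a quick sketch using ss; on older systems netstat -ltnp does the same job):

```shell
# List whatever is listening on TCP 3306; add sudo to see the owning process name
ss -ltnp 'sport = :3306' 2>/dev/null \
  || echo "ss not available; try: sudo netstat -ltnp | grep 3306"
```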
Fix: Edit /opt/project-nomad/compose.yml and remap the MySQL host port:
ports:
- "3307:3306" # Change 3306 to 3307 (or any free port)
The NOMAD application connects to MySQL internally by container name (mysql:3306), so this change only affects the host binding; nothing inside the stack breaks.
Then bring it up:
cd /opt/project-nomad && sudo docker compose up -d
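If you'd rather script the edit than open compose.yml by hand, the remap can be done non-interactively (a sketch, assuming the stock compose.yml writes the binding as "3306:3306" with quotes):

```shell
COMPOSE=/opt/project-nomad/compose.yml

if [ -f "$COMPOSE" ]; then
  # Keep a backup, then change only the host side of the MySQL port binding
  sudo cp "$COMPOSE" "$COMPOSE.bak"
  sudo sed -i 's/"3306:3306"/"3307:3306"/' "$COMPOSE"
  cd /opt/project-nomad && sudo docker compose up -d
else
  echo "compose.yml not found at $COMPOSE"
fi
```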
Reverse Proxy Setup (NPMplus + TinyAuth)
Running NOMAD on a raw IP and port is fine for local use, but if you want to access it remotely via a domain name with HTTPS and authentication, you'll need a reverse proxy.
This guide uses NPMplus (Nginx Proxy Manager Plus) on TrueNAS SCALE and TinyAuth for authentication.
Key Configuration Notes
After extensive troubleshooting, here is the exact working configuration:
Details Tab:
- Forward Hostname: <your-nomad-machine-ip>
- Forward Port: 8080
- Scheme: http://
- ✅ Disable Request Buffering
- ✅ Disable Response Buffering
Custom Locations Tab (add one location):
- Location: /tinyauth
- Scheme: http://
- Forward Hostname: <your-tinyauth-ip>
- Forward Port: 30310
- Gear icon on that row:
rewrite ^ /api/auth/nginx break;
internal;
proxy_method GET;
proxy_pass_request_body off;
proxy_set_header Content-Length "";
proxy_set_header x-forwarded-proto $scheme;
proxy_set_header x-forwarded-host $http_host;
proxy_set_header x-forwarded-uri $request_uri;
Gear Icon (top right → Custom Nginx Configuration):
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
auth_request /tinyauth;
error_page 401 =302 https://auth.yourdomain.com/login?redirect_uri=$scheme://$host$request_uri;
Important Gotchas
- Do NOT use NPMplus's built-in TinyAuth dropdown: as of NPMplus 2026-02-19-r3, the native TinyAuth template generates a broken config with an incomplete proxy_pass URL and a malformed redirect URI. Use the manual method above instead.
- The $upstream variable leak: NPMplus sets $upstream to the proxy target in each location block. When TinyAuth is configured via a Custom Location row, its $upstream value (your TinyAuth IP) can bleed into the parent location / block, causing NOMAD requests to be routed to TinyAuth's IP instead. Using proxy_pass http://10.x.x.x:30310/api/auth/nginx with a hardcoded IP in the gear icon config avoids this entirely.
- Disable Response Buffering is required: without it, NOMAD's streaming responses (AI completions, file processing) will stall or fail with ERR_HTTP2_PROTOCOL_ERROR.
- Remove any direct port forwards for NOMAD in your router/firewall: if you have a port forward for port 8080 pointing at your NOMAD machine, it will intercept traffic before it reaches NPMplus. All external traffic should go through ports 80/443 → NPMplus only.
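To make the $upstream workaround concrete, here is what the gear-icon config for the /tinyauth location row can look like with a hardcoded target. This is a sketch: substitute your actual TinyAuth IP and port for the 10.x.x.x placeholder, and note that with the full URI in proxy_pass the rewrite directive is no longer needed.

```nginx
internal;
proxy_method GET;
# Hardcoding the target here prevents NPMplus's $upstream variable from
# leaking the TinyAuth IP into the parent "location /" block.
proxy_pass http://10.x.x.x:30310/api/auth/nginx;
proxy_pass_request_body off;
proxy_set_header Content-Length "";
proxy_set_header x-forwarded-proto $scheme;
proxy_set_header x-forwarded-host $http_host;
proxy_set_header x-forwarded-uri $request_uri;
```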
GPU Acceleration: AMD ROCm Setup (Discrete GPUs & iGPUs)
Out of the box, NOMAD's AI assistant runs on the CPU. On hardware like a mini PC with an AMD Ryzen APU (Radeon 780M, 890M, etc.) or discrete cards like the Radeon RX 6400, you can achieve 30–55 tokens/second by enabling ROCm AMD GPU passthrough.
1. BIOS Settings First
Before touching software, ensure your GPU has enough dedicated VRAM. If the allocation is locked at 512MB, the AI model will fail to load onto the GPU and silently fall back to the CPU.
1. Reboot and enter BIOS (F2 or Del).
2. Find UMA Frame Buffer Size.
3. Set it to at least 4GB, ideally 8GB (Auto works on most boards, but a manual value is safer).
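After rebooting, you can confirm the new allocation from Linux before touching NOMAD. This is a sketch that assumes the amdgpu driver is loaded; it reads the dedicated VRAM size from sysfs (512MB would show as 536870912 bytes):

```shell
# Report dedicated VRAM in bytes for each GPU the amdgpu driver exposes
for f in /sys/class/drm/card*/device/mem_info_vram_total; do
  if [ -r "$f" ]; then
    printf '%s: %s bytes\n' "$f" "$(cat "$f")"
  fi
done
```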
2. Host-Level Preparation & Drivers
NOMAD uses Ollama as its AI backend, which requires the ROCm stack to communicate with AMD hardware.
Install Drivers (Ubuntu/Debian/Zorin):
# Add your user to the required hardware groups
sudo usermod -a -G render,video $USER
# Install AMD GPU drivers and ROCm
sudo apt update
sudo apt install -y amdgpu-dkms rocm-hip-sdk
# Force-unlock hardware nodes for the container
sudo chmod 666 /dev/kfd /dev/dri/renderD*
[!IMPORTANT] A reboot is mandatory after this step for group changes and drivers to take effect.
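After the reboot, a quick way to confirm ROCm actually sees the GPU is to look for a gfx agent in rocminfo's output (a sketch, assuming the default /opt/rocm install path):

```shell
if [ -x /opt/rocm/bin/rocminfo ]; then
  # An agent name like "gfx1030" or "gfx1103" confirms the GPU is visible to ROCm
  /opt/rocm/bin/rocminfo | grep -i 'gfx' || echo "no gfx agent found"
else
  echo "rocminfo not found; did the ROCm install and reboot complete?"
fi
```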
3. Enabling GPU in NOMAD
There are two ways to enable the GPU, depending on your hardware.
Option A: The "Standard" Re-detection (iGPUs)
If you are using a modern Ryzen APU, you can often let NOMAD handle the heavy lifting:
1. Open your NOMAD dashboard (http://localhost:8080).
2. Go to Settings → Apps → AI Assistant.
3. Click Force Reinstall. NOMAD will detect the ROCm libraries and recreate the container with the required flags: --device /dev/kfd --device /dev/dri.
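You can verify that the recreated container actually received the device mappings (a sketch; the container name nomad_ollama matches the one used later in this post):

```shell
if command -v docker >/dev/null 2>&1; then
  # Should print /dev/kfd and /dev/dri entries if GPU passthrough took effect
  sudo docker inspect nomad_ollama \
    --format '{{range .HostConfig.Devices}}{{.PathOnHost}} {{end}}' \
    || echo "container nomad_ollama not found"
else
  echo "docker not found"
fi
```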
Option B: The "Hard Override" (Discrete Cards & SFF PCs)
If you are using a discrete card (like the RX 6400) or an older SFF PC (like the Lenovo M720q) that doesn't support PCIe Atomics, the AI engine may report "0B VRAM." Run this block to manually recreate the container with "Magic" overrides:
# Stop and remove the existing 'blind' container
sudo docker stop nomad_ollama && sudo docker rm nomad_ollama
# Launch with overrides for AMD hardware
sudo docker run -d \
--name nomad_ollama \
--restart unless-stopped \
--network project-nomad_default \
-v ollama:/root/.ollama \
-v /opt/rocm:/opt/rocm \
-p 11434:11434 \
--device /dev/kfd:/dev/kfd \
--device /dev/dri:/dev/dri \
--security-opt seccomp=unconfined \
--group-add 44 \
--group-add 110 \
-e OLLAMA_HOST=0.0.0.0 \
-e OLLAMA_ORIGINS="*" \
-e HSA_OVERRIDE_GFX_VERSION=10.3.0 \
-e HSA_IGNORE_ATOMIC_CHECK=1 \
-e HSA_ENABLE_SDMA=0 \
-e LD_LIBRARY_PATH=/opt/rocm/lib \
ollama/ollama:rocm
4. Explaining the "Secret Sauce" Flags
HSA_OVERRIDE_GFX_VERSION=10.3.0: Masks the card as a standard RDNA2 chip for ROCm compatibility.
HSA_IGNORE_ATOMIC_CHECK=1: Critical Fix. Forces ROCm to use VRAM even if the motherboard fails the "Atomic" hardware check.
HSA_ENABLE_SDMA=0: Prevents driver timeouts/crashes on RX 6400/6500 series cards.
OLLAMA_ORIGINS="*": Fixes "Status 400" errors by allowing the dashboard to talk to the AI API.
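To double-check that these overrides actually landed inside the running container, you can list its environment (a sketch; falls back to a message if the container isn't up):

```shell
# Print the HSA_* and OLLAMA_* variables the container was started with
sudo docker exec nomad_ollama env 2>/dev/null | grep -E '^(HSA_|OLLAMA_)' \
  || echo "nomad_ollama is not running (or docker/sudo unavailable)"
```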
5. Final Setup & Verification
Dashboard Config: In NOMAD, go to Settings → AI Assistant.
Set URL: Change the Remote Ollama URL to http://nomad_ollama:11434
Save & Test: Click Save. If you see "Remote Disconnected," perform a hard refresh (Ctrl + F5) to clear the browser's cached CORS responses.
To Verify:
Run watch -n 1 /opt/rocm/bin/rocm-smi in your terminal while the AI is generating. You should see the GPU% spike and VRAM% fill up.
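Beyond rocm-smi, you can also hit Ollama's HTTP API directly to confirm the engine is answering; /api/tags is a standard Ollama endpoint that lists installed models:

```shell
# Returns a JSON list of installed models if the Ollama container is reachable
curl -fsS http://localhost:11434/api/tags || echo "Ollama API not reachable on :11434"
```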
Final Thoughts
Project NOMAD is one of the most ambitious self-hosted AI projects available right now. The installation is straightforward, the ZIM knowledge base integration is genuinely useful, and with an AMD Ryzen mini PC you can get real-world AI performance without spending anything on cloud APIs.
The reverse proxy setup has some rough edges (particularly the NPMplus/TinyAuth integration bugs) but once you have the working config above in place, it's solid and fast.
If you're building a homelab, a go-bag server, or just want an AI that works when the internet doesn't, NOMAD is worth the setup time.
Created & Maintained by Pacific Northwest Computers
📞 Pacific Northwest Computers offers Remote & Onsite Support Across:
SW Washington including Vancouver WA, Battle Ground WA, Camas WA, Washougal WA, Longview WA, Kelso WA, and Portland OR

