NAS Deployment Guide

A friendly, follow-along guide for deploying the DIY NAS. Follow each phase in order, checking off steps as you go. Total estimated time: 1.5–2 hours.


What You'll Need

Hardware on Hand

  • [ ] NAS (assembled, all 3 drives installed: SSD + Purple + Red)
  • [ ] Ethernet cable (goes to MokerLink switch port 4)
  • [ ] Power cable (plugged into UPS)
  • [ ] USB keyboard + monitor (for initial setup only)
  • [ ] Debian 13 USB boot drive (already prepared)

Credentials (Have These Ready)

  • [ ] Tailscale auth key (generated from Headscale — see "Before Deployment Day")
  • [ ] Samba password (generated by prep script — saved in Vaultwarden)
  • [ ] Restic password (generated by prep script — saved in Vaultwarden)
  • [ ] Root password you'll set during Debian install
  • [ ] User (augusto) password you'll set during Debian install

On Your MacBook

  • [ ] SSH access to VPS (ssh vps) working
  • [ ] SSH tunnel to OPNsense working (ssh -L 8443:192.168.0.1:443 root@192.168.0.1)
  • [ ] This guide open (Markdown or HTML version)

Before Deployment Day

Do these from your MacBook, remotely. They prepare the network and secrets so deployment day goes smoothly.

Step 1: Generate Secrets with the Prep Script

Run this from your homelab repo on the MacBook:

bash scripts/nas-prep-env.sh

What this does: Generates a random Samba password, creates .env files for the NAS Docker stacks, and prompts you for a Restic backup password. It puts everything in the right place so you just copy files on deployment day.

  • [ ] Script ran successfully
  • [ ] Samba password saved in Vaultwarden
  • [ ] Restic password saved in Vaultwarden
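For context, the Samba-password step inside the script presumably boils down to something like this sketch (the variable names and exact recipe are assumptions, not the script's actual contents):

```shell
# Hypothetical sketch of the secret-generation step; the real script may differ.
# 24 random bytes, base64-encoded, gives a 32-character password:
SAMBA_PASSWORD=$(openssl rand -base64 24)
printf 'SAMBA_USER=augusto\nSAMBA_PASSWORD=%s\n' "$SAMBA_PASSWORD"
```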

Step 2: Generate a Tailscale Auth Key

ssh vps
docker exec headscale headscale preauthkeys create --expiration 1h

What this does: Creates a one-time key that lets the NAS join your Tailscale network. The key expires in 1 hour, so generate this right before you start Phase 4.

  • [ ] Auth key saved (Vaultwarden or a note — you'll type it on the NAS)

Timing tip: This key expires in 1 hour. If you're doing prep the night before, skip this step and do it right before Phase 4 on deployment day.

Step 3: Add DHCP Reservation in OPNsense

  1. Open an SSH tunnel: ssh -L 8443:192.168.0.1:443 root@192.168.0.1
  2. Open https://localhost:8443 in your browser
  3. Go to Services → DHCPv4 → LAN
  4. Add a reservation: NAS MAC address → 192.168.0.12

What this does: Tells OPNsense to always give the NAS the same IP address (192.168.0.12) every time it connects. You can find the MAC address on a sticker on the NAS motherboard.

  • [ ] DHCP reservation added for 192.168.0.12

Step 4: Add DNS Entry in Pi-hole

Add a custom DNS entry so nas.home resolves to 192.168.0.12. Edit Pi-hole's config:

ssh docker-vm
docker exec -it pihole bash

Inside the Pi-hole container, edit /etc/pihole/pihole.toml and add to the dns.hosts array:

[[dns.hosts]]
  addr = "192.168.0.12"
  names = ["nas.home", "syncthing.home"]

Then reload:

pihole reloaddns

  • [ ] DNS entry added for nas.home and syncthing.home

Step 5: Flash the Debian USB (if not already done)

sudo dd if=debian.iso of=/dev/sdX bs=4M status=progress

What this does: Writes the Debian installer to a USB stick so the NAS can boot from it.

  • [ ] Debian 13 netinst ISO on USB drive
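If you want extra confidence the write succeeded, the usual check is to read the data back and compare. Here is that idea dry-run on scratch files; against the real USB you would compare debian.iso with the first ISO-sized chunk of /dev/sdX:

```shell
# Verify-after-write, demonstrated on scratch files so it is safe to dry-run.
SRC=$(mktemp) && DST=$(mktemp)
head -c 1048576 /dev/urandom > "$SRC"       # stand-in for debian.iso
dd if="$SRC" of="$DST" bs=4096 2>/dev/null  # stand-in for the dd to /dev/sdX
cmp "$SRC" "$DST" && echo "write verified"
rm -f "$SRC" "$DST"
```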

Phase 1 of 8: Install Debian (~25 min)

This phase installs the operating system on the NAS's SSD drive.

Boot from USB

  1. Connect the USB keyboard, monitor, and Debian USB to the NAS
  2. Power on the NAS
  3. Press F8 (or DEL) to open the boot menu
  4. Select the USB drive

  • [ ] NAS booted into Debian installer

Walk Through the Installer

Use these settings when prompted:

| Setting | Value |
|---|---|
| Language | English |
| Location | Paraguay |
| Hostname | nas |
| Domain | (leave blank) |
| Root password | (set a strong one — save in Vaultwarden) |
| Username | augusto |
| Timezone | America/Asuncion |
| Partitioning | Use entire disk → select the Lexar SSD (240GB) |
| Software | SSH server + standard utilities only (no desktop!) |

What this does: Installs a minimal Debian server on the SSD. The two data drives (Purple and Red) are left untouched — we'll set those up in Phase 2.

  • [ ] Debian installer completed
  • [ ] NAS rebooted into Debian (remove USB when prompted)

First Boot Setup

Log in as root at the console and run:

apt update && apt upgrade -y

What this does: Downloads and installs the latest security patches.

  • [ ] System updated

apt install -y sudo curl wget git vim htop tmux

What this does: Installs essential tools you'll need for the rest of the setup.

  • [ ] Essential packages installed

usermod -aG sudo augusto

What this does: Gives your user account (augusto) permission to run admin commands with sudo.

  • [ ] User added to sudo group

Now reboot to apply everything:

reboot

Trouble? If the NAS doesn't get IP 192.168.0.12, check the OPNsense DHCP reservation (Step 3 above). As a fallback, set a static IP by editing /etc/network/interfaces as root:

auto enp0s25
iface enp0s25 inet static
  address 192.168.0.12
  netmask 255.255.255.0
  gateway 192.168.0.1
  dns-nameservers 192.168.0.10

Then reboot.


Phase 2 of 8: Set Up the Drives (~15 min)

This phase prepares the two data drives (Purple 2TB and Red 8TB) so the NAS can store files on them.

From here on, you can work via SSH from your MacBook: ssh [email protected]

Identify Your Drives

lsblk -f

What this does: Lists all drives and their partitions. You should see three drives:

  • sda — Lexar 240GB SSD (your boot drive, already has Debian)
  • sdb — WD Purple 2TB (for Frigate camera recordings)
  • sdc — WD Red Plus 8TB (for media, backups, everything else)

  • [ ] All three drives visible in lsblk

Trouble? If a drive is missing, power off the NAS and check the SATA cable. Then power back on and try again.

Check Drive Health

sudo apt install -y smartmontools
sudo smartctl -a /dev/sdb
sudo smartctl -a /dev/sdc

What this does: Runs a health check on each data drive. Look for SMART overall-health self-assessment test result: PASSED. If a drive shows errors, don't use it.

  • [ ] Purple drive health: PASSED
  • [ ] Red drive health: PASSED

Partition and Format

sudo fdisk /dev/sdb
# Type: n (new), p (primary), 1, press Enter twice for defaults, w (write)
sudo mkfs.ext4 -L purple /dev/sdb1

What this does: Creates a single partition on the Purple drive and formats it with the ext4 filesystem, labeling it "purple".

  • [ ] Purple drive partitioned and formatted

sudo fdisk /dev/sdc
# Type: n (new), p (primary), 1, press Enter twice for defaults, w (write)
sudo mkfs.ext4 -L red8 /dev/sdc1

  • [ ] Red drive partitioned and formatted

Note: If the drives already have partitions from previous use, fdisk will warn you. Delete the old partitions first with d, then create the new one with n.
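If you'd rather avoid fdisk's interactive dialogue, sfdisk (also part of util-linux) does the same non-interactively. The sketch below targets a scratch image file so it can be dry-run safely; on the NAS you would point it at /dev/sdb or /dev/sdc instead:

```shell
# Non-interactive equivalent of the fdisk dialogue above, using sfdisk.
IMG=$(mktemp)
truncate -s 64M "$IMG"            # scratch "disk" to practice on
echo 'type=83' | sfdisk "$IMG"    # one Linux partition filling the disk
sfdisk -d "$IMG"                  # dump the table to confirm
rm -f "$IMG"
```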

Create Mount Points

sudo mkdir -p /mnt/{purple,red8}

What this does: Creates the folders where the drives will be "attached" to the filesystem.

  • [ ] Mount points created

Configure Automatic Mounting (fstab)

Get the drive UUIDs:

sudo blkid

What this does: Shows the unique ID of each partition. You'll need the UUIDs for sdb1 and sdc1.

Now edit fstab:

sudo nano /etc/fstab

Add these two lines at the end (replace the UUIDs with your actual values from blkid):

UUID=<purple-uuid>  /mnt/purple  ext4  defaults,noatime  0  2
UUID=<red8-uuid>    /mnt/red8    ext4  defaults,noatime  0  2

What this does: Tells Debian to automatically mount these drives every time the NAS starts. The noatime option improves performance by not tracking every file access time.

  • [ ] fstab entries added
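If you'd rather not copy UUIDs by hand, a helper along these lines can format blkid's output into fstab lines. This is a hypothetical convenience, shown on a captured sample line; on the NAS you would pipe in the real output of sudo blkid /dev/sdb1 /dev/sdc1. It assumes the mount point matches the label (purple, red8), which is true here:

```shell
# Turn a blkid line into a ready-to-paste fstab entry (sample input shown).
echo '/dev/sdb1: LABEL="purple" UUID="1f2e3d4c-0000-0000-0000-000000000000" TYPE="ext4"' |
awk '{
  for (i = 1; i <= NF; i++) {
    if ($i ~ /^UUID=/)  { uuid  = $i; gsub(/UUID=|"/,  "", uuid)  }
    if ($i ~ /^LABEL=/) { label = $i; gsub(/LABEL=|"/, "", label) }
  }
  printf "UUID=%s  /mnt/%s  ext4  defaults,noatime  0  2\n", uuid, label
}'
```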

Mount and Verify

sudo mount -a
df -h

What this does: Mounts everything in fstab right now (without rebooting). You should see /mnt/purple (~2TB) and /mnt/red8 (~8TB) in the output.

  • [ ] Both drives mounted and showing correct sizes

Create Directory Structure

# Folders on the Red 8TB drive
sudo mkdir -p /mnt/red8/{media,downloads,data,sync,backup}

# Folder on the Purple 2TB drive
sudo mkdir -p /mnt/purple/frigate

# Create convenient shortcuts in /srv
sudo mkdir -p /srv
sudo ln -s /mnt/red8/media /srv/media
sudo ln -s /mnt/red8/downloads /srv/downloads
sudo ln -s /mnt/red8/data /srv/data
sudo ln -s /mnt/red8/sync /srv/sync
sudo ln -s /mnt/red8/backup /srv/backup
sudo ln -s /mnt/purple/frigate /srv/frigate

# Set ownership so your user can write to everything
sudo chown -R 1000:1000 /mnt/red8 /mnt/purple /srv

What this does: Creates the folder structure for all your data, then makes shortcuts in /srv so services always use the same paths regardless of which physical drive the data lives on.

  • [ ] Directories created
  • [ ] Symlinks in /srv created
  • [ ] Ownership set to user 1000
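Equivalently, the directory-and-symlink step can be written as a loop. The sketch below targets a throwaway temp directory so it can be dry-run safely; on the NAS the root would be / and the commands need sudo:

```shell
# The /srv layout as a loop, built under a temp root for a safe dry-run.
ROOT=$(mktemp -d)
mkdir -p "$ROOT/mnt/purple/frigate" "$ROOT/srv"
for d in media downloads data sync backup; do
  mkdir -p "$ROOT/mnt/red8/$d"
  ln -s "$ROOT/mnt/red8/$d" "$ROOT/srv/$d"
done
ln -s "$ROOT/mnt/purple/frigate" "$ROOT/srv/frigate"
ls -l "$ROOT/srv"
```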

Phase 3 of 8: Install Docker (~10 min)

Docker runs all the NAS services (Samba, Syncthing, Restic) in containers.

curl -fsSL https://get.docker.com | sudo sh

What this does: Downloads and installs Docker using the official installer script.

  • [ ] Docker installed

sudo usermod -aG docker augusto

What this does: Lets your user run Docker commands without sudo.

  • [ ] User added to docker group

Now log out and back in so the group change takes effect:

exit

SSH back in: ssh [email protected]

docker --version
docker compose version

What this does: Verifies Docker and Docker Compose are installed and working.

  • [ ] docker --version shows a version
  • [ ] docker compose version shows a version

docker run hello-world

What this does: Runs a tiny test container to confirm Docker can download images and run containers.

  • [ ] hello-world ran successfully

Phase 4 of 8: Connect to Tailscale (~5 min)

Tailscale lets you access the NAS from anywhere (not just your home network) through your self-hosted Headscale server.

Reminder: If you generated the auth key the night before, it has expired. Generate a fresh one now (see "Before Deployment Day" Step 2).

curl -fsSL https://tailscale.com/install.sh | sh

What this does: Installs the Tailscale client on the NAS.

  • [ ] Tailscale installed

sudo tailscale up --login-server=https://hs.cronova.dev --authkey=<key>

Replace <key> with your auth key from Headscale.

What this does: Connects the NAS to your private Tailscale network via your Headscale server.

  • [ ] Tailscale connected

tailscale status
tailscale ip

What this does: Shows the NAS's Tailscale IP address.

  • [ ] Tailscale IP noted: _______________

Note: The NAS Tailscale IP is 100.82.77.97 (assigned by Headscale).


Phase 5 of 8: Set Up NFS (~10 min)

NFS (Network File System) lets the Docker VM mount the NAS's Frigate folder directly, as if it were a local drive. This is how Frigate will store camera recordings on the NAS.

Option A: Ansible Setup (Recommended)

Run this from your MacBook (not the NAS):

cd ~/homelab/ansible

# Test that Ansible can reach the NAS
ansible -i inventory.yml nas -m ping

# Run the NFS setup playbook
ansible-playbook -i inventory.yml playbooks/nfs-server.yml -l nas

What this does: Automatically installs NFS, configures which folders are shared and to whom, opens the firewall ports, and starts the NFS service. One command does everything.

  • [ ] Ansible playbook completed successfully

Trouble? If Ansible can't reach the NAS, make sure the ansible_host in inventory.yml matches the NAS's actual Tailscale IP (which you just learned in Phase 4). Update it if needed.

Option B: Manual Setup

If Ansible isn't working, do it by hand on the NAS:

sudo apt install -y nfs-kernel-server
sudo nano /etc/exports

Add these lines:

/srv/frigate    192.168.0.10(rw,sync,no_subtree_check,no_root_squash)
/srv/media      192.168.0.0/24(ro,sync,no_subtree_check)
/srv/downloads  192.168.0.0/24(rw,sync,no_subtree_check)
/srv/backup     192.168.0.0/24(rw,sync,no_subtree_check)

What this does: Defines which folders are shared and who can access them. Frigate recordings are only accessible from the Docker VM (192.168.0.10). Media is read-only for everyone on the network. Downloads and backups are read-write.

sudo exportfs -ra
sudo exportfs -v
sudo systemctl enable nfs-kernel-server
sudo systemctl start nfs-kernel-server

What this does: Applies the NFS configuration, shows what's shared, and makes NFS start automatically on boot.

  • [ ] NFS exports configured

Open the firewall:

sudo ufw allow from 192.168.0.0/24 to any port 111
sudo ufw allow from 192.168.0.0/24 to any port 2049

What this does: Opens the two ports NFS needs (port mapper + NFS itself) for your local network.

  • [ ] Firewall rules added
  • [ ] NFS service running

Phase 6 of 8: Clone the Repo (~5 min)

Get the homelab configuration files onto the NAS.

sudo mkdir -p /opt/homelab
sudo chown augusto:augusto /opt/homelab
git clone ssh://git@localhost:2222/augusto/homelab.git /opt/homelab/repo

What this does: Creates the /opt/homelab directory and clones your homelab repo from Forgejo (running locally on the NAS). This gives the NAS access to all the Docker Compose files it needs.

  • [ ] Repo cloned to /opt/homelab/repo

Trouble? If the git clone fails, check that Forgejo is running on the NAS: docker ps | grep forgejo.


Phase 7 of 8: Start the Services (~15 min)

Storage Stack (Samba + Syncthing)

cd /opt/homelab/repo/docker/fixed/nas/storage

Copy the .env file that the prep script generated (from your MacBook):

# Run this on your MacBook:
scp ~/homelab/docker/fixed/nas/storage/.env [email protected]:/opt/homelab/repo/docker/fixed/nas/storage/.env

Or create it manually on the NAS:

cp .env.example .env
nano .env

Fill in the values:

TZ=America/Asuncion
PUID=1000
PGID=1000
SAMBA_USER=augusto
SAMBA_PASSWORD=<the-password-from-vaultwarden>
MEDIA_PATH=/srv/media
DATA_PATH=/srv/data
DOWNLOADS_PATH=/srv/downloads
BACKUP_PATH=/srv/backup
SYNC_PATH=/srv/sync

Start the services:

docker compose up -d
docker compose ps
docker compose logs -f

What this does: Starts Samba (file sharing for Windows/Mac) and Syncthing (file sync) in Docker containers. The ps command shows their status, and logs -f streams their output so you can watch for errors (press Ctrl+C to stop watching).

  • [ ] .env file in place
  • [ ] docker compose up -d succeeded
  • [ ] Both containers running (check with docker compose ps)
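For orientation, a compose file consuming those .env values might look roughly like the sketch below. This is not the repo's actual file; the image choices and volume mappings are illustrative assumptions:

```yaml
# Sketch only: the real file lives in the repo at docker/fixed/nas/storage.
services:
  samba:
    image: dperson/samba            # assumed image choice
    environment:
      - TZ=${TZ}
      - USER=${SAMBA_USER};${SAMBA_PASSWORD}
    volumes:
      - ${MEDIA_PATH}:/media
      - ${DATA_PATH}:/data
    ports:
      - "445:445"
    restart: unless-stopped
  syncthing:
    image: syncthing/syncthing      # official image
    environment:
      - PUID=${PUID}
      - PGID=${PGID}
    volumes:
      - ${SYNC_PATH}:/var/syncthing
    ports:
      - "8384:8384"
    restart: unless-stopped
```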

Backup Stack (Restic REST)

cd /opt/homelab/repo/docker/fixed/nas/backup

Create the backup directory:

sudo mkdir -p /srv/backup/restic
sudo chown 1000:1000 /srv/backup/restic

Copy the files from your MacBook:

# Run on MacBook:
scp ~/homelab/docker/fixed/nas/backup/.env [email protected]:/opt/homelab/repo/docker/fixed/nas/backup/.env
scp ~/homelab/docker/fixed/nas/backup/htpasswd [email protected]:/opt/homelab/repo/docker/fixed/nas/backup/htpasswd

Or create them manually:

cp .env.example .env
nano .env
# Set: BACKUP_DATA=/srv/backup/restic

sudo apt install -y apache2-utils
htpasswd -B -c htpasswd augusto
# Enter the Restic password from Vaultwarden

Start the backup service:

docker compose up -d
docker compose ps

Test that it's responding:

curl http://localhost:8000/

What this does: Starts the Restic REST server, which provides an HTTP interface for Restic backups. Other machines on your network can back up to this server.

  • [ ] Backup directory created
  • [ ] .env and htpasswd files in place
  • [ ] Restic REST container running
  • [ ] curl returns a response (not an error)
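For context on how other machines will use this server: a client with restic installed points at the REST URL roughly like this sketch. The /macbook sub-path is an arbitrary choice, and <restic-password> stands in for the password saved in Vaultwarden (note that in restic the HTTP auth password and the repository encryption password are separate values, even if you reuse the same secret):

```shell
# Client-side sketch (run on another machine that has restic installed).
export RESTIC_REPOSITORY="rest:http://augusto:<restic-password>@192.168.0.12:8000/macbook"
export RESTIC_PASSWORD="<restic-password>"
# First time only:
#   restic init
# Then back up:
#   restic backup ~/Documents
echo "$RESTIC_REPOSITORY"
```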

Phase 8 of 8: Verify Everything Works (~15 min)

Time to test every service and make sure it all works.

Network Connectivity

# From your MacBook — local network:
ping 192.168.0.12

# From your MacBook — Tailscale:
ping <nas-tailscale-ip>
ssh augusto@<nas-tailscale-ip>

  • [ ] NAS reachable via local IP (192.168.0.12)
  • [ ] NAS reachable via Tailscale IP
  • [ ] SSH works over Tailscale

Samba Shares

On your Mac, open Finder and press Cmd+K (Go → Connect to Server):

smb://192.168.0.12/media

Enter user augusto and the Samba password from Vaultwarden.

What this does: Connects to the NAS's shared media folder from your Mac. You should be able to browse (read-only for media).

  • [ ] Can connect to Samba share from Finder
  • [ ] Can see the media folder

NFS (Test from Docker VM)

# SSH to Docker VM
ssh docker-vm

sudo apt install -y nfs-common
sudo mkdir -p /mnt/nas/frigate
sudo mount -t nfs 192.168.0.12:/srv/frigate /mnt/nas/frigate
ls /mnt/nas/frigate

What this does: Mounts the NAS's Frigate folder on the Docker VM. This is how Frigate will store camera recordings.

  • [ ] NFS mount works from Docker VM
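The mount command above is one-off; if you later want the Docker VM to remount it automatically after a reboot, a line like this in the VM's /etc/fstab is the standard approach (the _netdev option delays mounting until the network is up):

```
192.168.0.12:/srv/frigate  /mnt/nas/frigate  nfs  defaults,_netdev  0  0
```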

Syncthing

Access the Syncthing web UI. Either:

  • Direct (local network): http://192.168.0.12:8384
  • SSH tunnel: ssh -L 8384:localhost:8384 augusto@<nas-tailscale-ip> then http://localhost:8384

  • [ ] Syncthing web UI loads

Restic REST

curl http://192.168.0.12:8000/

  • [ ] Restic REST responds

After Deployment

These tasks can be done later, from your MacBook.

Add Uptime Kuma Monitors

Open the Uptime Kuma web UI and add these monitors:

| Service | Type | Target |
|---|---|---|
| NAS SSH | TCP | 192.168.0.12:22 |
| Samba | TCP | 192.168.0.12:445 |
| Syncthing | HTTP | http://192.168.0.12:8384 |
| Restic REST | HTTP | http://192.168.0.12:8000 |
| NFS | TCP | 192.168.0.12:2049 |

Tailscale IP Reference

The NAS Tailscale IP is 100.82.77.97 (assigned by Headscale). All repo files have been updated with this IP.

Set BIOS Options

Next time you have a keyboard/monitor connected to the NAS:

  • Restore on AC Power Loss → Power On (auto-boots after a power outage)
  • Wake on LAN → Enabled (optional, lets you power on remotely)

If Something Goes Wrong

Phase 1 — Debian won't install

  • Make sure you're booting from the USB (F8 boot menu)
  • Try a different USB port
  • Re-flash the USB: sudo dd if=debian.iso of=/dev/sdX bs=4M status=progress

Phase 2 — Drive not showing up

lsblk
sudo fdisk -l
  • Check SATA cables are firmly connected
  • Try a different SATA port on the motherboard
  • Check drive health: sudo smartctl -H /dev/sdb

Phase 2 — Drive won't mount after reboot

  • Check fstab syntax: cat /etc/fstab
  • Verify UUIDs match: sudo blkid
  • Try manual mount: sudo mount /dev/sdb1 /mnt/purple

Phase 3 — Docker won't install

  • Check internet: ping 8.8.8.8
  • Check DNS: ping google.com
  • If DNS fails, check /etc/resolv.conf points to 192.168.0.10 (Pi-hole)

Phase 4 — Tailscale won't connect

tailscale status
  • Auth key might have expired (they last 1 hour) — generate a new one
  • Re-authenticate: sudo tailscale up --login-server=https://hs.cronova.dev --authkey=<new-key> --reset

Phase 5 — NFS not working

sudo systemctl status nfs-kernel-server
sudo exportfs -v
sudo ufw status
  • From Docker VM, test: showmount -e 192.168.0.12
  • Make sure firewall allows ports 111 and 2049

Phase 7 — Samba won't start

docker logs samba
  • Check .env file exists and has correct values
  • Test authentication: smbclient -L localhost -U augusto
  • Check ports: ss -tlnp | grep -E '139|445'

Phase 7 — Restic REST won't start

docker logs restic-rest
  • Check htpasswd file exists and has content
  • Check .env has BACKUP_DATA=/srv/backup/restic
  • Test: curl -v http://localhost:8000/

General — Can't SSH into NAS

  • Connect keyboard/monitor directly
  • Check network: ip addr (is there an IP on enp0s25?)
  • Check SSH: systemctl status ssh
  • Try from local network first, then Tailscale

Nuclear Option — Full Reinstall

If everything is broken:

  1. Boot from Debian USB
  2. Reinstall on the SSD only
  3. Your data drives (Purple and Red) are not touched — data is safe
  4. After reinstall, re-mount the drives using their UUIDs (Phase 2)