I’ve been meaning to upgrade my home network based on some of the lessons I’ve learned deploying NixOS on a VPS. I’ve got a few of these little ProDesk machines that I got on eBay a while back. They’re great if you want some lightweight processes running 24/7. But they were still running Ubuntu! Oof! Time for an overhaul.

1: Deploy Over Network with deploy-rs

Based on my experience with a DigitalOcean VPS, I knew I wanted to use the deploy-rs model on a local node. Here’s the basic idea:

  • The node configuration lives in version control.
  • You edit it and build it on your workstation.
  • You push the complete build over the network onto the node.
  • Errors are caught with atomic rollbacks.

That means I don’t need physical access to the machine. It can live in the basement with the router. I don’t even need to SSH in. Just edit the config and deploy. The process is very similar to the instructions in my other post:

  1. Install NixOS on the node
  2. Modify the configuration:
    • Add your workstation SSH key to users.users.root.openssh.authorizedKeys.keys
    • Enable Flakes
  3. Copy the contents of /etc/nixos/ to your workstation and wrap them in a flake that specifies the node’s hostname.
  4. Add the deploy-rs configuration and then deploy.
  5. Your node is unchanged, but now the configuration lives on your workstation and in version control.
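As a sketch of steps 2–4, the wrapper flake might look something like this. The hostname mynode, the system, and the file layout are assumptions; adjust them for your node:

```nix
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    deploy-rs.url = "github:serokell/deploy-rs";
  };

  outputs = { self, nixpkgs, deploy-rs }: {
    # The configuration copied over from the node's /etc/nixos/
    nixosConfigurations.mynode = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        ./configuration.nix
        # Enable Flakes on the node itself (step 2)
        { nix.settings.experimental-features = [ "nix-command" "flakes" ]; }
      ];
    };

    # deploy-rs node definition (step 4); `deploy .#mynode` pushes the build
    deploy.nodes.mynode = {
      hostname = "mynode"; # or the node's LAN IP
      sshUser = "root";
      profiles.system = {
        user = "root";
        path = deploy-rs.lib.x86_64-linux.activate.nixos
          self.nixosConfigurations.mynode;
      };
    };
  };
}
```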

Just make sure your SSH keys are loaded with ssh-add. It’s basic, but it tripped me up: I initially thought it was a user-configuration issue.

2: Define a DNS Overlay Server with dnsmasq

If you’ve served custom domains over LAN before, you may have gone into your router settings and added custom DNS entries. This is something I wanted to avoid: I wanted to be able to add new domains purely in my node configuration.

So the first step was to go to a different place in the router settings. This will be different for everyone. For me, it was under Broadband Connection settings -> “Obtain IPv4 DNS Addresses Automatically”. I adjusted this to point to the node’s IP.

Next, configuring the node to act as a DNS server was easy as pie with the dnsmasq module:

services.dnsmasq = {
  enable = true;
  settings = {
    server = [
      "8.8.8.8"
      "8.8.4.4"
    ];
    domain = "home";
    local = "/home/";

    address = [
      # "mynode" stands in for the node's own LAN IP; dnsmasq expects an
      # IP address here, not a hostname
      "/test.home/mynode"
    ];
  };
};

I tried using a cute custom domain extension but couldn’t get that to work. No big deal, and I don’t care enough to pull that thread: .home and .lan both worked.

Now, since that domain resolves to the node itself, we need to add a virtual host. The endgame is to reverse-proxy these domains to various services, but we’ll start with a static response:

services.nginx = {
  enable = true;

  virtualHosts."test.home" = {
    locations."/" = {
      return = "200 'hello'";
      extraConfig = ''
        add_header Content-Type text/plain;
      '';
    };
  };
};

If everything worked, you should be able to hit http://test.home from any device on your network, and get a nice little hello!

3: Configure Frigate (Network Video Recorder)

If you’ve followed along to this point, your system is a blank slate for adding new services under custom domains. I’ll walk through this Frigate example, since that’s the main thing I was migrating! Configuring the RTSP feeds from my IP cameras was straightforward, but with naive settings the CPU usage was very high. It took some fiddling to enable the Intel hardware acceleration and reduce some of the overhead. I’ll annotate this fragment:

services.frigate = {
  enable = true;
  hostname = "localhost";

  settings = {
    mqtt.enabled = false;

    # This enables Intel hardware acceleration
    ffmpeg = {
      hwaccel_args = "preset-vaapi";
    };

    cameras."front_camera" = {
      ffmpeg.inputs = [{
        path = "rtsp://admin:pwd@192.168.1.54:554/h264Preview_01_main";
        roles = [ "record" ];
      }
      # This second input needs to be specified even though we're not
      # doing detection; otherwise the high-def feed gets fed
      # through to the dashboard!
      {
        path = "rtsp://admin:pwd@192.168.1.54:554/h264Preview_01_sub";
        roles = [ "detect" ];
      }
      ];
      detect.enabled = false;
    };

    record = {
      enabled = true;
      retain = {
        days = 1;
        mode = "all";
      };
    };
  };
};

This keeps a trailing 24 hours of recording.

Intel Hardware Acceleration

This took a lot of trial and error, but it got CPU usage down dramatically (from roughly 75% to 2%)! I don’t fully understand every knob here, so I’ll present it without commentary.

systemd.services.frigate = {
  environment.LIBVA_DRIVER_NAME = "iHD";
  serviceConfig = {
    SupplementaryGroups = [ "render" "video" ];
    DeviceAllow = [ "/dev/dri/renderD128" ];
    AmbientCapabilities = "CAP_PERFMON";
  };
};
  

And in the hardware-configuration.nix:

hardware.opengl = {
  enable = true;
  extraPackages = with pkgs; [
    intel-media-driver
    intel-vaapi-driver
  ];
};

Reverse Proxy to Frigate Default Port

Last but not least, we need to add the address in dnsmasq.settings.address, and the reverse proxy config:

virtualHosts."frigate.home" = {
  locations."/" = {
    proxyPass = "http://127.0.0.1:5000";
    proxyWebsockets = true; 
  };
};

The proxyWebsockets part is important, as Frigate uses websockets for live views.
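For completeness, the matching dnsmasq entry from earlier grows to cover the new domain. The IP below is a placeholder for the node’s LAN address:

```nix
services.dnsmasq.settings.address = [
  "/test.home/192.168.1.10"    # placeholder: use your node's LAN IP
  "/frigate.home/192.168.1.10"
];
```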

That’s it! Works like a charm.

Recap

We’re in a pretty good spot. We’ve got:

  • Access to the entire corpus of nixpkgs with modules for many incredible open-source tools.
  • Version-controlled configuration.
  • Network deployments with atomic rollback.

That means we can host new services (Jellyfin, Immich, Nextcloud, Home Assistant, etc.) with a few steps:

  1. Enable it: services.jellyfin.enable = true;
  2. Reverse proxy with nginx.
  3. Add a DNS entry with dnsmasq.
  4. Deploy.
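Taking the Jellyfin example, those steps might sketch out as below. The port is Jellyfin’s default HTTP port (8096); the IP is, again, a placeholder for the node’s LAN address:

```nix
{
  # Step 1: enable the service
  services.jellyfin.enable = true;

  # Step 2: reverse proxy the default port under a custom domain
  services.nginx.virtualHosts."jellyfin.home" = {
    locations."/" = {
      proxyPass = "http://127.0.0.1:8096";
      proxyWebsockets = true;
    };
  };

  # Step 3: resolve the domain to the node (merges with earlier entries)
  services.dnsmasq.settings.address = [ "/jellyfin.home/192.168.1.10" ];
}
```

Then step 4 is just another deploy from the workstation.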

You have basically zero risk of bricking the machine with a failed experiment, or interfering at all with your existing services. If something goes awry you simply roll back the node and revert the config changes.

Next Steps?

I reserve the right to work (or not) in any of these directions:

  • Being an affirmed Elixir enjoyer, I’m tempted to make a LiveView dashboard. That would make it really easy to wrap Frigate and augment with whatever custom live logic I want.

  • I’ve been wanting an excuse to try out Slack’s Nebula. The idea there would be to make a network overlay connecting my VPS to the local node. Then I could (very carefully) access my services from public internet.

  • Once we’ve got that, why not introduce some BEAM clustering so our LiveView backend is distributed? I have as many reasons to do that as I have not to! (It’s zero.)