# cyper-servers

NixOS flake configuration for x86_64 server infrastructure.

## Hosts

| Host | IP | DNS servers | Role |
|------|----|-------------|------|
| `cyper-controller` | `192.168.2.2` (static) | `127.0.0.1`, `1.1.1.1` | k3s master, PostgreSQL, DNS/DHCP |
| `cyper-node1` | `192.168.2.30` (static) | `192.168.2.2` | k3s agent |
| `cyper-node2` | `192.168.2.31` (static) | `192.168.2.2` | k3s agent |
| `cyper-cluster` | DHCP | `1.1.1.1`, `8.8.8.8` | k3s agent, Helm |
| `cyper-cloud` | DHCP | `1.1.1.1`, `8.8.8.8` | Cloud tooling (Terraform, AWS CLI) |

## Directory Structure

```
cyper-servers/
├── flake.nix                     # Main flake — declares all hosts
├── nixos/
│   ├── default.nix               # Shared NixOS config (users, SSH, Nix settings)
│   ├── hardware.nix              # x86_64 bootloader & filesystem config
│   ├── settings.nix              # Locale & timezone
│   └── packages.nix              # Common system packages
├── home/                         # Home Manager (shared across all hosts)
│   ├── default.nix
│   ├── packages.nix
│   ├── git.nix
│   ├── shell.nix
│   └── neovim/
└── hosts/
    ├── services/
    │   ├── k3s-master.nix        # Reusable k3s server module
    │   └── k3s-agent.nix         # Reusable k3s agent module
    ├── cyper-controller/
    │   ├── configuration.nix     # Static IP, imports master + postgres + dns
    │   ├── postgres.nix          # PostgreSQL (x86_64 tuned)
    │   └── dns.nix               # dnsmasq DNS + DHCP server
    ├── cyper-node1/
    │   └── configuration.nix     # Static 192.168.2.30, k3s agent
    ├── cyper-node2/
    │   └── configuration.nix     # Static 192.168.2.31, k3s agent
    ├── cyper-cluster/
    │   └── configuration.nix     # DHCP, k3s agent + Helm
    └── cyper-cloud/
        └── configuration.nix     # DHCP, Terraform + AWS CLI
```

## Quick Start

### 1. Install NixOS on each machine

Boot from a standard NixOS x86_64 ISO, partition and format the disk with filesystem labels `boot` (FAT32) and `nixos` (ext4), mount the target under `/mnt`, then:

```bash
nixos-generate-config --root /mnt
```

### 2. Clone and apply

```bash
git clone <repo-url> /etc/nixos
cd /etc/nixos
nixos-rebuild switch --flake .#cyper-controller   # on the controller
nixos-rebuild switch --flake .#cyper-node1        # on node1
# ... etc
```

### 3. k3s cluster setup

After `cyper-controller` is up, retrieve the node token and configure it on each agent:

```bash
# On cyper-controller:
cat /var/lib/rancher/k3s/server/node-token

# On each agent, point the k3s module at a token file:
#   1. Edit hosts/services/k3s-agent.nix and add:
#        tokenFile = "/etc/k3s-token";
#   2. Create /etc/k3s-token with the token value and rebuild.
```

## Customization

### Change username

Edit `flake.nix`:

```nix
primaryUser = "your-username";
```

### Add packages per host

Edit `hosts/<host>/configuration.nix` and add to `environment.systemPackages`.

### Adjust PostgreSQL

Edit `hosts/cyper-controller/postgres.nix` — memory settings are pre-tuned for x86_64 machines with 8 GB+ RAM.

### DNS entries

Edit `hosts/cyper-controller/dns.nix` to add static A records or adjust DHCP ranges.

## Locale & Timezone

Defaults (same as cyper-rpi):

- **Timezone**: `Europe/Berlin`
- **Locale**: `en_US.UTF-8` with German regional settings

Change these in `nixos/settings.nix`.
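For reference, the defaults above correspond to a `nixos/settings.nix` roughly like the sketch below. It uses the standard NixOS options `time.timeZone`, `i18n.defaultLocale`, and `i18n.extraLocaleSettings`; the exact contents of the file in this repo may differ.

```nix
# nixos/settings.nix (sketch only; the real file in this repo may differ)
{ ... }:

{
  # Timezone default
  time.timeZone = "Europe/Berlin";

  # English UI locale with German regional settings (dates, currency, paper size, units)
  i18n.defaultLocale = "en_US.UTF-8";
  i18n.extraLocaleSettings = {
    LC_TIME = "de_DE.UTF-8";
    LC_MONETARY = "de_DE.UTF-8";
    LC_PAPER = "de_DE.UTF-8";
    LC_MEASUREMENT = "de_DE.UTF-8";
  };
}
```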
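Similarly, the agent-side token wiring described in Quick Start step 3 amounts to a few lines in `hosts/services/k3s-agent.nix`. The sketch below uses the NixOS `services.k3s` module options (`role`, `serverAddr`, `tokenFile`); the server address is assumed from the Hosts table, and the actual module in this repo may set additional flags.

```nix
# hosts/services/k3s-agent.nix (sketch; values assumed from the Hosts table)
{ ... }:

{
  services.k3s = {
    enable = true;
    role = "agent";
    serverAddr = "https://192.168.2.2:6443";  # cyper-controller
    tokenFile = "/etc/k3s-token";             # create this file by hand, then rebuild
  };
}
```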