Advanced Server Installation

This section outlines installing the Netmaker server and its components: Netmaker, the Netmaker UI, rqlite, and CoreDNS.

System Compatibility

Netmaker requires elevated privileges on the host machine to perform network operations. Netmaker must be able to modify interfaces and set firewall rules using iptables.

Typically, Netmaker is run inside of containers, using Docker or Kubernetes.

Netmaker can be run without containers, but this is not recommended. You must run the Netmaker binary, CoreDNS binary, database, and a web server directly on the host.

Each of these components has its own individual requirements, and the management complexity increases significantly when running outside of containers.

For first-time installs, we recommend the quick install guide. The following documents are meant for more advanced installation environments and are not recommended for most users. However, these documents can be helpful in customizing or troubleshooting your own installation.

Server Configuration Reference

Netmaker sets its configuration in the following order of precedence (values from later sources override earlier ones):

  1. Defaults: Default values set on the server if no value is provided in configuration.

  2. Config File: Values set in the config/environments/*.yaml file.

  3. Environment Variables: Typically values set in the Docker Compose. This is the most common way of setting server values.

In most situations, if you wish to modify a server setting, set it in the docker-compose.yml file, then run "docker kill netmaker" and "docker-compose up -d".
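For example, raising the server log level might look like the following excerpt in docker-compose.yml (the variable names match the reference below; the values here are only illustrative):

```yaml
# docker-compose.yml (excerpt) - illustrative values only
services:
  netmaker:
    environment:
      VERBOSITY: "3"   # e.g. raise logging while troubleshooting
      DNS_MODE: "on"
```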

Variable Description


SERVER_NAME

Default: ""

Description: MUST SET THIS VALUE. This is the public, resolvable DNS name of the MQ Broker. For instance:


SERVER_HOST

Default: (Server detects public IP address of machine)

Description: The public IP of the machine where the server is running.


SERVER_API_CONN_STRING

Default: ""

Description: MUST SET THIS VALUE. This is the public, resolvable address of the API, including port. For instance:


COREDNS_ADDR

Default: ""

Description: The public IP of the CoreDNS server. Will typically be the same as the server where Netmaker is running (same as SERVER_HOST).


SERVER_HTTP_HOST

Default: Equals SERVER_HOST if set, "" if SERVER_HOST is unset.

Description: Should be the same as SERVER_API_CONN_STRING minus the port.


API_PORT

Default: 8081

Description: Should be the same as the port on SERVER_API_CONN_STRING in most cases. Sets the port for the API on the server.


MASTER_KEY

Default: "secretkey"

Description: The admin master key for accessing the API. Change this in any production installation.


CORS_ALLOWED_ORIGIN

Default: "*"

Description: The "allowed origin" for API requests. Change to restrict where API requests can come from.


REST_BACKEND

Default: "on"

Description: Enables the REST backend (API running on API_PORT at SERVER_HTTP_HOST). Change to "off" to turn off.


AGENT_BACKEND

Default: "on"

Description: Enables the AGENT backend (GRPC running on GRPC_PORT at SERVER_GRPC_HOST). Change to "off" to turn off.


DNS_MODE

Default: "off"

Description: Enables DNS Mode, meaning config files will be generated for CoreDNS.


DATABASE

Default: "sqlite"

Description: Specify db type to connect with. Currently, options include "sqlite", "rqlite", and "postgres".
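As a sketch, a non-default database choice pairs with a connection string; the host, port, and credentials below are placeholders:

```yaml
# illustrative only - substitute your own address and credentials
DATABASE: "rqlite"
SQL_CONN: "http://localhost:4001"   # address of the rqlite HTTP API
# or set DATABASE: "postgres" plus the SQL_* variables below
```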



SQL_CONN

Default: "http://"

Description: Specify the necessary string to connect with your local or remote SQL database.


SQL_HOST

Default: "localhost"

Description: Host where postgres is running.


SQL_PORT

Default: "5432"

Description: Port where postgres is running.


SQL_DB

Default: "netmaker"

Description: DB to use in postgres.


SQL_USER

Default: "postgres"

Description: User for postgres.


SQL_PASS

Default: "nopass"

Description: Password for postgres.


RCE

Default: "off"

Description: The server enables you to set PostUp and PostDown commands for nodes, which is standard for WireGuard with wg-quick, but also amounts to Remote Code Execution, a critical vulnerability if the server is exploited. Because of this, it's turned off by default; if turned on, PostUp and PostDown become editable.


NODE_LIMIT

Default: "999999999"

Description: Limits the total number of nodes allowed on the server.


DISPLAY_KEYS

Default: "on"

Description: If "on", the key values of "access keys" can be viewed at any time. This could be considered a vulnerability, so if set to "off", key values are displayed only once, and users must store them locally.


NODE_ID

Default: <system mac address>

Description: This setting is used in HA configurations of the server to distinguish between different servers. Servers are given IDs like netmaker-1, netmaker-2, and netmaker-3. If the server is not HA, there is no reason to set this field.


TELEMETRY

Default: "on"

Description: If "on", the server sends anonymous telemetry data once daily, which is used to improve the product. Data sent includes counts (integer values) of nodes, node types, users, and networks, as well as the server version.


MQ_HOST

Default: (public IP of server)

Description: The address of the MQ server. If running from docker-compose, this will be "mq". If using host networking, Netmaker will find and detect the IP of the MQ container. Otherwise, the address must be provided; if unset, the public IP of the server is used. The port 1883 is appended automatically. This is the expected reachable port for MQ and cannot be changed at this time.


HOST_NETWORK

Default: "off"

Description: Whether or not host networking is turned on. Only turn on if configured for host networking (see docker-compose.hostnetwork.yml). Will set host-level settings like iptables and forwarding for MQ.


MANAGE_IPTABLES

Default: "on"

Description: Allows Netmaker to manage iptables locally to set forwarding rules, largely for DNS or SSH forwarding (see below). It will also set a default "allow forwarding" policy on the host. It's better to leave this on unless you know what you're doing.


PORT_FORWARD_SERVICES

Default: ""

Description: Comma-separated list of services for which to configure port forwarding on the machine. Options include "mq,dns,ssh". MQ IS DEPRECATED, DO NOT SET IT. "ssh" forwards port 22 over WireGuard, enabling SSH to the server over WireGuard; however, if you set the Netmaker server as an ingress gateway, this will break SSH for external clients, so be careful. "dns" enables private DNS over WireGuard. If you would like to use private DNS with external clients, turn this on.


POD_IP

Default: ""

Description: Specific to a Kubernetes installation. Gets the container IP address of the pod where Netmaker is running.


VERBOSITY

Default: 0

Description: Specify the level of logging you would like on the server. Goes up to 3 for debugging. If you run into issues, increase the verbosity.

Config File Reference

A config file may be placed under config/environments/<env-name>.yaml. To read this file at runtime, provide the NETMAKER_ENV environment variable. For instance, dev.yaml paired with NETMAKER_ENV=dev. Netmaker will load the specified config file. This allows you to store and manage configurations for different environments. Below is a reference config file you may use.
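For instance, a hypothetical dev environment file might contain (keys as in the reference below; values are illustrative):

```yaml
# config/environments/dev.yaml (hypothetical) - loaded when NETMAKER_ENV=dev
masterkey: "dev-secret"   # overrides the default 'secretkey'
apiport: "8081"
dnsmode: "off"
```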

  apihost: "" # defaults to or remote ip (SERVER_HOST) if DisableRemoteIPCheck is not set to true. SERVER_API_HOST if set
  apiport: "" # defaults to 8081 or HTTP_PORT (if set)
  grpchost: "" # defaults to or remote ip (SERVER_HOST) if DisableRemoteIPCheck is not set to true. SERVER_GRPC_HOST if set.
  grpcport: "" # defaults to 50051 or GRPC_PORT (if set)
  masterkey: "" # defaults to 'secretkey' or MASTER_KEY (if set)
  allowedorigin: "" # defaults to '*' or CORS_ALLOWED_ORIGIN (if set)
  restbackend: "" # defaults to "on" or REST_BACKEND (if set)
  agentbackend: "" # defaults to "on" or AGENT_BACKEND (if set)
  clientmode: "" # defaults to "on" or CLIENT_MODE (if set)
  dnsmode: "" # defaults to "on" or DNS_MODE (if set)
  sqlconn: "" # defaults to "http://" or SQL_CONN (if set)
  disableremoteipcheck: "" # defaults to "false" or DISABLE_REMOTE_IP_CHECK (if set)
  version: "" # version of server
  rce: "" # defaults to "off"

Compose File - Annotated

All environment variables and options are enabled in this file. It is the equivalent of running the "full install" from the above section. However, all environment variables are included and set to the default values provided by Netmaker (if an environment variable were left unset, it would not change the installation). Comments are added to each option to show how you might use it to modify your installation.

version: "3.4"

services:
  netmaker: # The Netmaker Server
    container_name: netmaker
    image: gravitl/netmaker:v0.14.0
    volumes: # Volume mounts necessary to store SQL data, and share data with CoreDNS and MQ (certs folder)
      - dnsconfig:/root/config/dnsconfig
      - sqldata:/root/data
      - /root/certs:/etc/netmaker/
    cap_add: # necessary permissions to manage networking
      - NET_ADMIN
      - NET_RAW
      - SYS_MODULE
    sysctls: # forwarding enablement for ipv4/ipv6
      - net.ipv4.ip_forward=1
      - net.ipv4.conf.all.src_valid_mark=1
      - net.ipv6.conf.all.disable_ipv6=0
      - net.ipv6.conf.all.forwarding=1
    restart: always
    environment:
      SERVER_NAME: "broker.NETMAKER_BASE_DOMAIN" # The resolvable domain of the MQ broker (must point to machine)
      SERVER_HOST: "SERVER_PUBLIC_IP" # the ip of the host machine
      SERVER_API_CONN_STRING: "api.NETMAKER_BASE_DOMAIN:443" # The string used by clients to connect to the api. Includes domain and port.
      COREDNS_ADDR: "SERVER_PUBLIC_IP" # the ip of coredns. Typically the same as SERVER_HOST
      DNS_MODE: "on" # allows netmaker to create a "hosts" file for CoreDNS and send dns records to clients
      SERVER_HTTP_HOST: "api.NETMAKER_BASE_DOMAIN" # The resolvable domain of the api
      API_PORT: "8081" # The port the API is running on
      CLIENT_MODE: "on" # Pretty much always leave this on
      MASTER_KEY: "REPLACE_MASTER_KEY" # The admin master key for accessing the API. Change this in any production installation.
      CORS_ALLOWED_ORIGIN: "*" # The "allowed origin" for API requests. Change to restrict where API requests can come from.
      DISPLAY_KEYS: "on" # Show keys permanently in UI (until deleted) as opposed to 1-time display.
      DATABASE: "sqlite" # The database Netmaker should configure itself for. If not SQLite, need to provide connection details (see above for env vars).
      NODE_ID: "netmaker-server-1" # The ID of the server to identify itself in HA setups and scenarios with multiple Netmaker servers.
      MQ_HOST: "mq" # The name of the MQ container
      HOST_NETWORK: "off"  # whether or not host networking is turned on. Only turn on if configured for host networking (see docker-compose.hostnetwork.yml). Will set host-level settings like iptables.
      VERBOSITY: "1" # Logging verbosity of the server. Can be 0-3
      PORT_FORWARD_SERVICES: "dns"  # comma-separated services for which to configure port forwarding on the machine. 'ssh' forwards port 22 over wireguard; 'dns' enables private dns over wireguard.
      MANAGE_IPTABLES: "on"  # allows netmaker to manage iptables locally, to forward properly from the wireguard interface to the services listed in PORT_FORWARD_SERVICES
    ports:
      - "51821-51830:51821-51830/udp" # WireGuard ports - one per network you plan on running
      - "8081:8081" # API port
  netmaker-ui: # The Netmaker UI Component
    container_name: netmaker-ui
    depends_on:
      - netmaker
    image: gravitl/netmaker-ui:v0.14.0
    links:
      - "netmaker:api"
    ports:
      - "8082:80"
    environment:
      BACKEND_URL: "https://api.NETMAKER_BASE_DOMAIN" # URL where UI will send API requests. Change based on SERVER_HOST, SERVER_HTTP_HOST, and API_PORT
    restart: always
  coredns: # The DNS Server. Remove this section if you don't plan to use the CoreDNS server.
    depends_on:
      - netmaker
    image: coredns/coredns
    command: -conf /root/dnsconfig/Corefile
    container_name: coredns
    restart: always
    volumes:
      - dnsconfig:/root/dnsconfig
  caddy:
    image: caddy:latest
    container_name: caddy
    restart: unless-stopped
    network_mode: host # Wants ports 80 and 443!
    volumes:
      - /root/Caddyfile:/etc/caddy/Caddyfile
      # - $PWD/site:/srv # you could also serve a static site in site folder
      - caddy_data:/data
      - caddy_conf:/config
  mq: # the MQTT broker for netmaker
    image: eclipse-mosquitto:2.0.11-openssl
    depends_on:
      - netmaker
    container_name: mq
    restart: unless-stopped
    ports:
      - "" # only exposed locally
      - "8883:8883" # exposed publicly and must be reachable from all clients!
    volumes:
      - /root/mosquitto.conf:/mosquitto/config/mosquitto.conf # need to pull conf file from github before running (under docker/mosquitto.conf)
      - /root/certs/:/mosquitto/certs/ # certs generated by netmaker server, including root certificate and client certs for every peer.
      - mosquitto_data:/mosquitto/data
      - mosquitto_logs:/mosquitto/log
volumes:
  caddy_data: {} # storage for caddy data
  caddy_conf: {} # storage for caddy configuration file
  sqldata: {} # storage for embedded sqlite
  dnsconfig: {} # storage for coredns
  mosquitto_data: {} # storage for mqtt data
  mosquitto_logs: {} # storage for mqtt logs

Available docker-compose files

The default options for docker-compose can be found here:

The following is a brief description of each:

  • docker-compose.contained.yml - This is the default docker-compose, used in the quick start and deployment script in the README on GitHub. It deploys Netmaker with all options included (Caddy and CoreDNS) and has “self-contained” netclients, meaning they do not affect host networking.

  • docker-compose.coredns.yml - This is a simple compose used to spin up a standalone CoreDNS server. Can be useful if, for instance, you are running Netmaker on bare metal but need CoreDNS.

  • docker-compose.hostnetwork.yml - This is similar to the docker-compose.contained.yml but with a key difference: it has advanced permissions and mounts host volumes to control networking on the host level.

  • docker-compose.nocaddy.yml - This is the same as docker-compose.contained.yml but without Caddy, in case you need to use a different proxy like Nginx, Traefik, or HAProxy.

  • docker-compose.nodns.yml - This is the same as docker-compose.contained.yml but without CoreDNS, in which case you will not have the Private DNS feature.

  • docker-compose.reference.yml - This is the same as docker-compose.contained.yml but with all variable options on display and annotated (it’s what we show right above this section). Use this to determine which variables you should add or change in your configuration.

  • docker-compose.yml - This is a renamed docker-compose.contained.yml. It is meant only to act as a placeholder for what we consider the “primary” docker-compose that users should work with.

Traefik Proxy

To install with Traefik, rather than Nginx or the default Caddy, check out this repo:

No DNS - CoreDNS Disabled

CoreDNS is no longer required for most installs. You can simply remove the CoreDNS section from your docker-compose. DNS will still function, because it is added directly to nodes' hosts files (ex: /etc/hosts). If you would like to disable DNS propagation entirely, set DNS_MODE="off" in your docker-compose env for netmaker.
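In compose terms, disabling DNS propagation is a one-line change (excerpt; the rest of the service definition is unchanged):

```yaml
services:
  netmaker:
    environment:
      DNS_MODE: "off"   # stop generating CoreDNS config and pushing DNS entries
```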

Linux Install without Docker

Most systems support Docker, but some do not. In such environments, there are many options for installing Netmaker. Netmaker is available as a binary file, and there is a zip file of the Netmaker UI static HTML on GitHub. Beyond the UI and Server, you may want to optionally install a database (sqlite is embedded, rqlite or postgres are supported) and CoreDNS (also optional).

Once these components are in place and configured for a domain, you can continue with the steps below. The recommended server OS is Ubuntu 20.04.

Database Setup (optional)

You can run the netmaker binary standalone and it will run an embedded sqlite server. Data goes in the data/ directory. Optionally, you can run PostgreSQL or rqlite. Instructions for rqlite are below.

  1. Install rqlite on your server:

  2. Run rqlite: rqlited -node-id 1 ~/node.1

If using rqlite or postgres, you must change the DATABASE environment/config variable and enter connection details.

Server Setup

  1. Run the install script:

sudo curl -sfL | sh -

  2. Check status: sudo journalctl -u netmaker

  3. If any settings are incorrect, such as host or SQL credentials, change them under /etc/netmaker/config/environments/<your env>.yaml and then run sudo systemctl restart netmaker

UI Setup

The following uses Nginx as an http server. You may alternatively use Apache or any other web server that serves static web files.

  1. Download and Unzip UI asset files

  2. Copy Config to Nginx or other reverse proxy

  3. Modify Default Config Path

  4. Change Backend URL

  5. Start Nginx

sudo wget -O /usr/share/nginx/html/
sudo unzip /usr/share/nginx/html/ -d /usr/share/nginx/html
sudo cp /usr/share/nginx/html/nginx.conf /etc/nginx/conf.d/default.conf
sudo sed -i 's/root \/var\/www\/html/root \/usr\/share\/nginx\/html/g' /etc/nginx/sites-available/default
sudo sh -c 'BACKEND_URL=http://<YOUR BACKEND API URL>:PORT /usr/share/nginx/html/ >/usr/share/nginx/html/config.js'
sudo systemctl start nginx

Proxy / Load Balancer

You will need to proxy connections to your UI and Server. By default the ports are 8081 (API) and 8082 (UI). This proxy should handle SSL certificates. We recommend Caddy or Nginx (you can follow the Nginx guide in these docs).
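As an illustrative sketch only (the domains are placeholders), a minimal Caddyfile for this layout might be:

```
# hypothetical domains - replace with your own; Caddy obtains certificates automatically
api.example.com {
    reverse_proxy 127.0.0.1:8081    # Netmaker API
}
dashboard.example.com {
    reverse_proxy 127.0.0.1:8082    # Netmaker UI
}
```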


MQ Setup

You will need an MQTT broker on the host. We recommend Mosquitto. In addition, it must use the mosquitto.conf file found here:

Netmaker env vars must be configured to reach the MQ broker.

Kubernetes Install

Server Install

This template assumes your cluster uses Nginx for ingress with valid wildcard certificates. If using an ingress controller other than Nginx (ex: Traefik), you will need to manually modify the Ingress entries in this template to match your environment.

This template also requires RWX storage. Please change references to storageClassName in this template to your cluster’s Storage Class.


Replace the NETMAKER_BASE_DOMAIN references with the base domain you would like for your Netmaker services (ui, api, grpc). Typically this will be something like

sed -i 's/NETMAKER_BASE_DOMAIN/<your base domain>/g' netmaker-template.yaml

Now, assuming Ingress and Storage match correctly with your cluster configuration, you can install Netmaker.

kubectl create ns nm
kubectl config set-context --current --namespace=nm
kubectl apply -f netmaker-template.yaml -n nm

In about 3 minutes, everything should be up and running:

kubectl get ingress nm-ui-ingress-nginx

Netclient Daemonset

The following instructions assume you have Netmaker running and a network you would like to add your cluster into. The Netmaker server does not need to be running inside of a cluster for this.

sed -i 's/ACCESS_TOKEN_VALUE/<your access token value>/g' netclient-template.yaml
kubectl apply -f netclient-template.yaml

For a more detailed guide on integrating Netmaker with MicroK8s, check out this guide.

Nginx Reverse Proxy Setup with https

The Swag Proxy makes it easy to generate a valid SSL certificate for the config below. Here is the documentation for the installation.

The following file configures Netmaker as a subdomain. This config is an adaptation from the swag proxy project.


server {
    # Redirect HTTP to HTTPS.
    listen 80;
    server_name *; # Please change to your domain
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name; # Please change to your domain
    include /config/nginx/ssl.conf;
    location / {
        proxy_pass http://<NETMAKER_IP>:8082;
    }
}

server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name; # Please change to your domain
    include /config/nginx/ssl.conf;

    location / {
        proxy_pass http://<NETMAKER_IP>:8081;
        proxy_set_header            Host; # Please change to your domain
        proxy_pass_request_headers  on;
    }
}

Highly Available Installation (Kubernetes)

Netmaker comes with a Helm chart to deploy with High Availability on Kubernetes:

helm repo add netmaker
helm repo update


To run HA Netmaker on Kubernetes, your cluster must have the following:

  • RWO and RWX Storage Classes (RWX is only required if running Netmaker with DNS Management enabled)

  • An Ingress Controller and valid TLS certificates. This chart can currently generate ingress for Nginx or Traefik Ingress with LetsEncrypt + Cert Manager. If LetsEncrypt and CertManager are not deployed, you must manually configure certificates for your ingress.

Furthermore, the chart will by default install and use a postgresql cluster as its datastore.

A minimal HA install of Netmaker can be run with the following command: helm install netmaker --generate-name --set

This install has some notable exceptions:

  • Ingress must be manually configured post-install (need to create valid Ingress with TLS)

  • Server will use "userspace" WireGuard, which is slower than kernel WG

  • DNS will be disabled

Example Installations:

An annotated install command:

helm install netmaker/netmaker --generate-name \ # generate a random id for the deploy
--set \ # the base wildcard domain to use for the netmaker api/dashboard/grpc ingress
--set replicas=3 \ # number of server replicas to deploy (3 by default)
--set ingress.enabled=true \ # deploy ingress automatically (requires nginx or traefik and cert-manager + letsencrypt)
--set ingress.className=nginx \ # ingress class to use
--set ingress.tls.issuerName=letsencrypt-prod \ # LetsEncrypt certificate issuer to use
--set dns.enabled=true \ # deploy and enable private DNS management with CoreDNS
--set dns.clusterIP= --set dns.RWX.storageClassName=nfs \ # required fields for DNS
--set postgresql-ha.postgresql.replicaCount=2 \ # number of DB replicas to deploy (default 2)

The below command will install netmaker with two server replicas, a coredns server, and ingress with routes of,, and CoreDNS will be reachable at, and will use NFS to share a volume with Netmaker (to configure dns entries).

helm install netmaker/netmaker --generate-name --set \
--set replicas=2 --set ingress.enabled=true --set dns.enabled=true \
--set dns.clusterIP= --set dns.RWX.storageClassName=nfs \
--set ingress.className=nginx

The below command will install netmaker with three server replicas (the default), no coredns, and ingress with routes of,, and There will be one UI replica instead of two, and one database instance instead of two. Traefik will look for a ClusterIssuer named “le-prod-2” to get valid certificates for the ingress.

helm3 install netmaker/netmaker --generate-name \
--set --set postgresql-ha.postgresql.replicaCount=1 \
--set ui.replicas=1 --set ingress.enabled=true \
--set ingress.tls.issuerName=le-prod-2 --set ingress.className=traefik

Below, we discuss the considerations for Ingress, Kernel WireGuard, and DNS.


Ingress

To run HA Netmaker, you must have ingress installed and enabled on your cluster with valid TLS certificates (not self-signed). If you are running Nginx as your Ingress Controller and LetsEncrypt for TLS certificate management, you can run the helm install with the following settings:

  • --set ingress.enabled=true

  • --set ingress.tls.issuerName=<your LE issuer name>

If you are not using Nginx or Traefik and LetsEncrypt, we recommend leaving ingress.enabled=false (default), and then manually creating the ingress objects post-install. You will need three ingress objects with TLS:

  • dashboard.<baseDomain>

  • api.<baseDomain>

  • grpc.<baseDomain>

If deploying manually, the gRPC ingress object requires special considerations. Look up the proper way to route grpc with your ingress controller. For instance, on Traefik, an IngressRouteTCP object is required.

There are some example ingress objects in the kube/example folder.

Kernel WireGuard

If you have control of the Kubernetes worker node servers, we recommend first installing WireGuard on the hosts, and then installing HA Netmaker in Kernel mode. By default, Netmaker will install with userspace WireGuard (wireguard-go) for maximum compatibility, and to avoid needing permissions at the host level. If you have installed WireGuard on your hosts, you should install Netmaker’s helm chart with the following option:

  • --set wireguard.kernel=true


DNS

By default, the helm chart will deploy without DNS enabled. To enable DNS, specify with:

  • --set dns.enabled=true

This will require specifying a RWX storage class, e.g.:

  • --set dns.RWX.storageClassName=nfs

This will also require specifying a service address for DNS. Choose a valid ipv4 address from the service IP CIDR for your cluster, e.g.:

  • --set dns.clusterIP=

This address will only be reachable from hosts that have access to the cluster service CIDR. It is only designed for use cases related to k8s. If you want a more general-use Netmaker server on Kubernetes for use cases outside of k8s, you will need to do one of the following:

  • Bind the CoreDNS service to port 53 on one of your worker nodes and set the COREDNS_ADDRESS equal to the public IP of the worker node

  • Create a private network with Netmaker and set the COREDNS_ADDRESS equal to the private address of the host running CoreDNS. For this, CoreDNS will need a node selector and will ideally run on the same host as one of the Netmaker server instances.


To view all options for the chart, please visit the README in the code repo here.

Highly Available Installation (VMs/Bare Metal)

For an enterprise Netmaker installation, you will need a server that is highly available, to ensure redundant WireGuard routing when any server goes down. To do this, you will need:

  1. A load balancer

  2. 3+ Netmaker server instances

  3. rqlite or PostgreSQL as the backing database

These documents outline general HA installation guidelines. Netmaker is highly customizable to meet a wide range of enterprise environments. If you would like support with an enterprise-grade Netmaker installation, you can schedule a consultation here.

The main consideration for this document is how to configure rqlite. Most other settings and procedures match the standardized way of making applications HA: Load balancing to multiple instances, and sharing a DB. In our case, the DB (rqlite) is distributed, making HA data more easily achievable.

If using PostgreSQL, follow their documentation for installing in HA mode and skip step #2.

1. Load Balancer Setup

Your load balancer of choice will send requests to the Netmaker servers. Setup is similar to the various guides we have created for Nginx, Caddy, and Traefik. SSL certificates must also be configured and handled by the LB.

2. RQLite Setup

RQLite is the included distributed datastore for an HA Netmaker installation. If you have a different corporate database you wish to integrate, Netmaker is easily extended to other DBs. If this is a requirement, please contact us.

Assuming you use rqlite, you must run it on each Netmaker server VM, or alongside that VM as a container. Set up a config.json for database credentials (the password supports bcrypt hashing), mount it in the working directory of rqlite, and specify it with -auth config.json:

[
    {
        "username": "netmaker",
        "password": "<YOUR_DB_PASSWORD>",
        "perms": ["all"]
    }
]

Once your servers are set up with rqlite, the first instance must be started normally, and then additional nodes must be added with the “join” command. For instance, here is the first server node:

sudo docker run -d -p 4001:4001 -p 4002:4002 rqlite/rqlite -node-id 1 -http-addr -raft-addr -http-adv-addr -raft-adv-addr -auth config.json

And here is a joining node:

sudo docker run -d -p 4001:4001 -p 4002:4002 rqlite/rqlite -node-id 2 -http-addr -raft-addr -http-adv-addr  -raft-adv-addr -join https://netmaker:<YOUR_DB_PASSWORD>@

Once rqlite instances have been configured, the Netmaker servers can be deployed.

3. Netmaker Setup

Netmaker will be started on each node with default settings, except with DATABASE=rqlite (or DATABASE=postgres) and SQL_CONN set appropriately to reach the local rqlite instance. Rqlite will maintain consistency between the Netmaker backends.
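As a sketch, each server's environment might then include (the address and credentials are placeholders):

```yaml
# illustrative per-server settings for an HA rqlite deployment
DATABASE: "rqlite"
SQL_CONN: "http://netmaker:<YOUR_DB_PASSWORD>@localhost:4001"
```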

If deploying HA with PostgreSQL, you will connect with the following settings:

SQL_HOST = <sql host>
SQL_PORT = <port>
SQL_DB   = <designated sql DB>
SQL_USER = <your user>
SQL_PASS = <your password>
DATABASE = postgres

4. Other Considerations

This is enough to get a functioning HA installation of Netmaker. However, you may also want to make the Netmaker UI or the CoreDNS server HA as well. The Netmaker UI can simply be added to the same servers and load balanced appropriately. For some load balancers, you may be able to do this with CoreDNS as well.