Steps to Set Up a Public Node on Zenon Network

Copy-and-paste instructions to set up a public node on Zenon Network with Ubuntu.

Spin up an Ubuntu 20.04 server on the cloud provider of your choice.

Update and upgrade your VPS

sudo apt-get update
sudo apt-get upgrade

Open the following ports in your firewall. Please note a public static IP is required.

TCP: 35995, 35997, 35998
UDP: 35995
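
If you manage the firewall with ufw on Ubuntu, the rules might look like the sketch below (an assumption; your cloud provider may also have its own firewall layer). Each command is printed for review rather than executed; drop the leading `echo` to apply them:

```shell
# Sketch: open the node's ports with ufw (assumes ufw is your firewall).
# TCP 35995 (P2P), 35997 (HTTP RPC), 35998 (WebSocket RPC); UDP 35995 (peer discovery).
# Each command is printed for review; remove the leading 'echo' to actually apply it.
for rule in 35995/tcp 35997/tcp 35998/tcp 35995/udp; do
  echo sudo ufw allow "$rule"
done
```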

Install zip/unzip and a dynamic swapfile

sudo apt-get install zip unzip
sudo apt-get install dphys-swapfile

Reboot the VPS

sudo reboot

Download the ZNN Controller Software

wget https://github.com/zenon-network/znn_controller_dart/releases/download/v0.0.3-alphanet/znn-controller-linux-amd64.zip

Unzip the file

unzip znn-controller-linux-amd64.zip

Run the ZNN Controller installation software

sudo ./znn-controller

You should see the following output

Running ZNN Controller with superuser privileges
ZNN Node Controller v0.0.3 [NOTE THIS IS OLD]
Gathering system information ...
System info:
Shell: bash
User: REMOVE
Host: REMOVE
Operating system: linux
OS version: Linux 5.11.0-1022-aws #23~20.04.1-Ubuntu SMP Mon Nov 15 14:03:19 UTC 2021
Available CPU cores: 2
Dart runtime: 2.16.2 (stable) (Tue Mar 22 13:15:13 2022 +0100) on "linux_x64"
IP address: [IP ADDRESS]
  1) Deploy
  2) Status
  3) Start service
  4) Stop service
  5) Resync
  6) Help
  7) Quit
Select an option from the ones listed above

Select option 1. This downloads the latest node software and installs everything required to run the node, and it automatically sets up a service that starts on reboot. After the node finishes downloading the blockchain, it will be available to use in the Syrius wallet.
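
Once the service is running, you can check sync progress over the node's local HTTP RPC port. A sketch using curl, assuming the `stats.syncInfo` JSON-RPC method (present in recent go-zenon builds; response field names may vary by version):

```shell
# Sketch: query the node's sync status over local JSON-RPC (HTTP port 35997).
# 'stats.syncInfo' is assumed to be available; adjust if your build differs.
payload='{"jsonrpc":"2.0","id":1,"method":"stats.syncInfo","params":[]}'
echo "$payload"
# Uncomment once the node is listening locally:
# curl -s -X POST -H 'Content-Type: application/json' -d "$payload" http://127.0.0.1:35997
```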

Would be awesome to see this get ported to the GitBook :slight_smile:

yep - I’m testing these steps again. LMK when the gitbook framework is set up and I will add this.

Also, what kind of hardware specs do you recommend?

yep - can add that too. 8G RAM sucks ATM b/c of the memory leak. Need 16G. But I can specify all this.

Thanks for the guide! Do you have instructions or tips for setting up wss?

While the memory leak does cause the process’s memory consumption to creep up over time, I’ve had a node running for the last two months on a VPS with 8GB RAM. I had to set up a cron job to restart the service every two days, but I haven’t experienced any crashes or issues otherwise.
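
For reference, the cron workaround described above might look like this (a sketch; the service name `go-zenon.service` is taken from later in this thread — confirm what your install actually created):

```shell
# Sketch: restart the node service every two days at 04:00 to work around
# the memory leak. The service name is an assumption; verify it first.
CRON_LINE='0 4 */2 * * /usr/bin/systemctl restart go-zenon.service'
echo "$CRON_LINE"
# To append it to root's crontab while preserving existing entries:
# ( sudo crontab -l 2>/dev/null; echo "$CRON_LINE" ) | sudo crontab -
```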

yes. I’ll post that later today. I’m doing housework. You basically install NGINX and letsencrypt, then use the load balancer to offload SSL and forward requests to your node.

I’ll post before the end of the day.

I’m still testing out a bunch of stuff. I’m not 100% done with everything. So I’ll post the NGINX configs of the two load balancers I’m testing. One is for HTTPS on 35997. That is below.

Basically, set up NGINX, set up letsencrypt with your domain, then update the config file per below. In `proxy_pass http://LOCAL_IP_Address:35997;`, the target is a different server (the public Zenon node) in the same subnet as NGINX. You can also run both on the same machine, but not on the same port: the Zenon node must stay on 35997, so in that case change NGINX to listen on a different port and forward to 35997 with the proxy_pass above.

server {
    root /var/www/secure.deeznnodez.com/html;
    index index.html;
    server_name secure.deeznnodez.com;

    location / {
        try_files $uri $uri/ =404;
    }

    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/secure.deeznnodez.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/secure.deeznnodez.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = secure.deeznnodez.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;

    server_name secure.deeznnodez.com;
    return 404; # managed by Certbot
}

server {
    listen 35997 ssl;
    server_name secure.deeznnodez.com;
    ssl_certificate /etc/letsencrypt/live/secure.deeznnodez.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/secure.deeznnodez.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    location / {
        proxy_pass http://LOCAL_IP_Address:35997;
    }
}
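
For completeness, getting NGINX and letsencrypt to the point where a config like the one above applies is roughly the following on Ubuntu (a sketch; the commands are printed for review, so drop the leading `echo` to run them):

```shell
# Sketch: install NGINX and certbot, issue a cert for the domain, then test and reload.
# Printed for review; remove the leading 'echo' on each line to apply.
echo sudo apt-get install -y nginx certbot python3-certbot-nginx
echo sudo certbot --nginx -d secure.deeznnodez.com
echo sudo nginx -t
echo sudo systemctl reload nginx
```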

Here is the WSS on 35998, running on a different machine / load balancer. Again, the LB and the node are on two different machines. You can combine them, but NGINX needs to listen on something other than 35998 if you do.

This one was tricky; here are some of the resources I used to figure it out. Same concept as HTTP: install NGINX, set up letsencrypt, then insert the server block below, adjusting for your domain and paths.

You will notice this server block is set up for load balancing. This is unnecessary if you are not running two back-end nodes, but it will work with one node exactly as shown below. The full server block starts below; everything here is needed, so don’t remove anything. Make sure to update the domain and paths for your setup.

I’m happy to help if you have any questions.

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

upstream wsbackend {
    server LOCAL-IP-ADDRESS:35998;
    keepalive 1000;
}

server {
    listen 35998 ssl;
    server_name ssl.deeznnodez.com;
    ssl_certificate /etc/letsencrypt/live/ssl.deeznnodez.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/ssl.deeznnodez.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    ssl_verify_client off;

    location / {
        proxy_http_version 1.1;
        proxy_pass http://wsbackend;
        proxy_redirect off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_read_timeout 3600s;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}

server {
    root /var/www/html;
    index index.html index.htm index.nginx-debian.html;
    server_name ssl.deeznnodez.com; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/ssl.deeznnodez.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/ssl.deeznnodez.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    location / {
    # First attempt to serve request as file, then
    # as directory, then fall back to displaying a 404.
    # root /var/www/html;
    try_files $uri $uri/ =404;
    }
}

server {
    if ($host = ssl.deeznnodez.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    server_name ssl.deeznnodez.com;
    return 404; # managed by Certbot

}
I plan to combine all these into one load balancer that handles https, http, wss, and ws. I’m still working through issues with https load balancing. I’ve engaged F5, the developers of NGINX, to help me.

Eventually I’m going to build all of this into a docker image with network routing so we can listen on 35997 / 35998 with SSL and then forward to the node on the same port over the docker network. But I’m still in the first stage, which is to get NGINX working 100% on a standalone machine with both 35997 and 35998.

This has been a good learning experience for me!

Thank you for sharing all this, especially the nginx config! :grin:
Are you setting up a load balancer for the community or just for your own nodes?

Absolutely for the community. Right now you can access:

https://secure.deeZNNodez.com:35997 for API access.

I’m going to set up and test WSS on that URL today. I’ll let you know when it’s done. In the near future, I hope others will run secure nodes and we can create a sort of distributed HA node service. Basically, we can add other community-run nodes to the upstream load balancer. LMK if you want to participate in that. The LB will be centralized, but the back-end data could come from several community members.
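
That community pool might be sketched in NGINX like this (hypothetical; the host names are just endpoints mentioned in this thread, and since each backend terminates its own TLS, the proxy would reach them over `https://`):

```nginx
# Sketch: a community upstream pool for wss (hosts are examples from this thread).
upstream community-wss {
    least_conn;                          # prefer the backend with the fewest active connections
    server node.zenon.fun:35998;         # community node
    server secure.deeznnodez.com:35998;  # this thread's node
}

# Inside the existing 'listen 35998 ssl' server block:
#     proxy_pass https://community-wss;  # backends terminate their own TLS
```

Note that open-source NGINX only does passive health checks out of the box; active health- and latency-based routing is an NGINX Plus feature, which is what the F5 discussions in this thread are about.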

@sol_sanctum I consolidated WSS and HTTPS into one file, cleaned it up, and added some comments. Hope this helps you.

map $http_upgrade $connection_upgrade {
    default upgrade;
    '' close;
}

# Setup load balancer Local IP addresses for HTTPS connections
upstream https-backend {
    server [LOCAL IP]:35997; # local node
    server [LOCAL IP]:35997; # local node
    #sticky route $route_cookie $route_uri;
}

# Setup load balancer Local IP addresses for WSS connections
upstream wss-backend {
    server [LOCAL IP]:35998; #local node
    server [LOCAL IP]:35998; #local node
    keepalive 1000;
}

# Server block to redirect port 80 requests to port 443
server {
    listen 80;
    server_name secure.deeznnodez.com;

    if ($host = secure.deeznnodez.com) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    return 404; # managed by Certbot
}

# Server block to renew SSL certs by letsencrypt  DO NOT REMOVE
server {
    listen 443 ssl; # managed by Certbot
    server_name secure.deeznnodez.com;
    root /var/www/secure.deeznnodez.com/html;
    index index.html;
    ssl_certificate /etc/letsencrypt/live/secure.deeznnodez.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/secure.deeznnodez.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    location / {
        try_files $uri $uri/ =404;
    }
}

# Server block for https connections to port 35997 for API calls
server {
    listen 35997 ssl;
    server_name secure.deeznnodez.com;
    ssl_certificate /etc/letsencrypt/live/secure.deeznnodez.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/secure.deeznnodez.com/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot

    location / {
        #proxy_pass http://https-backend; # upstream not used because of an NGINX config issue
        proxy_pass http://[LOCAL IP]:35997; # using one local node rather than the load balancer; see note above
    }
}

# Server block for wss connections to port 35998 for secure websocket calls
server {
    listen 35998 ssl;
    server_name secure.deeznnodez.com;
    ssl_certificate /etc/letsencrypt/live/secure.deeznnodez.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/secure.deeznnodez.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    ssl_verify_client off;

    location / {
      proxy_http_version 1.1;
      proxy_pass http://wss-backend;
      proxy_redirect off;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
      proxy_read_timeout 3600s;
      proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection $connection_upgrade;
    }
}
Thank you so much! I had a hard time setting up wss a few months ago but your config made it much simpler for me this time around. I did have a bit of trouble enabling the service on 35997 but I found a workaround for my setup.

I’m glad you were successful in setting up the load balancer; I’d love to share my node with the community now that I’ve got ssl working.

Explorer: https://node.zenon.fun:35997
Node: wss://node.zenon.fun:35998


Here, I’ll outline everything I needed to do to set up WSS on a single VPS.
I had to re-compile znnd for a couple of reasons:

  • The Zenon Explorer will append :35997 to a node address, forcing me to use this port (or wait until that code logic is updated).

From: explorer.zenon.network/main.js

addNode(t) {
    let e = t;
    t.split(":")[2] || (e = t + ":35997"), this.addNodeToLocalStorage(e), this.changeEndpoint(e), this.toastrService.info(e)
}

  • I’m running nginx on the same server hosting the node. Since there’s contention for ports 35997/35998, I opted to re-compile znnd and have nginx as the “frontend”.
  1. Add an A Record in my DNS settings: “node” → [VPS IP]
  2. git clone https://github.com/zenon-network/go-zenon
    Edit ./go-zenon/p2p/config.go
    DefaultHTTPPort = 35887
    DefaultWSPort = 35888
    Compile with make znnd
  3. which znnd to determine location of znnd
    Mine is /usr/local/bin/znnd
    Backup znnd: mv /usr/local/bin/znnd /usr/local/bin/znnd-old
    Copy new znnd: cp ./go-zenon/build/znnd /usr/local/bin/
  4. sudo systemctl restart go-zenon.service
  5. Following steps 1-3 from here:
    Install nginx and dependencies
    sudo vim /etc/nginx/conf.d/node.zenon.fun.conf
    Paste in the config from Step 2-2
    Update the server_name line to server_name node.zenon.fun
    sudo nginx -t && sudo nginx -s reload
    sudo certbot --nginx -d node.zenon.fun
  6. Update the same .conf file with this config
    Update the domain, socket, redirect references
  7. sudo nginx -t && sudo nginx -s reload

Once again, thank you @0x3639 for your help! Really appreciate it! :grin:

Wow, this is awesome! Great job. Good idea on recompiling znnd to change the port.

How would you feel about me adding your wss endpoint to my load balancer? That way we can test load-balancing queries across your node and mine. NGINX should be able to connect to you over SSL. The hope is we can get a few others to join and offer a public, “decentralized”, load-balanced endpoint for https and wss. I should be able to load-balance based on response times and/or health.

Yes please!!! :pray:
I’ll be hosting this node for the foreseeable future and it’s been stable for the last two months.

OK - cool. I’ll work on that this week and keep you posted!

I have a call with F5 today to discuss our NGINX use case - load balancing requests across multiple nodes in different data centers. Goal is to monitor endpoint health and latency. Route accordingly. I’ll let you know what they say.
