# Creating a Native Rocket Pool Node without Docker
In this section, we will walk through the process of installing the Rocket Pool Smartnode stack natively onto your system, without the use of Docker containers.
The general plan is as follows:
- Create system services for the Rocket Pool components (the node process, and optionally the watchtower process if you are an Oracle Node)
- Create a system service for the Execution client
- Create a system service for the Beacon node
- Create a system service for the Validator client
- Configure Rocket Pool to communicate with those services
This is a fairly involved setup so it will take some time to complete.
The diversity of Operating Systems and distros available make it impractical to make guides available for all of them. The instructions in this guide are tailored to a Debian-based system (including Ubuntu). For other distros or operating systems, you may follow the high-level steps described in the guide but will have to substitute certain commands for the ones that your system uses as appropriate.
WARNING
This guide is intended for users who are experienced with Linux system administration and usage. This includes using the terminal, creating system accounts, managing permissions, and installing services. If you are not familiar with these activities, we do not recommend that you use the native mode.
# Creating Service Accounts
The first step is to create new system accounts for the services and disable logins and shell access for them. The reason for having separate user accounts is practical: if your Execution or Consensus client has a major vulnerability like an Arbitrary Code Execution exploit, running it under an account with limited permissions limits the damage an attacker can actually do.
We're going to create one account for your Execution client, one for your Beacon Node, and one for both Rocket Pool and the validator client. The sharing is necessary because Rocket Pool will create the validator's key files once you create a new minipool, and it will set the permissions so that only its own user has access to them. If you're using Nimbus for your Consensus client, then it will share an account with Rocket Pool instead since it doesn't have a separate validator client.
Start by creating an account for your Execution client, which we'll call `eth1`:
sudo useradd -r -s /sbin/nologin eth1
Do the same for your Beacon Node, which we'll call `eth2`:
sudo useradd -r -s /sbin/nologin eth2
Finally, make one for the validator and Rocket Pool, which we'll call `rp`:
sudo useradd -r -s /sbin/nologin rp
NOTE
If you're using Nimbus, ignore the `rp` account. Any time you see it used in this guide, just substitute it with `eth2` instead.
Now, add yourself to the `rp` group. You'll need to do this in order to use the Rocket Pool CLI, because it and the Rocket Pool daemon both need to access the Execution layer wallet file.
sudo usermod -aG rp $USER
After this, log out and back in for the changes to take effect.
# Installing Rocket Pool
# Setting up the Binaries
Start by making a folder for Rocket Pool and a data subfolder.
You can put this wherever you want; for this guide, we'll put it in `/srv`:
sudo mkdir -p /srv/rocketpool/data
sudo chown -R $USER:$USER /srv/rocketpool
Now, download the CLI and daemon binaries for your system's platform and architecture (or build them from source if you prefer).
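As a sketch, on an x64 Linux system the binaries can be fetched from the `smartnode-install` releases; the exact release asset names used here are an assumption, so check them against the latest release before running this:

```
wget https://github.com/rocket-pool/smartnode-install/releases/latest/download/rocketpool-cli-linux-amd64 -O /tmp/rocketpool
wget https://github.com/rocket-pool/smartnode-install/releases/latest/download/rocketpool-daemon-linux-amd64 -O /tmp/rocketpoold
sudo mv /tmp/rocketpool /usr/local/bin/rocketpool
sudo mv /tmp/rocketpoold /usr/local/bin/rocketpoold
sudo chmod +x /usr/local/bin/rocketpool /usr/local/bin/rocketpoold
```

The daemon goes in `/usr/local/bin/rocketpoold` so that it matches the `rp` alias defined below.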
Next, grab the validator restart script - Rocket Pool will use this when it needs to restart your Validator Client to load new keys after you create a new minipool:
wget https://github.com/rocket-pool/smartnode-install/raw/release/install/scripts/restart-vc.sh -O /srv/rocketpool/restart-vc.sh
chmod +x /srv/rocketpool/restart-vc.sh
Now open `~/.profile` with your editor of choice and add this line to the end:
alias rp="rocketpool -d /usr/local/bin/rocketpoold -c /srv/rocketpool"
Save it, then reload your profile:
source ~/.profile
This will let you interact with Rocket Pool's CLI via the `rp` command, which is a nice shortcut.
# Creating the Services
Next up, we'll create a `systemd` service for the Rocket Pool node daemon. This is the service that will automatically check for and claim RPL rewards after each checkpoint, and stake minipools once you've created them via `node deposit`.
Optionally, if you're an Oracle DAO member, create the corresponding `watchtower` service as well. If you are not an Oracle DAO member, you can ignore that service. A sketch of the node service is shown below.
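Here is a minimal sketch of a node service, assuming the service name `rp-node` and the binary and config locations used in this guide; the exact `ExecStart` flags are an assumption, so check `rocketpoold --help` for the correct ones. A watchtower service would be identical apart from its name and `watchtower` in place of `node`:

```
sudo nano /etc/systemd/system/rp-node.service
```

Contents:

```
[Unit]
Description=Rocket Pool node daemon
After=network.target

[Service]
Type=simple
User=rp
Restart=always
RestartSec=5
# Assumed invocation; verify the settings flag against rocketpoold --help
ExecStart=/usr/local/bin/rocketpoold --settings /srv/rocketpool/user-settings.yml node

[Install]
WantedBy=multi-user.target
```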
# Installing the Execution Client
For the sake of simplicity, we're going to use Geth as our example during this guide. If you have another client in mind, adapt these instructions to that client accordingly.
Start by making a folder for the Geth binary and the log script:
sudo mkdir /srv/geth
sudo chown $USER:$USER /srv/geth
Next, make a folder for the chain data on the SSD, owned by the `eth1` account; a sketch follows.
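For example, keeping the chain data next to the binary (the `/srv/geth/geth_data` path is just an illustrative choice; put it wherever your SSD is mounted):

```
sudo mkdir /srv/geth/geth_data
sudo chown eth1:eth1 /srv/geth/geth_data
```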
Now, grab the latest Geth binary for your architecture, or build it from source if you want. If you download it, it will be an archive; extract it and copy the `geth` binary inside to `/srv/geth`. For example, if you have an x64 system:
cd /tmp
wget https://gethstore.blob.core.windows.net/builds/geth-linux-amd64-1.10.3-991384a7.tar.gz
tar xzf geth-linux-amd64-1.10.3-991384a7.tar.gz
cp geth-linux-amd64-1.10.3-991384a7/geth /srv/geth
Next, create a systemd service for Geth. You can use this as a template, and modify the command line arguments as you see fit:
sudo nano /etc/systemd/system/geth.service
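Contents: a minimal sketch in which the `eth1` user and data directory match the setup above; treat the exact HTTP flags and ports as assumptions to tune for your machine:

```
[Unit]
Description=Geth
After=network.target

[Service]
Type=simple
User=eth1
Restart=always
RestartSec=5
# --mainnet selects the Ethereum mainnet; --datadir points at the chain data folder created earlier
ExecStart=/srv/geth/geth --mainnet --datadir /srv/geth/geth_data --http --http.addr 127.0.0.1 --http.port 8545 --http.api eth,net,web3

[Install]
WantedBy=multi-user.target
```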
NOTE
The above configuration is for the Ethereum mainnet. If you want to run on the Prater testnet instead, replace the `--mainnet` flag in the `ExecStart` string with `--goerli`.
Some notes:
- You can optionally use the `--cache` flag to lower the amount of RAM that Geth uses.
  - If you have 4 GB of RAM, set this to 256.
  - If you have 8 GB of RAM, you can leave it at 512 so it syncs faster and doesn't require pruning as frequently.
  - For larger amounts of RAM, you can ignore this flag.
- You can optionally use the `--maxpeers` flag to lower the peer count. The peer count isn't very important for the Execution client, and lowering it can free up some extra resources if you need them.
Lastly, add a log watcher script so you can check on Geth to see how it's doing:
sudo nano /srv/geth/log.sh
Contents:
#!/bin/bash
journalctl -u geth -b -f
Make it executable:
sudo chmod +x /srv/geth/log.sh
Now you can see the Geth logs by running `/srv/geth/log.sh`. This replaces the behavior that `rocketpool service logs eth1` used to provide, since that command isn't available without Docker.
All set on the Execution client; now for the Consensus client.
# Installing the Beacon Node
Start by making a folder for your Beacon Node binary and log script, following the same pattern as Geth for the client you want to run; a sketch is shown below.
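For example, using Lighthouse (the `/srv/lighthouse` path is an illustrative assumption; substitute your client's name and binary location):

```
sudo mkdir /srv/lighthouse
sudo chown $USER:$USER /srv/lighthouse
sudo mkdir /srv/lighthouse/lighthouse_data
sudo chown eth2:eth2 /srv/lighthouse/lighthouse_data
```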
Next, create a systemd service for your Beacon Node, with typical command line arguments for your client; a Lighthouse-flavored sketch is shown below.
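Continuing the Lighthouse example (the service name `lh-bn` and the exact flags are assumptions; check your client's documentation for the current flag names):

```
sudo nano /etc/systemd/system/lh-bn.service
```

Contents:

```
[Unit]
Description=Lighthouse beacon node
After=network.target

[Service]
Type=simple
User=eth2
Restart=always
RestartSec=5
# Points the Beacon Node at the Geth HTTP endpoint configured earlier
ExecStart=/srv/lighthouse/lighthouse beacon --network mainnet --datadir /srv/lighthouse/lighthouse_data --eth1-endpoints http://localhost:8545

[Install]
WantedBy=multi-user.target
```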
Some notes:
- The user is set to `eth2`.
- For arm64 systems, prefixing the `ExecStart` command with `ionice -c 2 -n 0` tells your system to give your Beacon Node the highest possible priority for disk I/O (behind critical system processes), so it can process and attest as quickly as possible.
Next, add a log watcher script in the folder you put your Beacon Node into:
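Continuing the Lighthouse example (the `lh-bn` service name is the assumption from the sketch above):

```
sudo nano /srv/lighthouse/log.sh
```

Contents:

```
#!/bin/bash
journalctl -u lh-bn -b -f
```

Make it executable:

```
sudo chmod +x /srv/lighthouse/log.sh
```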
With that, the Beacon Node is all set. On to the validator client!
# Installing the Validator Client
NOTE
Nimbus does not have a separate validator client at this time, so it is not included in these instructions. If you plan to use Nimbus, you've already taken care of this during the Beacon Node setup and can skip this section.
First, create a systemd service for your validator client, with typical command line arguments for your client; a Lighthouse-flavored sketch is shown below.
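Continuing the Lighthouse example (service name `lh-vc`; the data directory reflects where Rocket Pool writes validator keys in native mode, but the exact path and flags are assumptions to verify against your client's documentation):

```
sudo nano /etc/systemd/system/lh-vc.service
```

Contents:

```
[Unit]
Description=Lighthouse validator client
After=network.target

[Service]
Type=simple
# Runs as rp so it can read the validator keys that Rocket Pool creates
User=rp
Restart=always
RestartSec=5
ExecStart=/srv/lighthouse/lighthouse validator --network mainnet --datadir /srv/rocketpool/data/validators/lighthouse --init-slashing-protection

[Install]
WantedBy=multi-user.target
```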
Next, add a log watcher script in the folder you put your validator client into:
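For the Lighthouse example:

```
sudo nano /srv/lighthouse/vc-log.sh
```

Contents:

```
#!/bin/bash
journalctl -u lh-vc -b -f
```

Make it executable:

```
sudo chmod +x /srv/lighthouse/vc-log.sh
```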
Now, we have to give the `rp` user the ability to restart the validator client when new validator keys are created. Open the `sudoers` file:
sudo nano /etc/sudoers
Add this line under `# Cmnd alias specification`, replacing `<validator service name>` with the name of your validator service (e.g. `lh-vc`, `prysm-vc`, or `teku-vc`):
Cmnd_Alias RP_CMDS = /usr/bin/systemctl restart <validator service name>
Add this line under `# User privilege specification`:
rp ALL=(ALL) NOPASSWD: RP_CMDS
That whole section should now look like this:
# Cmnd alias specification
Cmnd_Alias RP_CMDS = /usr/bin/systemctl restart <validator service name>
# User privilege specification
root ALL=(ALL:ALL) ALL
rp ALL=(ALL) NOPASSWD: RP_CMDS
Finally, modify `/srv/rocketpool/restart-vc.sh`:
- Uncomment the line at the end and change it to `sudo systemctl restart <validator service name>`
The services are now installed.
# Configuring the Smartnode
Now that your services are all created, it's time to configure the Smartnode stack.
Please visit the Configuring the Smartnode Stack (Native Mode) guide, and return here when you are finished.
# Enabling and Running the Services
With all of the services installed, it's time to:
- Enable them so they'll automatically restart if they break, and automatically start on a reboot
- Start them all, as sketched below!
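For example, with the service names used in the Lighthouse-flavored sketches throughout this guide (substitute your own):

```
sudo systemctl daemon-reload
sudo systemctl enable geth lh-bn lh-vc rp-node
sudo systemctl start geth lh-bn lh-vc rp-node
```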
The last step is to create a wallet with `rp wallet init` or `rp wallet restore`.
Once that's done, change the permissions on the password and wallet files so the Rocket Pool CLI, node, and watchtower can all use them:
sudo chown rp:rp -R /srv/rocketpool/data
sudo chmod -R 775 /srv/rocketpool/data
sudo chmod 660 /srv/rocketpool/data/password
sudo chmod 660 /srv/rocketpool/data/wallet
And with that, you're ready to secure your operating system to protect your node.
Move on to the Securing your Node section next.