A Guide to Setting Up & Running a Full Nostr Node for Vector (Linux)
Experience Required: Intermediate
This is a simple guide teaching you how to install and self-host a full Nostr node, which supports the resiliency of both the wider network and Vector. It covers a step-by-step setup on a virtual private server (VPS) running Linux (Ubuntu 22).
Set Up Your Nostr Relay Node
Prerequisite
Set Up a VPS
To run a Nostr relay node, you will need a server or dedicated device that is ideally online around the clock, so that it supports the network reliably and consistently. That is why most users run a virtual private server (VPS) and pay a monthly fee for someone else to host it.
Requirements
Virtual Private Server (VPS)
Recommended: SSD or NVMe with 20 GB or more of storage and 4 GB of RAM
Recommended: account credit already topped up (helps ensure higher uptime)
Certbot will prompt you for an email and ask if you want to redirect HTTP to HTTPS (choose yes).
5. Verify Auto-Renewal
Certbot automatically installs a systemd timer for renewal. Verify it's active:
You can also test the renewal process:
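The two checks above can be run as follows (a sketch assuming Certbot's standard systemd timer on Ubuntu; run these on your server):

```shell
# Check that the certbot renewal timer is active
sudo systemctl status certbot.timer

# Dry-run the renewal to confirm it would succeed, without touching your certificates
sudo certbot renew --dry-run
```

If the dry run completes without errors, your certificate will renew automatically before it expires.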
Step 2
Compile
Run
Customize
Below the build commands is an example that you can copy and paste into your strfry.conf file. It lets you change and customize certain aspects such as the relay's name, description, contact, image, and more. It also allows you to keep your node running at all times, even after you exit your remote terminal session.
sudo apt install -y git g++ make libssl-dev zlib1g-dev liblmdb-dev libflatbuffers-dev libsecp256k1-dev libzstd-dev
git clone https://github.com/hoytech/strfry && cd strfry/
git submodule update --init
make setup-golpe
make -j4
./strfry relay
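Running `./strfry relay` in the foreground stops as soon as you close your SSH session. One common way to keep the relay running permanently is a systemd service. A minimal sketch (not from the upstream docs; the user name and paths are assumptions, so adjust them to match where you cloned and built strfry):

```ini
# /etc/systemd/system/strfry.service — hypothetical unit; adjust User and paths
[Unit]
Description=strfry Nostr relay
After=network.target

[Service]
User=ubuntu
WorkingDirectory=/home/ubuntu/strfry
ExecStart=/home/ubuntu/strfry/strfry relay
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now strfry`; lighter-weight alternatives include `tmux` or `nohup`.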
##
## Default strfry config
##
# Directory that contains the strfry LMDB database (restart required)
db = "./strfry-db/"
dbParams {
# Maximum number of threads/processes that can simultaneously have LMDB transactions open (restart required)
maxreaders = 256
# Size of mmap() to use when loading LMDB (default is 10TB, does *not* correspond to disk-space used) (restart required)
mapsize = 10995116277760
# Disables read-ahead when accessing the LMDB mapping. Reduces IO activity when DB size is larger than RAM. (restart required)
noReadAhead = false
}
events {
# Maximum size of normalised JSON, in bytes
maxEventSize = 65536
# Events with a created_at timestamp more than this many seconds in the future will be rejected
rejectEventsNewerThanSeconds = 900
# Events with a created_at timestamp more than this many seconds in the past will be rejected
rejectEventsOlderThanSeconds = 94608000
# Ephemeral events older than this will be rejected
rejectEphemeralEventsOlderThanSeconds = 60
# Ephemeral events will be deleted from the DB when older than this
ephemeralEventsLifetimeSeconds = 300
# Maximum number of tags allowed
maxNumTags = 2000
# Maximum size for tag values, in bytes
maxTagValSize = 1024
}
relay {
# Interface to listen on. Use 0.0.0.0 to listen on all interfaces (restart required)
bind = "127.0.0.1"
# Port to open for the nostr websocket protocol (restart required)
port = 7777
# Set OS-limit on maximum number of open files/sockets (if 0, don't attempt to set) (restart required)
nofiles = 1000000
# HTTP header that contains the client's real IP, before reverse proxying (ie x-real-ip) (MUST be all lower-case)
realIpHeader = ""
info {
# NIP-11: Name of this server. Short/descriptive (< 30 characters)
name = "Vector Asia"
# NIP-11: Detailed information about relay, free-form
description = "Vector's free and public Nostr Relay for Asia."
# NIP-11: Administrative nostr pubkey, for contact purposes
pubkey = "b8f92e61e71d4586ae6fe6970f73dd0fdb890109589e2f417bef445bbba92c8d"
# NIP-11: Alternative administrative contact (email, website, etc)
contact = "https://vectorapp.io"
# NIP-11: URL pointing to an image to be used as an icon for the relay
icon = "https://i.ibb.co/DHzy6rWc/vector-nostr-asia.png"
# NIP-11: List of supported NIPs as JSON array, or empty string to use default. Example: "[1,2]"
nips = "[1,2,17,18,24,25,30,38,40,65]"
}
# Maximum accepted incoming websocket frame size (should be larger than max event) (restart required)
maxWebsocketPayloadSize = 131072
# Maximum number of filters allowed in a REQ
maxReqFilterSize = 20
# Websocket-level PING message frequency (should be less than any reverse proxy idle timeouts) (restart required)
autoPingSeconds = 55
# If TCP keep-alive should be enabled (detect dropped connections to upstream reverse proxy)
enableTcpKeepalive = false
# How much uninterrupted CPU time a REQ query should get during its DB scan
queryTimesliceBudgetMicroseconds = 10000
# Maximum records that can be returned per filter
maxFilterLimit = 500
# Maximum number of subscriptions (concurrent REQs) a connection can have open at any time
maxSubsPerConnection = 20
writePolicy {
# If non-empty, path to an executable script that implements the writePolicy plugin logic
plugin = ""
}
compression {
# Use permessage-deflate compression if supported by client. Reduces bandwidth, but slight increase in CPU (restart required)
enabled = true
# Maintain a sliding window buffer for each connection. Improves compression, but uses more memory (restart required)
slidingWindow = true
}
logging {
# Dump all incoming messages
dumpInAll = false
# Dump all incoming EVENT messages
dumpInEvents = false
# Dump all incoming REQ/CLOSE messages
dumpInReqs = false
# Log performance metrics for initial REQ database scans
dbScanPerf = false
# Log reason for invalid event rejection? Can be disabled to silence excessive logging
invalidEvents = true
}
numThreads {
# Ingester threads: route incoming requests, validate events/sigs (restart required)
ingester = 3
# reqWorker threads: Handle initial DB scan for events (restart required)
reqWorker = 3
# reqMonitor threads: Handle filtering of new events (restart required)
reqMonitor = 3
# negentropy threads: Handle negentropy protocol messages (restart required)
negentropy = 2
}
negentropy {
# Support negentropy protocol messages
enabled = true
# Maximum records that sync will process before returning an error
maxSyncEvents = 1000000
}
}
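As a quick sanity check on the numbers above, the mapsize value is exactly 10 TiB (10 × 2^40 bytes), matching the "default is 10TB" noted in the comment; since it is only the size of the memory mapping, it does not consume that much disk space:

```shell
# 10 TiB in bytes: 10 * 2^40 = the mapsize value in strfry.conf
echo $((10 * 2**40))
# prints 10995116277760
```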