# How to run a HAF node - 2023
by @mahdiyari
<center>![stolen image from @mickiewicz](https://images.hive.blog/DQmVQDdug7HU4cSLvVaUQiGT4a5L8oBMHzZ5ooMv1MN91ox/haf_layers.png)</center>

Table of contents:
- HAF node for production
  - ZFS notes
  - Requirements
  - Docker installation
  - Build and replay
- HAF node for development
  - Requirements
  - Build and replay

***

### HAF for production
It is highly recommended to set up ZFS compression with LZ4. It barely affects performance but reduces storage needs by roughly 50%.

Setting up ZFS might seem complicated, but it is actually quite easy, and there are plenty of guides out there on how to do it. I'll just add some notes.

### ZFS notes
When setting up on Hetzner servers, for example, I enable RAID 0, allocate 50-100 GB to `/`, and leave the rest unallocated during the server setup. After the first boot, I create a partition on the remaining space of each disk using fdisk, and use those partitions for ZFS.
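As a minimal sketch of that partitioning step (device and partition names are examples; check `lsblk` on your machine):

```bash
# create a partition on the unallocated tail of each disk
# (inside fdisk: n = new partition, accept the defaults, w = write and quit)
sudo fdisk /dev/nvme0n1
sudo fdisk /dev/nvme1n1

# verify the new partitions, e.g. nvme0n1p4 and nvme1n1p4
lsblk
```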

I also use the configs from [here](https://bun.uptrace.dev/postgres/tuning-zfs-aws-ebs.html#zfs-config) under "ZFS config".

```bash
zpool create -o autoexpand=on pg /dev/nvme0n1p4 /dev/nvme1n1p4

zfs set recordsize=128k pg

# enable lz4 compression
zfs set compression=lz4 pg

# disable access time updates
zfs set atime=off pg

# enable improved extended attributes
zfs set xattr=sa pg

# reduce amount of metadata (may improve random writes)
zfs set redundant_metadata=most pg
```
You will want to change the ARC size depending on your RAM. By default it is 50% of your RAM, which is fine for a machine with 64+ GB of RAM, but with less RAM you must lower the ARC size.

About 25 GB of RAM is used for shared_memory, and you will need 8-16 GB of free RAM for hived, PostgreSQL, and the general OS, depending on your use case. The rest is left for the ZFS ARC.

To see the current ARC size limit, run `cat /proc/spl/kstat/zfs/arcstats | grep c_max`
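The reported value is in bytes; a small one-liner (a convenience sketch) converts it to GB:

```bash
# c_max is the third column of its arcstats line: name, type, value
awk '/^c_max/ {printf "%.1f GB\n", $3/1073741824}' /proc/spl/kstat/zfs/arcstats
```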

1 GB is 1073741824 bytes, so to set it to 50 GB: 50 × 1073741824 = 53687091200.
```bash
# set ARC size to 50 GB (takes effect immediately)
echo 53687091200 | sudo tee /sys/module/zfs/parameters/zfs_arc_max
```
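This value resets on reboot. To make it persistent, you can set the module parameter in the modprobe configuration (a minimal sketch; the file name is conventional):

```bash
# persist the ARC limit across reboots
echo 'options zfs zfs_arc_max=53687091200' | sudo tee /etc/modprobe.d/zfs.conf
# on Ubuntu, rebuild the initramfs so the option is applied at boot
sudo update-initramfs -u
```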

***

### Requirements (production)
Storage: 2.5 TB (LZ4-compressed) or 5+ TB (uncompressed), increasing over time
RAM: you might make it work with 32+ GB; 64+ GB recommended
OS: Ubuntu 22

If you don't mind reducing the lifespan of your NVMe/SSD, or potentially spending over a week syncing on an HDD, you can put shared_memory on disk. I don't recommend this at all, but if you insist, you can get away with less RAM.

It is also recommended to give spare RAM to the ZFS ARC rather than to the PostgreSQL cache.

It is also worth going with NVMe for storage. You can get away with a 2 TB ZFS pool for HAF alone, but 2.5 TB keeps you safe for at least a while.

***

### Setting up Docker
Install Docker using the official convenience script:
```bash
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
```
Add your user to the docker group (to run Docker as non-root):
```bash
# the official Docker docs recommend usermod for this; replace USERNAME with your user
sudo usermod -aG docker USERNAME
```
You must log out and back in after this for the group change to take effect.
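After re-logging in, you can verify that Docker works without root:

```bash
docker run --rm hello-world
```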

Change the logging driver so container logs don't fill up the storage, in `/etc/docker/daemon.json`:
```json
{
  "log-driver": "local"
}
```
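Optionally, the `local` driver also accepts explicit rotation limits via `log-opts` (these keys are part of Docker's built-in log rotation):

```json
{
  "log-driver": "local",
  "log-opts": {
    "max-size": "100m",
    "max-file": "5"
  }
}
```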
Restart Docker:
```bash
sudo systemctl restart docker
```
You can check the active logging driver with:
```bash
docker info --format '{{.LoggingDriver}}'
```

***

### Running HAF (production)
Install the requirements:
```bash
sudo apt update
sudo apt install git wget
```

`/pg` is my ZFS pool mount point:

```bash
cd /pg
git clone https://gitlab.syncad.com/hive/haf
cd haf
git checkout v1.27.4.0
git submodule update --init --recursive
```
We run the build and run commands from a separate working directory:
```bash
mkdir -p /pg/workdir
cd /pg/workdir
```
Build the HAF image:
```bash
../haf/scripts/ci-helpers/build_instance.sh v1.27.4.0 ../haf/ registry.gitlab.syncad.com/hive/haf/
```
Make sure `/dev/shm` has at least 25 GB allocated. You can resize it with:
```bash
sudo mount -o remount,size=25G /dev/shm
```
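The remount above does not survive a reboot. To make the size persistent, you can pin it in `/etc/fstab` (a sketch, assuming nothing else manages `/dev/shm` on your system):

```bash
# make the 25G /dev/shm size persistent across reboots
echo 'tmpfs /dev/shm tmpfs defaults,size=25G 0 0' | sudo tee -a /etc/fstab
```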
Run the following command to generate the config.ini file:
```bash
../haf/scripts/run_hived_img.sh registry.gitlab.syncad.com/hive/haf/instance:instance-v1.27.4.0 --name=haf-instance --data-dir=$(pwd)/haf-datadir --dump-config
```
Then you can edit `/pg/workdir/haf-datadir/config.ini` and add/replace the following plugins as you see fit:
```
plugin = witness account_by_key account_by_key_api wallet_bridge_api
plugin = database_api condenser_api rc rc_api transaction_status transaction_status_api
plugin = block_api network_broadcast_api
plugin = market_history market_history_api
```
You can also add plugins later and restart the node. The only exceptions are `market_history` and `market_history_api`: if you add those later, you have to replay the node again.

Now you have two options: you can either download an existing block_log and replay the node, or sync from P2P. Replaying is usually faster and takes less than a day (maybe 20 hours). Follow one of the sections below.

**- Replaying**
Download the block_log provided by @gtg (you can run this inside tmux or screen):
```bash
cd /pg/workdir
mkdir -p haf-datadir/blockchain
cd haf-datadir/blockchain
wget https://gtg.openhive.network/get/blockchain/block_log
```

You might need to change the permissions of new files/folders (execute before running HAF):
```bash
sudo chmod -R 777 /pg/workdir
```

Run & replay:
```bash
cd /pg/workdir
../haf/scripts/run_hived_img.sh registry.gitlab.syncad.com/hive/haf/instance:instance-v1.27.4.0 --name=haf-instance --data-dir=$(pwd)/haf-datadir --shared-file-dir=/dev/shm --replay --detach
```

**Note:** You can use the same replay command after stopping the node to continue the replay from where it left off.

**- P2P sync**
Alternatively, you can just start HAF and it will sync from P2P. I would assume this takes 1-3 days (I have never tested it myself).

```bash
cd /pg/workdir
../haf/scripts/run_hived_img.sh registry.gitlab.syncad.com/hive/haf/instance:instance-v1.27.4.0 --name=haf-instance --data-dir=$(pwd)/haf-datadir --shared-file-dir=/dev/shm --detach
```

Check the logs:
```bash
docker logs haf-instance -f --tail 50
```

**Note:** You can use the same P2P start command after stopping the node to continue syncing from where it left off.
***

### HAF for development
This setup takes around 10-20 minutes and is very useful for development and testing. I usually have this on my local machine.

Requirements:
Storage: 10 GB
RAM: 8+ GB (with only 8 GB you might need swap/zram for building)
OS: Ubuntu 22

The process is the same as production up until the build step; I'll paste the commands here. Docker installation is the same as above.

I'm using `/pg` here, but you can change it to whatever folder you have.

```bash
sudo apt update
sudo apt install git wget

cd /pg
git clone https://gitlab.syncad.com/hive/haf
cd haf

# develop branch is recommended for development
# git checkout develop
git checkout v1.27.4.0
git submodule update --init --recursive

mkdir -p /pg/workdir
cd /pg/workdir

../haf/scripts/ci-helpers/build_instance.sh v1.27.4.0 ../haf/ registry.gitlab.syncad.com/hive/haf/
```

Now we download the 5-million-block block_log provided by @gtg:
```bash
cd /pg/workdir
mkdir -p haf-datadir/blockchain
cd haf-datadir/blockchain
wget https://gtg.openhive.network/get/blockchain/block_log.5M
```
Rename it:
```bash
mv block_log.5M block_log
```

The replay command gets an extra option to stop at block 5 million:
```bash
cd /pg/workdir
../haf/scripts/run_hived_img.sh registry.gitlab.syncad.com/hive/haf/instance:instance-v1.27.4.0 --name=haf-instance --data-dir=$(pwd)/haf-datadir --shared-file-dir=/dev/shm --stop-replay-at-block=5000000 --replay --detach
```

Check the logs:
```bash
docker logs haf-instance -f --tail 50
```


The HAF node will stop replaying at block 5 million. Use the same replay command ☝️ to start the node again if you stop it.

***

### General notes:
- You don't need to add any plugins for hived manually; the Docker files take care of them.
- To find the local IP address of the Docker container, run `docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' haf-instance`. It is usually `172.17.0.2` for the first container (see the connectivity check after this list).
- To see which ports are exposed, run `docker ps`
- To stop the node, run `docker stop haf-instance`
- For replaying from scratch, you have to remove the shared_memory file from `/dev/shm` and also remove the `/pg/workdir/haf-datadir/haf_db_store` directory
- You can override PostgreSQL options by adding them in `/pg/workdir/haf-datadir/haf_postgresql_conf.d/custom_postgres.conf`
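As referenced in the notes above, a quick connectivity check against the HAF database might look like this. It is a sketch: `haf_block_log` and `haf_admin` are HAF's default database and role names as far as I know, so verify them on your node.

```bash
# query the latest block stored in HAF over the container's local IP
psql -h 172.17.0.2 -p 5432 -U haf_admin -d haf_block_log \
  -c 'SELECT num FROM hive.blocks ORDER BY num DESC LIMIT 1;'
```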

***

The official GitLab repository includes more information; see the `doc` directory:
https://gitlab.syncad.com/hive/haf

***

~~I'm preparing another guide for hivemind, account history and jussi. Hopefully will be published by the time your HAF node is ready.~~

Update: [Running Hivemind & HAfAH on HAF + Jussi](/hive-139531/@mahdiyari/running-hivemind-and-hafah-on-haf--jussi-2023)

Feel free to ask anything.

***

<center>![cat-pixabay](https://files.peakd.com/file/peakd-hive/mahdiyari/23uFRp7fcDNZqL5g3Rn5i5wJXRHCqJthDWV6sEe2e5j9T2k8yKy4vUK3tw7WthfYJDvcs.jpg)</center>
***

### Comments
**@cryptoshots.nft:**
@mahdiyari Thanks for the guide 💪
Is there any incentive at the moment to run a HAF node?
**@blocktrades** (reply):
A lot of API nodes currently run HAF. After the next release, I expect all of them will (because new hivemind features are only being added to the HAF-based version of hivemind).

Other than general purpose API node operators, anyone who builds a HAF app will need to run a HAF server (because their app will run on it) or else they will need to convince someone else who has a HAF server to run it for them.

We're also building several new HAF apps that will probably encourage more people to want to run a HAF server.
**@spiritsurge** (reply):
Cannot wait for these apps to be ready. 
**@ibbtammy:**
Wow!!! You developers are just so cool!
**@rishi556:**
> Or you can just start haf and it will sync from p2p - I would assume it takes 1-3 days (never tested myself)

On a Ryzen 7950X machine with 128 GB of RAM, I did it in just about 24 hours (either slightly under or over, I don't recall which). This was about 1.5 months ago, so it should be basically the same now.
**@spiritsurge:**
Instead of running a HAF node inside a multiplexer session, I would highly recommend using a .service file to run it in the background. This simplifies starting and stopping services without having to jump into different sessions repeatedly.

This has been a core thing that I have used for my own unity games to run with mongodb.

Also Tmux is more customizable than screen, but each to their own.
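For illustration, a minimal unit along those lines might look like the sketch below. It assumes the `haf-instance` container from this guide and a unit name of `haf.service`; treat it as a starting point, not a tested service file.

```bash
# create a minimal systemd unit wrapping the existing container
sudo tee /etc/systemd/system/haf.service > /dev/null <<'EOF'
[Unit]
Description=HAF node (docker container haf-instance)
Requires=docker.service
After=docker.service

[Service]
ExecStart=/usr/bin/docker start -a haf-instance
ExecStop=/usr/bin/docker stop haf-instance
Restart=always

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now haf.service
```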