# Details about the setup of my public node

by @pharesim (edited)
This post will be a go-to reference for myself, and a resource for anyone interested in setting up an api node. It's very technical, so if you don't belong to one of those two groups you can skip it.

<center>https://upload.wikimedia.org/wikipedia/commons/thumb/b/b8/Compiling.jpg/640px-Compiling.jpg</center>

---------

# Introduction

My current setup is close to the default hivemind setup provided in the example jussi config. After talks with @gtg, one of the nodes offers a few more apis for performance reasons, as the fat node is very slow. There are surely suboptimal settings or even misconfigurations; optimization will be an ongoing process. See the end of the post for more info, and make sure to read and understand everything before you start. I will update this post with any important changes.

---------

# Hardware

First, we need hardware. The setup consists of 3 nodes, for which I selected the following specs:

- _hivemind_
  - 32GB RAM
  - 2x240GB SSD RAID0
- _fat_
  - 64GB RAM
  - 2x480GB SSD RAID0
- _accounthistory_
  - 64GB RAM
  - 2x512GB NVMe RAID0 (64GB SWAP)

All are set up with a clean install of Ubuntu 18.04.

-------------------

# Setup

## Common steps

Set up each server for secure logins with a dedicated user. The user on all machines will be called "hive" throughout this guide. Individual needs may differ, so I won't go into detail here and only provide the steps necessary to proceed.
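For completeness, a minimal sketch of creating that user (assuming you start from a root login; adapt to your own security policy):
```
# create the dedicated user and give it sudo rights
adduser hive
usermod -aG sudo hive
# allow your SSH key for the new user, then disable password logins in sshd_config
mkdir -p /home/hive/.ssh
cp ~/.ssh/authorized_keys /home/hive/.ssh/
chown -R hive:hive /home/hive/.ssh
```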

```sudo apt-get update && sudo apt-get upgrade```
```sudo apt-get install -y screen git nginx certbot python-certbot-nginx```

## Step 1: make everything sync

### hivemind node

install software
```
cd ~
sudo apt-get install -y python3 python3-pip postgresql postgresql-contrib docker.io
git clone https://gitlab.syncad.com/hive/hivemind.git
```

```
cd hivemind
sudo pip3 install -e .[test]
```

setup database
```
sudo su postgres
createdb hive
```

Create db user hive and grant access to database
```createuser --interactive```
```psql```
```GRANT ALL PRIVILEGES ON DATABASE hive TO hive;```
```\q```
```exit```
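If you prefer to script this instead of using the interactive prompts, the same can be done in one go (a sketch, assuming the password `pass` used in the sync script below):
```
sudo -u postgres psql -c "CREATE ROLE hive LOGIN PASSWORD 'pass';"
sudo -u postgres psql -c "GRANT ALL PRIVILEGES ON DATABASE hive TO hive;"
```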

optimize postgres
```sudo nano /etc/postgresql/10/main/postgresql.conf```
use https://pgtune.leopard.in.ua/ to find the optimal settings for your machine. I used the following:
```
# DB Version: 10
# OS Type: linux
# DB Type: web
# Total Memory (RAM): 32 GB
# Data Storage: ssd

max_connections = 200
shared_buffers = 8GB
effective_cache_size = 24GB
maintenance_work_mem = 2GB
checkpoint_completion_target = 0.7
wal_buffers = 16MB
default_statistics_target = 100
random_page_cost = 1.1
effective_io_concurrency = 200
work_mem = 20971kB
min_wal_size = 1GB
max_wal_size = 4GB
```
```sudo service postgresql restart```

The irredeemables list is a blacklist containing mostly mass spammers. It's recommended to use it if you serve browser-based interfaces, because the amount of comments by these accounts creates a lot of traffic and is a burden on browsers. It's defined in ```/home/hive/hivemind/hive/conf.py``` under ```--muted-accounts-url```. You can change it there, or set the environment variable ```MUTED_ACCOUNTS_URL``` in both scripts (sync.sh and hivemind.sh, created below) if you do not want to use the default. I offer an [empty version](https://raw.githubusercontent.com/pharesim/irredeemables/master/full.txt) if you don't want to filter the results.
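For example, to use the empty version instead of the default, add this export to both scripts before the `hive sync` / `hive server` call:
```
# use a custom (here: empty) irredeemables list instead of the default
export MUTED_ACCOUNTS_URL='https://raw.githubusercontent.com/pharesim/irredeemables/master/full.txt'
```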

Create sync script
```nano sync.sh```
Insert the following (the STEEMD_URL points at my fat node temporarily; once your own fat node has synced, update it and restart the script to speed up the hivemind sync):
```
#!/bin/bash
export DATABASE_URL=postgresql://hive:pass@localhost:5432/hive
export STEEMD_URL='{"default": "https://fat.pharesim.me"}'
export HTTP_SERVER_PORT=28091
hive sync
```

```
chmod +x sync.sh
screen -S hivesync
```
```./sync.sh```
Use ```Ctrl-a d``` to detach screen, ```screen -r hivesync``` to reattach

The whole sync process takes about a week. Don't forget to change the STEEMD_URL when your fat node is finished. Unlike the steemd replays, you can interrupt this sync at any time and it picks up where you stopped.

The sync is finished when you see single blocks coming in. Keep it running, and set up the server:
```
cp sync.sh hivemind.sh
nano hivemind.sh
```
Change ```sync``` at the end to ```server```

```
screen -S hivemind
```
```./hivemind.sh```
Use ```Ctrl-a d``` to detach screen, ```screen -r hivemind``` to reattach
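To check the server is answering, you can send it a test request (a sketch; `hive.db_head_state` is hivemind's status method, assuming the default method set):
```
curl -s -d '{"jsonrpc":"2.0","method":"hive.db_head_state","params":{},"id":1}' http://localhost:28091
```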

### steemd nodes

Both the fat and the accounthistory node will run an instance of steemd. These are the steps to prepare them:
```
sudo apt-get install -y autoconf automake cmake g++ git libbz2-dev libsnappy-dev libssl-dev libtool make pkg-config python3-jinja2 libboost-chrono-dev libboost-context-dev libboost-coroutine-dev libboost-date-time-dev libboost-filesystem-dev libboost-iostreams-dev libboost-locale-dev libboost-program-options-dev libboost-serialization-dev libboost-signals-dev libboost-system-dev libboost-test-dev libboost-thread-dev doxygen libncurses5-dev libreadline-dev perl ntp
```
```
cd
git clone https://github.com/openhive-network/hive
cd hive
git checkout v0.23.0
git submodule update --init --recursive
mkdir build
cd build
```
The build options differ for the two nodes:

_fat_
```
cmake -DCMAKE_BUILD_TYPE=Release -DLOW_MEMORY_NODE=OFF -DCLEAR_VOTES=OFF -DSKIP_BY_TX_ID=OFF -DBUILD_STEEM_TESTNET=OFF -DENABLE_MIRA=ON -DSTEEM_STATIC_BUILD=ON ..
```

_accounthistory_
```
cmake -DCMAKE_BUILD_TYPE=Release -DLOW_MEMORY_NODE=ON -DCLEAR_VOTES=ON -DSKIP_BY_TX_ID=OFF -DBUILD_STEEM_TESTNET=OFF -DENABLE_MIRA=OFF -DSTEEM_STATIC_BUILD=ON ..
```

Again on both:
```
make -j$(nproc) steemd
cd
mkdir bin
cp /home/hive/hive/build/programs/steemd/steemd bin/v0.23.0
mkdir .steemd
nano .steemd/config.ini
```

And again, the configs differ for the two nodes

_fat_
```
log-appender = {"appender":"stderr","stream":"std_error"} {"appender":"p2p","file":"logs/p2p/p2p.log"}
log-logger = {"name":"default","level":"info","appender":"stderr"} {"name":"p2p","level":"warn","appender":"p2p"}
backtrace = yes

plugin = webserver p2p json_rpc witness account_by_key reputation market_history
plugin = database_api account_by_key_api network_broadcast_api reputation_api
plugin = market_history_api condenser_api block_api rc_api

history-disable-pruning = 0
account-history-rocksdb-path = "blockchain/account-history-rocksdb-storage"
block-data-export-file = NONE
block-log-info-print-interval-seconds = 86400
block-log-info-print-irreversible = 1
block-log-info-print-file = ILOG
sps-remove-threshold = 200

shared-file-dir = "blockchain"
shared-file-size = 360G

shared-file-full-threshold = 0
shared-file-scale-rate = 0
follow-max-feed-size = 500
follow-start-feeds = 0
market-history-bucket-size = [15,60,300,3600,86400]
market-history-buckets-per-size = 5760

p2p-seed-node = anyx.io:2001 gtg.steem.house:2001 seed.jesta.us:2001

rc-skip-reject-not-enough-rc = 0
rc-compute-historical-rc = 0
statsd-batchsize = 1
tags-start-promoted = 0
tags-skip-startup-update = 0
transaction-status-block-depth = 64000
transaction-status-track-after-block = 0

webserver-http-endpoint = 0.0.0.0:28091
webserver-ws-endpoint = 0.0.0.0:28090
webserver-thread-pool-size = 32

enable-stale-production = 0
required-participation = 33
witness-skip-enforce-bandwidth = 1
```
Not sure about the shared-file-size here, as MIRA stores state in RocksDB. Better safe than sorry...

_accounthistory_
```
log-appender = {"appender":"stderr","stream":"std_error"} {"appender":"p2p","file":"logs/p2p/p2p.log"}
log-logger = {"name":"default","level":"info","appender":"stderr"} {"name":"p2p","level":"warn","appender":"p2p"}
backtrace = yes

plugin = webserver p2p json_rpc witness
plugin = rc market_history_account_history_rocksdb transaction_status account_by_key
plugin = database_api condenser_api market_history_api account_history_api transaction_status_api account_by_key_api
plugin = block_api network_broadcast_api rc_api

history-disable-pruning = 1
account-history-rocksdb-path = "blockchain/account-history-rocksdb-storage"
block-data-export-file = NONE
block-log-info-print-interval-seconds = 86400
block-log-info-print-irreversible = 1
block-log-info-print-file = ILOG
sps-remove-threshold = 200

shared-file-dir = "/run/hive"
shared-file-size = 120G

shared-file-full-threshold = 9500
shared-file-scale-rate = 1000
flush-state-interval = 0
follow-max-feed-size = 500
follow-start-feeds = 0
market-history-bucket-size = [15,60,300,3600,86400]
market-history-buckets-per-size = 5760

p2p-seed-node = anyx.io:2001 seed.jesta.us:2001

rc-skip-reject-not-enough-rc = 0
rc-compute-historical-rc = 0
statsd-batchsize = 1
tags-start-promoted = 0
tags-skip-startup-update = 0
transaction-status-block-depth = 64000
transaction-status-track-after-block = 42000000

webserver-http-endpoint = 0.0.0.0:28091
webserver-ws-endpoint = 0.0.0.0:28090
webserver-thread-pool-size = 32

```

The _fat_ node also needs a database.cfg
```nano .steemd/database.cfg```

These settings are for 32GB of RAM. Adapt global.shared_cache.capacity, global.write_buffer_manager.write_buffer_size and global.object_count accordingly (see the example after the block).
```
{
  "global": {
    "shared_cache": {
      "capacity": "21474836480"
    },
    "write_buffer_manager": {
      "write_buffer_size": "4294967296"
    },
    "object_count": 250000,
    "statistics": false
  },
  "base": {
    "optimize_level_style_compaction": true,
    "increase_parallelism": true,
    "block_based_table_options": {
      "block_size": 8192,
      "cache_index_and_filter_blocks": true,
      "bloom_filter_policy": {
        "bits_per_key": 10,
        "use_block_based_builder": false
      }
    }
  }
}
```
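As a worked example, on a 64GB machine you would roughly double the three values (my assumption that they scale linearly with RAM):
```
# 32GB -> 64GB, assuming linear scaling:
# shared_cache.capacity:     21474836480 -> 42949672960   (20GB -> 40GB)
# write_buffer_size:          4294967296 ->  8589934592   ( 4GB ->  8GB)
# object_count:                   250000 ->       500000
```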
The _fat_ node also needs increased file limits (hive is the username; adapt the values to your machine again)
```sudo nano /etc/security/limits.conf```
Insert near the end of the file
```
hive soft nofile 262140
hive hard nofile 262140
```
```sudo nano /etc/sysctl.conf```
Insert near the end of the file
```
fs.file-max = 2097152
```
Run ```sudo sysctl -p``` and log in to the server again for these to take effect.
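After logging back in, a quick check that the new limit is active:
```
ulimit -n    # should print 262140
```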

The _accounthistory_ node requires a change of the size of /run
```sudo mount -o remount,size=120G /run```
and a directory /run/hive
```
sudo mkdir /run/hive
sudo chown hive:hive /run/hive
```
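Note that both the remount and the directory are gone after a reboot. One way to reapply them automatically (a sketch using root's crontab; other mechanisms work just as well):
```
# add to root's crontab with: sudo crontab -e
@reboot mount -o remount,size=120G /run && mkdir -p /run/hive && chown hive:hive /run/hive
```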

Then continue on both servers.
Download block_log.index and block_log
```
rsync -avh --progress --append rsync://files.privex.io/hive/block_log.index .steemd/blockchain/block_log.index
rsync -avh --progress --append rsync://files.privex.io/hive/block_log .steemd/blockchain/block_log
```
Go for a walk or have dinner.

Start up steemd and replay blockchain
```screen -S hive```
```
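# tune VM dirty-page writeback so the replay isn't slowed down by constant disk flushes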
echo    75 | sudo tee /proc/sys/vm/dirty_background_ratio
echo  1000 | sudo tee /proc/sys/vm/dirty_expire_centisecs
echo    80 | sudo tee /proc/sys/vm/dirty_ratio
echo 30000 | sudo tee /proc/sys/vm/dirty_writeback_centisecs
```
```~/bin/v0.23.0 --replay```
Use ```Ctrl-a d``` to detach from screen, and ```screen -r hive``` to reattach.

The replay takes a bit more than 2 days on the fat node, less on accounthistory. Do not interrupt it, or you will have to start over. If you are syncing a hivemind node, don't forget to point its STEEMD_URL at the fat node once that has finished.

## Step 2: webserver + routing

### all nodes

All requests will be proxied by nginx, so we need this on all machines. We will install SSL certificates, so all communication is encrypted and all nodes can be called individually.

```sudo nano /etc/nginx/sites-enabled/hive```
The config is the same for each node; only change the server_name:
```
upstream hivesrvs {
# Dirty Hack. Causes nginx to retry node
   server 127.0.0.1:28091;
   server 127.0.0.1:28091;
   server 127.0.0.1:28091;
   server 127.0.0.1:28091;
   keepalive 10;
}

server {
    server_name hivemind/fat/acchist.you.tld;
    root /var/www/html/;

    location ~ ^(/|/ws) {
        proxy_pass http://hivesrvs;
        proxy_set_header Connection "";
        include snippets/rpc.conf;
    }
}
```
Add the rpc.conf to each server
```sudo nano /etc/nginx/snippets/rpc.conf```
Insert
```
access_log off;

proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
proxy_connect_timeout 10;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";

keepalive_timeout 65;
keepalive_requests 100000;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
proxy_ssl_verify off;
```

Let certbot configure the domains for automatic redirect to https
```sudo certbot --nginx```
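The Ubuntu certbot package ships an automatic renewal timer; you can verify that renewals will go through with:
```
sudo certbot renew --dry-run
```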

### hivemind node

We need an additional file for nginx, for the general entry point 
```
sudo cp /etc/nginx/sites-enabled/hive /etc/nginx/sites-enabled/api
sudo nano /etc/nginx/sites-enabled/api
```

Change both occurrences of ```hivesrvs``` to ```jussisrv```, the ports from ```28091``` to ```9000```, and the server_name to api.you.tld, or whatever you want your node to be accessible on.
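The api file should then look roughly like this (a sketch; server_name is whatever domain you chose):
```
upstream jussisrv {
   server 127.0.0.1:9000;
   server 127.0.0.1:9000;
   server 127.0.0.1:9000;
   server 127.0.0.1:9000;
   keepalive 10;
}

server {
    server_name api.you.tld;
    root /var/www/html/;

    location ~ ^(/|/ws) {
        proxy_pass http://jussisrv;
        proxy_set_header Connection "";
        include snippets/rpc.conf;
    }
}
```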


#### jussi

```
cd ~
git clone https://gitlab.syncad.com/hive/jussi.git
```

Create build script
```nano build.sh```
and insert
```
#!/bin/bash
cd /home/hive/jussi
sudo docker build -t="$USER/jussi:$(git rev-parse --abbrev-ref HEAD)" .
```
```chmod +x build.sh```

Create run script
```nano jussi.sh```
and insert
```
#!/bin/bash
cd /home/hive/jussi
sudo docker run -itp 9000:8080 --log-opt max-size=50m "$USER/jussi:$(git rev-parse --abbrev-ref HEAD)"
```
```chmod +x jussi.sh```

```screen -S jussi```
```
cd ~/jussi
nano DEV_config.json
```

Currently, my config looks like this:

```
{
    "limits": { "accounts_blacklist": [ "accounttoblock" ] },
    "upstreams": [
      {
        "name": "steemd",
        "translate_to_appbase": true,
        "urls": [["steemd", "https://fat1.pharesim.me" ]],
        "ttls": [["steemd", 3]],
        "timeouts": [["steemd",3]]
      },
      {
        "name": "appbase",
        "urls": [
          ["appbase", "https://fat1.pharesim.me"],

          ["appbase.account_history_api", "https://acchist1.pharesim.me"],
          ["appbase.condenser_api.get_account_history", "https://acchist1.pharesim.me"],
          ["appbase.condenser_api.get_ops_in_block", "https://acchist1.pharesim.me"],
          ["appbase.condenser_api.get_transaction", "https://acchist1.pharesim.me"],

          ["appbase.condenser_api.get_followers", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_following", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_follow_count", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_discussions_by_trending", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_discussions_by_hot", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_discussions_by_promoted", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_discussions_by_created", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_discussions_by_blog", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_discussions_by_feed", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_discussions_by_comments", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_reblogged_by", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_replies_by_last_update", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_trending_tags", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_discussions_by_author_before_date", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_post_discussions_by_payout", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_comment_discussions_by_payout", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_blog", "http://localhost:28091"],
          ["appbase.condenser_api.get_blog_entries", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_account_votes", "https://hivemind.pharesim.me"],
          ["appbase.condenser_api.get_state", "https://hivemind.pharesim.me"],

          ["appbase.condenser_api.get_state.params=['witnesses']", "https://acchist1.pharesim.me"],
          ["appbase.condenser_api.get_state.params=['/witnesses']", "https://acchist1.pharesim.me"],
          ["appbase.condenser_api.get_state.params=['/~witnesses']", "https://acchist1.pharesim.me"],
          ["appbase.condenser_api.get_state.params=['~witnesses']", "https://acchist1.pharesim.me"],

          ["appbase.follow_api", "https://hivemind.pharesim.me"],
          ["appbase.tags_api", "https://hivemind.pharesim.me"],

          ["appbase.market_history_api", "https://acchist1.pharesim.me"],
          ["appbase.transaction_status_api", "https://acchist1.pharesim.me"],
          ["appbase.account_by_key_api", "https://acchist1.pharesim.me"],
          ["appbase.block_api", "https://acchist1.pharesim.me"],
          ["appbase.network_broadcast_api", "https://acchist1.pharesim.me"],
          ["appbase.rc_api", "https://acchist1.pharesim.me"]
        ],
        "ttls": [
          ["appbase", 3],
          ["appbase.login_api",-1],
          ["appbase.network_broadcast_api", -1],
          ["appbase.follow_api", 10],
          ["appbase.market_history_api", 1],
          ["appbase.condenser_api", 3],
          ["appbase.condenser_api.get_block", -2],
          ["appbase.condenser_api.get_block_header", -2],
          ["appbase.condenser_api.get_content", 1],
          ["appbase.condenser_api.get_state", 1],
          ["appbase.condenser_api.get_state.params=['/trending']", 30],
          ["appbase.condenser_api.get_state.params=['trending']", 30],
          ["appbase.condenser_api.get_state.params=['/hot']", 30],
          ["appbase.condenser_api.get_state.params=['/welcome']", 30],
          ["appbase.condenser_api.get_state.params=['/promoted']", 30],
          ["appbase.condenser_api.get_state.params=['/created']", 10],
          ["appbase.condenser_api.get_dynamic_global_properties", 3]
        ],
        "timeouts": [
          ["appbase", 3],
          ["appbase.network_broadcast_api",0],
          ["appbase.chain_api.push_block", 0],
          ["appbase.chain_api.push_transaction", 0],
          ["appbase.condenser_api.broadcast_block", 0],
          ["appbase.condenser_api.broadcast_transaction", 0],
          ["appbase.condenser_api.broadcast_transaction_synchronous", 0],
          ["appbase.condenser_api.get_account_history", 20],
          ["appbase.condenser_api.get_account_votes", 20],
          ["appbase.condenser_api.get_ops_in_block.params=[2889020,false]", 20],
          ["appbase.account_history_api.get_account_history", 20],
          ["appbase.account_history_api.get_ops_in_block.params={\"block_num\":2889020,\"only_virtual\":false}", 20]
        ]
      },
      {
        "name": "hive",
        "urls": [["hive", "http://localhost:28091"]],
        "ttls": [["hive", -1]],
        "timeouts": [["hive", 30]]
      },
      {
        "name": "bridge",
        "translate_to_appbase": false,
        "urls": [["bridge", "http://localhost:28091"]],
        "ttls": [["bridge", -1]],
        "timeouts": [["bridge", 30]]
      }
    ]
  }
```

```cd```
```./build.sh```
This takes a while. When finished:
```./jussi.sh```
Use ```Ctrl-a d``` to detach screen, ```screen -r jussi``` to reattach
If you update your config, run build.sh outside of screen, then reattach to the screen and restart the run script. (There may be faster ways to update the docker container with a new config, but I'm new to this.)
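Once jussi is running, a quick sanity check against the local port (any method from the config works as a test):
```
curl -s -d '{"jsonrpc":"2.0","method":"condenser_api.get_dynamic_global_properties","params":[],"id":1}' http://localhost:9000
```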

### Something about steemd apis

As you might have realized, there is some duplication in the apis _fat_ and _accounthistory_ provide. That's because of what I mentioned above: _fat_ is _slow_. I did not investigate which apis could be dropped from _fat_ while keeping hivemind working, so I left that default configuration unchanged. I also took the list of apis on _accounthistory_ from @gtg without questioning it. There is unnecessary redundancy for sure, and these instructions may change in the future to improve on this. Your perfect setup may differ completely, depending on which apis you need to serve (most).

## Finishing words

That's it. After everything is synced, you should have a working public node ready to serve requests! If this guide has helped you and/or you want to support my work on Hive infrastructure, education, onboarding and retention, please [help me secure my witness spot](https://peakd.com/me/witnesses)!

## Stats

Current output of ```df -h``` on the three servers (April 14):

_hivemind_
```
/dev/md2        407G  272G  115G  71% /
```

_fat_
```
/dev/md2        815G  369G  406G  48% /
```

_accounthistory_
```
tmpfs           120G   58G   63G  48% /run
/dev/md2        874G  531G  299G  65% /
```
---------

# Comments
@ackza ·
I rEALLy want a dogecoin BTC eth tip bot and @discordtip COULD add Hive posts, it DOES have hive in discord, but imagine COMMENT bot that could TIP u, posts and comments, and it SHOWS ON THE POST HOW MUCH u have MADE in TIPS which formatted can just be like a special consensus  memo... we all pick a special memo format for sending BTCP to someone on hive-engine and we can chill
@deathwing ·
Hey @pharesim, thank you for setting up a node for HIVE. I do have a few questions on the hardware side though.

1 - Why did you go with RAID0 when you have NVMe? It shouldn't really give a *groundbreaking* boost to the RW.
2 - Why 3 different servers/nodes rather than going with just one big server/node? 
@pharesim ·
1) For storage space ;)
2) How you set that up in detail really depends on what you want. Postgres and rocksdb are heavy on disk i/o, the accounthistory node will soon use the disk for swapping. Other setups work too, in fact this guide by @privex with a completely different one was one of my resources for setting it up: https://hackmd.io/@KoCktFVzTnePd9BdXfC7og/HJxKBAhyv8
@ackza · (edited)
I love hearing about old school steemians now on hive! so refreshing! I remember pharesim from steeminvite which was just a great time during steem's glory days. we can recapture them again with enough youtube videos @jerrybanfield style focused on hive and bee memes!

WOOO I read that SO fast! I resteemed and voted because I saw PHARESIM! Anything PHARESIM posst I will Support Blindly untill someone stops me! WOOOOOo I love having a vague idea of whose valuable based on a superficial view of a blockchain reddit a few years ago WOOOOO 

Im so EXCITED for the good old days!!!



![HiveFrogPharsim1.png](https://images.hive.blog/DQmPfY5LzqEhJkgGejLhFHB3JWwonjGnGpW1hhiDPA1DYQA/HiveFrogPharsim1.png)
@donchate ·
Great effort. Thanks for documenting your process.

Minor addendum:
STEEMD_URL syntax has changed, is now:
```
export STEEMD_URL='{"default": "http://api.example.com:8091"}'
```
@pharesim ·
Thanks for the notice, updated
@flugschwein ·
Downvote solely for disagreement on rewards, not because I dislike your post.
I actually really appreciate that you are running a full node, and that you are documenting how you did it etc., I just don't believe that any Hive post in the current reward "environment" deserves >80$ Payout.
@fulltimegeek ·
I'm leaving this comment as a bookmark for myself.

God bless all the nerds that are running API NODES!!! The amount of RAM that it requires is astronomical. For me, the only difficult part is acquiring the hardware+bandwidth, setting it up is the easy part ...
@hivedelegation ·
Again, thanks for your service on all this!
@idiosyncratic1 ·
Thank you for all your efforts 😌 All the things you're doing for HIVE are sincerely appreciated by the community members.
@pharesim ·
This comment is for issues, changes and such
@pharesim ·
condenser_api.get_account_history needs a higher timeout in jussi, added 20s
@pharesim · (edited)
- added get_reblogged_by to jussi, is served by hivemind
- second nginx config file on _hivemind_ (api+hivemind)
- completed setup of hivemind
- updated stats
@pharesim · (edited)
- condenser_api.get_transaction routed to acchist
- raised timeout for account_history_api.get_account_history 
@pharesim ·
Updated STEEMD_URL to new syntax
@pharesim ·
Added max log size to jussi's docker run command to stop the disk from filling up
@valued-customer ·
I probably cannot adequately express my appreciation for folks undertaking to improve the decentralization of Hive, as you are.

I am not a dev, but have some questions regarding how your build affects censorship and resistance to it on Hive.

>"bloom_filter_policy"

Is this a reference to @bloom?

I note you are incorporating @themarkymark's blacklist.  Does this include #irredeemables github list?  I did not see a specific reference to that list, so inquire.  I have strongly advocated for far greater public awareness and involvement in that absolute censorship of affected accounts, and it is my hope that full nodes that do not simply mirror that API censorship mechanism arise.

I would appreciate any comment you might feel appropriate regarding that apparently covertly exercised censorship mechanism, and specifically regarding its present application to @joe.public, who seems to have been placed on that list and completely censored on all front ends on Hive using extant full nodes, for no other reason than annoying Bernie and Marty.

I do not advocate being a troll.  However, I am confident that the definition of a troll is highly subjective, and allowing such total censorship to be applied based on personal opinion is a grave threat to Hive's ability to secure free speech.  I would rather deal with trolls and flags than potential censorship because powerful accounts don't like them, or maybe any of us.

Trolls are annoying.  Censorship is existential.  Steem today reveals what that slippery slope ensures Hive will become if public information and involvement in API mediated censorship is not established presently.

Thanks very much!
@gtg ·
> I would appreciate any comment you might feel appropriate regarding that apparently covertly exercised censorship mechanism, and specifically regarding its present application to @joe.public, who seems to have been placed on that list and completely censored on all front ends on Hive using extant full nodes

Not true. You must be mixing Hive's blacklists and Steemit's censorship.
To see the difference, just compare:
https://hive.blog/@joe.public
https://steemit.com/@gtg
@joe.public ·
You seem to be  completely missing the point he is making.
No surprises there I guess
@valued-customer · (edited)
While Stinc under Yuchen seems to have gone after back catalogs, I received a comment from @joe.public today, which is now invisible, and my reply is also invisible.  This is censorship at the API level, exactly as it is being undertaken by Stinc and the CCP on Steem, just on Hive it is only applied to active communications, and not catalogs.

>"The users in the irredeemables list will have their comments and posts filtered out and their flags will not be considered in the logic that determines if comments or posts are hidden in most front end applications including Condenser (the software that powers steemit.com)."

The latest update to the #irredeemables list was <a href="https://github.com/steemit/irredeemables">last month</a>, just before Hive forked off Steem.  I have received no indication Hive treated this list any differently than Steem did, and absent intentional changes to how Hive works, this exact list should have the exact same effect on Hive it does on Steem.

<a href="https://github.com/steemit/irredeemables/blob/master/full.txt">here</a> you can see the full list as of that time, administered by @themarkymark and friends, and @joe.public is listed as #622.

No reason given for his being on that list is compelling enough to mandate his total censorship on Hive.  He annoyed @themarkymark and Bernie.  

That's all it takes to generate a complete censorship of Hive users.  That's damn little difference from what Sun Yuchen is doing to Steem.

Hive needs censorship resistance, not a band of merry censors keeping the narrative safe for consensus witnesses and ninjamine whales.

I do appreciate your comment.  I am aware of the feathers I have ruffled, and expect your adamance regarding independent thought is particularly revealed by your comment here.  I note you're an expert coder, and that nothing I have said in this comment is new information to you.  

So, why do you note the minor difference in target of the same exact mechanism that is actively censoring folks on both Steem, wielded by Sun Yuchen today, and @themarkymark and his band of merry silencers of dissent on Hive today?  Seems a bit misleading.

Edit: I should say 'the exact same method' rather than 'mechanism', as you are not on the #irredeemables list, which shows that Yuchen is using a different list via the API method to censor folks.
@pharesim ·
> Is this a reference to @bloom?

No

> I note you are incorporating @themarkymark's blacklist

The irredeemables were taken from steemit without changes for now. As it was managed by them (with the help of marky), that wasn't something I could influence so I never really cared.
I do now! This list definitely has to be discussed and a process established to not let a single person just add someone who annoys them. Thanks for pointing it out, I will check more opinions.
@valued-customer · (edited)
Please keep me in the loop as you manage the censorship issue.

I have advocated for requiring a vote on HPS proposals to include an account on #irredeemables, due to its severe and total censorship of affected accounts.  I feel strongly that the same mechanism considered sufficient to elect witnesses should be the mechanism to essentially forever silence people.

Advocates of censoring the account can present evidence in support of the inclusion, the accused can present their own, and voters decide.  I also recommend periodic appeals be allowed of the censored account via the same HPS method.  If they receive enough votes, they should be allowed to get off the list.

We need a robust mechanism to undertake such a terminal threat to users of Hive as API total and complete censorship of all ability to communicate with other folks here, and I can't think of one better atm.

Thanks!