***This can be useful for ZEN SecNodeTracker and other server applications.***
In GNU Bash and most other shells on GNU/Linux, UNIX, and similar systems, you can fork any process into the background so that it runs like a daemon: it continues to run as usual, just in the background.
This is useful when you want a process to keep running after you exit an SSH session, for example, with none of the overhead or dependencies of `screen` or other process managers/monitors.
# Forking Any Process to the Background Like a Daemon #
**For this tutorial we will use `node app.js` as the main process.**
To fork a process to the background simply add the control operator `&`:
* `node app.js &`
This will output something similar to:
* `[1] 1234`
### Further information ###
The first number `[1]` is the job number. The second number `1234` is the Process ID (PID).
The process is now running as it usually runs, except it is running in the background like a daemon.
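As a quick sketch of how this can be scripted (using `sleep` as a stand-in for `node app.js`), the shell also records the PID of the most recent background job in the special parameter `$!`:

```shell
# Fork a stand-in process into the background.
sleep 30 &

# $! holds the PID of the most recently backgrounded job.
bgpid=$!
echo "background PID: $bgpid"

# Signal 0 sends nothing but checks that the process exists.
kill -0 "$bgpid" && echo "still running"

# Clean up the stand-in process.
kill "$bgpid"
```

Capturing `$!` right after the `&` lets a script manage the job later without having to parse `jobs` output.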
* You can list the currently running jobs and their PIDs by invoking:
`jobs -l`
* If you have started a new session and the jobs are no longer listed, you can use `ps` to find the PIDs:
`ps -e | grep app`
* You can terminate a process cleanly (SIGTERM) by invoking:
`kill 1234`
* You can force-kill a process (SIGKILL) by invoking:
`kill -9 1234`
Alternatively, you can invoke `htop` for interactive process management, then press the `F9` function key to send SIGTERM or SIGKILL from inside `htop`.
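The commands above can be combined into a small script that starts a job, records its PID, and terminates it cleanly; `sleep` stands in for the real server process here:

```shell
# Start a long-running stand-in process in the background.
sleep 300 &
pid=$!

# List the current jobs with their PIDs.
jobs -l

# Ask the process to terminate cleanly (kill sends SIGTERM by default).
kill "$pid"

# Reap the job and report how it ended (143 = 128 + SIGTERM).
wait "$pid"
echo "exit status: $?"
```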
# Directing the Output and Error Messages to a Log File #
Now that you know how to fork a process to the background, you may also want to log its standard output and error messages to a file for later inspection and archiving.
To achieve this you can invoke the following:
* `node app.js >> logfile 2>&1 &`
### Further information ###
The first operator `>>` tells the shell to *append* the output to a file called `logfile`: if the file already exists, new lines are added at the end; if it does not exist, it is created.
The second part `2>&1` redirects the error output into the same file: `2` is the file descriptor for `stderr` (the standard error output), and `>&1` redirects it to file descriptor `1`, which is `stdout` (the standard output). *The extra `&` may be confusing: it tells the shell that `1` is a file descriptor and not a file name (`>1` would redirect to a file called `1`).*
The final `&` is the control operator explained earlier, telling the shell to fork the job into the background like a daemon.
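A minimal way to see the redirection at work is a compound command that writes to both streams, standing in for a real `node` app:

```shell
# Stand-in for `node app.js`: writes one line to stdout and one to stderr.
{ echo "normal output"; echo "error output" >&2; } >> logfile 2>&1 &

# Wait for the background job to finish, then inspect the log:
# both lines end up in the same file.
wait
cat logfile
```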
# Working with and Maintaining the Log File #
To check the entire logfile page by page, invoke:
* `cat logfile | more`
To check just the end of the logfile:
* `tail -n X logfile`
*where X is the number of tail lines to show*
To check just the beginning of the logfile:
* `head -n X logfile`
*where X is the number of head lines to show*
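A quick sketch with a generated sample file (100 numbered lines, in place of a real logfile) shows what `head` and `tail` return:

```shell
# Build a sample logfile of 100 numbered lines.
seq 1 100 > logfile

# Show the last 5 lines (96 through 100).
tail -n 5 logfile

# Show the first 5 lines (1 through 5).
head -n 5 logfile
```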
To search for lines containing the word "Exception" and show 8 lines of context before and after every match:
* `cat logfile | grep -A 8 -B 8 Exception`
If there is more than one screen of output, you can pipe (`|`) the output into `more` as before to browse it in sections:
* `cat logfile | grep -A 8 -B 8 Exception | more`
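For example, with a small sample logfile and 1 line of context on each side:

```shell
# Build a three-line sample logfile with one matching line.
printf 'line before\nException: something failed\nline after\n' > logfile

# -B/-A control how many lines of context appear before/after each match;
# this prints all three lines.
grep -B 1 -A 1 Exception logfile
```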
To create a backup of the logfile for archiving and start a new file with the current date and time at the top:
* `cp logfile logfile.bak`
* `date > logfile`
### Further information ###
The first command will copy the existing logfile to a new file named logfile.bak.
The second command redirects the output of the `date` command into the logfile, overwriting its previous contents with a single date line. If the earlier job was invoked with `>>` to append to the logfile, it will continue appending after that first date line.
*It is ***essential*** that `date` is redirected to the same `logfile` name that the original process was told to append to. `date >` truncates the whole logfile to one line containing the output of `date`, and `node app.js >>` continues appending from there.*
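The rotation can be sketched end to end with a dummy logfile standing in for a live one:

```shell
# Simulate an existing logfile with old content.
echo "old log line" > logfile

# Archive the current log, then truncate the live log to a single date line.
cp logfile logfile.bak
date > logfile

# logfile.bak keeps the history; logfile now starts fresh with the date.
cat logfile.bak
cat logfile
```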
# The Benefits of Using Standard Shell and System Commands #
**Because these are all standard shell and system commands, they can be easily scripted and automated, with no extra dependencies or overhead from other packages or processes.**
**This is best for security and performance: every extra process or package consumes additional resources and introduces potential new weaknesses into the system. As a timeless rule of thumb: the fewer processes running on a system, the easier it is to maintain and secure.**
**If this guide has helped you, please consider an upvote or donation!**
**ZEN:** `znSTMxvU3AizLV9cAm4iNPT5uLoJ2wbfHy9`
**Private ZEN:** `zcK5A39UwgaufiyUVtVqTXMFQXxxCUCvicvuMxcCE9QrgBMAGW5yCQW9a5zRqwZbYBTCMhTZgyhKH3TMMHq4xwLADQvqrM3`
**BTC:** `1EwGXrmGdiD6Xd8uPnmRufyoWowJ7qpkJ1`