Help us fight COVID-19 in Piedmont

For more than twelve years I have been writing on this blog about IT. I’ve provided this service for free, in English even if Italian is my mother tongue, to reach the widest audience. Now this little service is at risk, because my family, my friends and I are.

I live in Italy, more precisely in Piedmont. After Lombardy, it was one of the first regions to close schools and cancel public events, before the national lockdown. We cannot leave our homes without a valid reason.

Italy has a public healthcare system, but it is managed region by region. At the time I’m writing, Lombardy is about to fill all its ICUs, and there are conflicts between the central government in Rome and the regions.

Regions are not receiving enough materials and support from Rome, or they arrive too slowly, so regions have to handle the emergency despite slow responses from the central government. The centrally decided lockdown is useful, but there is not enough personal protective equipment for healthcare personnel, so nurses and doctors get sick in areas where the contagion has already spread. Here’s the real-time status.

I don’t know if Piedmont will be the next to face the same fate, so the region needs resources to avoid the collapse of its healthcare system.

Please donate if you can; all instructions are at this institutional, verified link:

What we do here can be useful to understand how to fight COVID-19 in other countries.

Some practices we are experimenting with are described below.

Update 2020-03-30:

Update 2020-03-20:

  • 4 rapid test devices by DiaSorin are available in two Turin hospitals, according to regional authorities. These new devices will process 16 samples every 90 minutes.
  • 7 Piedmont ERs were shut down to reassign medical personnel to the COVID-19 pandemic
  • 4 Arpa (local environmental protection agency) laboratories produced 568 liters of hand sanitizer gel to deliver to police and medical personnel
  • Rural tourism operators can now run home delivery services, both to address the demand for meals and to try to save these businesses after the crash of tourism demand
  • People under quarantine must not sort their waste. All waste should be disposed of in two plastic bags, one inside the other, following strict rules. To avoid contamination, waste sorting facilities (isole ecologiche) are closed.

Update 2020-03-17:

  • A new regional hospital for Covid-19 patients will open on 2020-03-22; setup is underway
  • Regional production of personal protection devices is underway, converting plants like Miroglio
  • Regionally headquartered DiaSorin received US federal funds to develop a rapid Covid-19 test to be submitted to the FDA by the end of March 2020


Give flatpak a chance

Talking about package managers on Linux, flatpak has gained attention recently. Installation is easy; this post is about installing it on Ubuntu.
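As a sketch, on a recent Ubuntu the basic setup boils down to installing the flatpak package and adding the Flathub remote (the main community repository):

```shell
# Install flatpak itself from the Ubuntu repositories
sudo apt install flatpak
# Add the Flathub remote, where most applications are published
flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
```

After a logout or reboot (so the new paths are picked up), applications can be searched and installed with the commands below.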

If you have trouble installing an application using the package manager shipped with your distro, you can give it a try, since it’s available on 22 distros by now.

In the case above, the torbrowser package was broken on Ubuntu 18.04, as many users pointed out on the package page. Installing it via flatpak, everything ran smoothly in minutes.


flatpak search PACKAGENAME


flatpak install PACKAGENAME

Command line output is quite complete, but if you prefer a GUI, on Ubuntu gnome-software (Ubuntu Software) lists flatpak packages too: just search packages as usual and read the package info at the bottom of the description page.

Full backup of PostgreSQL and restore on docker

In this howto I will show how to back up all PostgreSQL databases, roles and permissions on a server and restore them on another server inside a Docker container.

1. Backup databases on origin server

To back up all PostgreSQL databases automatically, preserving roles and permissions, you can create this cron.d entry (this example is from CentOS):

# Dump all postgresql databases automatically
40 22 * * 0 postgres /bin/bash -c 'pg_dumpall -Upostgres | gzip > "/var/backup/dumpall.sql.gz"'

In order:

  1. Run at 22:40 every Sunday
  2. Run as the postgres user (cron is executed by root, and root can impersonate it without a password)
  3. Use the bash interpreter (not sh)
  4. Run pg_dumpall to dump all databases
  5. Pipe the output to gzip, compressing the SQL into a /var/backup/dumpall.sql.gz file
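Before trusting the cron job, you can run the same pipeline once by hand and verify the archive (the file name here matches the one used by the transfer step later in this howto):

```shell
# run the dump once as the postgres user
sudo -u postgres /bin/bash -c 'pg_dumpall -Upostgres | gzip > "/var/backup/dumpall.sql.gz"'
# verify gzip integrity and peek at the dump header
gunzip -t /var/backup/dumpall.sql.gz
gunzip -c /var/backup/dumpall.sql.gz | head -n 3
```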

2. Transfer to another server

Transfer the dump over ssh using something like this:

rsync -rltvz --no-o --no-g myuser@originserver:/var/backup/dumpall.sql.gz /opt/backup/myproj/data/

Use .ssh/config to store connection info.
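For example, a minimal ~/.ssh/config entry for the origin server could look like this sketch (hostname, user and key path are placeholders to adapt):

```
Host originserver
    Hostname origin.example.com
    User myuser
    IdentityFile ~/.ssh/id_rsa_origin_server
```

With this entry in place, rsync can reach the server through the originserver alias without repeating the connection details on every call.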

3. Restore on a clean docker container

The following bash script will create a docker container and populate it with the dump:

set -e
echo "Remove identical container, keep data on directory"
NOW=`date +%Y-%m-%d`
# create an id based on timestamp plus MAC address
UNIQUID=`uuidgen -t`
# container name: postgresql-myproj-YYYY-MM-DD-random-id
NAME="postgresql-myproj-$NOW-$UNIQUID"
echo "Get postgres docker image"
docker pull postgres:9.4-alpine
echo "Generate 1 password with 48 char length"
PPASS=`pwgen 48 1`
mkdir -p /opt/docker_data/$NAME
echo "Save psql password on /opt/docker_data/$NAME/psql.pwd"
echo $PPASS > "/opt/docker_data/$NAME/psql.pwd"
echo "extract database"
mkdir -p /opt/docker_data/$NAME/psqldata
mkdir -p /opt/docker_data/$NAME/share
gunzip -c /opt/backup/myproj/data/dumpall.sql.gz > /opt/docker_data/$NAME/share/restore.out
echo "Run a clean docker"
docker run --name $NAME -e POSTGRES_PASSWORD=$PPASS -d -p 44432:5432 -v /opt/docker_data/$NAME/psqldata:/var/lib/postgresql/data -v /opt/docker_data/$NAME/share:/extshare --restart always postgres:9.4-alpine
sleep 10
echo "Restore from /extshare/restore.out using user postgres (-upostgres) the database postgres (all dbs)"
docker exec -upostgres $NAME psql -f /extshare/restore.out postgres
echo "Clear the restore.out file"
rm /opt/docker_data/$NAME/share/restore.out

In order, this script:

  1. Download a postgres:9.4-alpine image (choose your version)
  2. Generate a random container name postgresql-myproj-YYYY-MM-DD-random-id based on MAC address and timestamp
  3. Generate a random password and save on a file
  4. Generate a directory structure on host system to keep postgres file and dump outside docker
  5. Create a new postgres container exposed to host on port 44432
  6. Save postgres files on /opt/docker_data/$NAME/psqldata on host
  7. Expose the dump file directory on /extshare on guest
  8. Restore the dump using role postgres
  9. Delete the dump

Resulting directory structure on host will be:

└── postgresql-myproj-2019-28-12-*******
    β”œβ”€β”€ psqldata [error opening dir]
    β”œβ”€β”€ psql.pwd
    └── share
        └── restore.out

Then, to connect, just use the same user and password as the database on the origin server.


Usually, in /opt/docker_data/$NAME/psqldata/pg_hba.conf, you have to add a line like this (columns: type, database, user, address, method):

host all all <docker-host-address> md5

giving the host (reachable from inside docker) full access to the database. But the default image ships a handy, permissive entry:

host all all all md5

So you can connect to the database without any further step.
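Assuming the defaults used above, a quick connection test from the docker host could look like this (the password is the one saved in /opt/docker_data/$NAME/psql.pwd):

```shell
# Connect to the restored cluster through the port published by docker
# and list the databases to verify the dump was imported
psql -h localhost -p 44432 -U postgres -l
```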


If you connect to the destination server over ssh and cannot reach the port directly, remember to forward the PostgreSQL port in .ssh/config, like:

Host destinationsrv
    Hostname destinationaddr
    User mydestservuser
    IdentityFile /home/myuser/.ssh/id_rsa_destination_server
    LocalForward 44432 localhost:44432

Remember that if you haven’t a firewall, the docker container will be published with a permissive rule like 0.0.0.0:44432->5432/tcp, so it will be exposed and reachable by anyone with the password.

Disable search and autocomplete on url bar of Firefox

The default configuration of Firefox can cause a search to be performed when a URL typed in the main bar is wrong, or auto-suggestions to be fetched silently. This behaviour may disclose mistyped local paths to third parties.

At the end of this howto, Firefox will be back to a state where the user has more control over searches, reducing the data disclosed to external services. You can also disable suggestions while typing.

How to disable automatic search

To disable search on the url bar of Firefox, first re-enable the separate search bar:

  1. Go to Settings
  2. Go to Search settings
  3. On Search Bar, select the 2 bar option

Now disable the search on the url bar:

  1. Visit about:config in the main bar
  2. Accept to continue
  3. Search for keyword.enabled
  4. Change its value from true to false (double click on it)

Now if you try to type a keyword, you will be redirected to the website, e.g. boh will redirect to

Data are no longer sent to search engines, except for autocomplete suggestions, which you can disable following the next steps.

Disable search autocomplete on urlbar

  1. Go to about:config
  2. Search browser.urlbar.suggest.searches
  3. Set value from true to false (double click on it)

Now if you type a search in the url bar, it will be performed on your history alone, but if you use the search bar, autocomplete will work normally.

Disable search autocomplete on search bar

To completely disallow search suggestions on the new right search bar:

  1. Look for browser.search.suggest.enabled on about:config
  2. Double click on it to set it to false

Disable auto .com

Note: if you mistype a search in the url bar, it can be autocompleted as (YOUR SEARCH + “.com”).

To disable the “.com” suffix and the “www” prefix:

  1. Look for browser.fixup.alternate.enabled on about:config
  2. Double click on it to set false

Final behaviour

With all these changes applied:

  1. The url bar will look up a URL exactly as you’ve typed it (except for the https / http prefix, which is autocompleted)
  2. The search bar will perform the search on the search engine you’ve chosen
  3. No silent request will be sent to search engines from the url bar or the search bar (as far as I know)

Sort of CDN to serve client-side libraries via an auto-pull git repo on tmpfs

This configuration sets up, on a Debian-based system, a fast server for client-side libraries. Key technologies used are:

  • tmpfs to serve files from volatile memory
  • git / mercurial from github / bitbucket to get files from a public or private repository
  • systemd units to mount tmpfs and sync
  • nginx to serve files to user

In this first step you’ll create a service that reserves some RAM for static files, pulling them from a private or public repo.

Mount tmpfs with systemd

To serve files directly from RAM, you have to mount a tmpfs directory. You can do it in fstab:


tmpfs /mnt/cdn tmpfs rw,nodev,nosuid,size=300M 0 0

Or with a systemd unit:


Description=Mount empty CDN directory on volatile memory


  • noatime will disable last access on contained files, reducing write on disk
  • size will reserve 300MB for /mnt/cdn partition on RAM (increase as needed)
  • mount the partition on runlevel 3 (multi-user mode with networking)
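Putting the options above together, the mount unit could look like this sketch (systemd requires the unit file name to match the mount point, so it must be called mnt-cdn.mount):

```
[Unit]
Description=Mount empty CDN directory on volatile memory

[Mount]
What=tmpfs
Where=/mnt/cdn
Type=tmpfs
Options=rw,nodev,nosuid,noatime,size=300M

[Install]
WantedBy=multi-user.target
```

WantedBy=multi-user.target corresponds to the old runlevel 3 mentioned above.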

Create the two units in a local path like /usr/local/share/systemd and symlink them from /etc/systemd/system, or create them directly in /etc/systemd/system.

Create the pull service

When /mnt/cdn is successfully mounted, pull the static files from your repository.


Description=Pull on CDN directory.


  • Clone the git repository on the system with a user, using a key with an alias
  • Change youruserhere to the user who cloned the repository
  • Add the host alias to /root/.ssh/config and the private key used for the pull to /root/.ssh/my_private_key


  • WantedBy=mnt-cdn.mount copies the files to RAM only after /mnt/cdn is mounted
  • After=network-online.target pulls the repository only when the network is ready
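A sketch of the corresponding cdn-pull.service, assuming the pull script is saved as /usr/local/bin/cdn-pull.sh (path and name are placeholders to adapt):

```
[Unit]
Description=Pull on CDN directory.
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/cdn-pull.sh

[Install]
WantedBy=mnt-cdn.mount
```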

On pull, all files will be written by root, owned as youruserhere:youruserhere.

To reduce RAM occupation, the script doesn’t copy the .git directory into RAM: the repository stays on disk, and rsync syncs the files while excluding it:


#!/bin/bash
# stop on first error
set -e
cd /srv/cdn-all
git pull
exec rsync -a --exclude=.git --exclude=.gitignore /srv/cdn-all/* /mnt/cdn/

Get systemd to know about the mount and service

To reload systemd units:

systemctl daemon-reload

Then do the mount via the systemd unit:

systemctl start mnt-cdn.mount

Enable on boot

Since cdn-pull.service is tied to mnt-cdn.mount, both have to be enabled:

systemctl enable mnt-cdn.mount
systemctl enable cdn-pull.service
On boot:

  1. When the system is ready, the tmpfs is created on /mnt/cdn/
  2. After the tmpfs is successfully mounted, files are automatically synced through cdn-pull.service.

Mount will auto-start sync

Start only the mnt-cdn.mount:

systemctl start mnt-cdn.mount

And then ask for info about both services:

systemctl status mnt-cdn.mount
systemctl status cdn-pull.service
  • mnt-cdn.mount has to be active (mounted)
  • cdn-pull.service should be active (script is running) or inactive (sync completed); both cases are fine.

With this set-up, files are automatically pulled and synced to RAM when the system starts and whenever you start or restart mnt-cdn.mount.

Next you can serve these files on nginx and the final step could be to auto-detect push to update files automagically.
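As a preview, a minimal nginx server block to serve the synced files could look like this sketch (server name and cache lifetime are placeholders to adapt):

```
server {
    listen 80;
    server_name cdn.example.com;
    root /mnt/cdn;

    location / {
        expires 30d;
        add_header Cache-Control "public";
    }
}
```

Since everything under /mnt/cdn lives in RAM, nginx serves these static files without touching the disk.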


Autorestart nodejs app with supervisor on error and changes

This howto shows how to automatically restart a node.js app on crashes or on file changes, using supervisor and nodemon.

Autorestart on changes

To install nodemon, which restarts the app automatically when you change its files:

npm install -g nodemon

To test that it’s working, use nodemon like node, passing all the parameters you would pass to node:

nodemon app.js --myoptionalparameter MYVALUE;

Autorestart on errors

To install supervisor, which restarts the app on crashes, on a Debian-based system:

sudo apt-get install supervisor

Then create a wrapper script in a custom location to monitor:

#!/bin/bash
cd /path/to/my/app
exec node app.js --myoptionalparameter MYVALUE
  • exec is very important since it gives supervisor control of the process
  • if you specify nodemon instead of node, the app will restart only on changes, not on crashes. In production only node should be used, while in development nodemon is fine to track errors.

Now set up the config file for supervisor, creating a new file /etc/supervisor/conf.d/myapp.conf with:

[program:myapp]
command=bash /path/to/my/
priority=10                 ; the relative start priority (default 999)
autostart=true              ; start at supervisord start (default: true)
autorestart=true            ; restart at unexpected quit (default: true)
; startsecs=-1              ; number of secs prog must stay running (def. 10)
; startretries=3            ; max # of serial start failures (default 3)
exitcodes=0,2               ; 'expected' exit codes for process (default 0,2)
stopsignal=QUIT             ; signal used to kill process (default TERM)
; stopwaitsecs=10           ; max num secs to wait before SIGKILL (default 10)
user=USER_TO_RUN_APP_HERE   ; setuid to this UNIX account to run the program
log_stdout=true             ; if true, log program stdout (default true)
log_stderr=true             ; if true, log program stderr (def false)
logfile_maxbytes=10MB       ; max # logfile bytes b4 rotation (default 50MB)
logfile_backups=10          ; # of logfile backups (default 10)
  • change USER_TO_RUN_APP_HERE to a system user who can access the app files and directory

Now make supervisor reread the configuration, applying changes without restarting the other services:

sudo supervisorctl reread

In case of errors you’ll get something like:

ERROR: CANT_REREAD: Invalid user name fakeuserhere in section ‘program:myapp’ (file: ‘/etc/supervisor/conf.d/myapp.conf’)

On success:

myapp: available

If you’ve changed the app configuration, apply it with:

sudo supervisorctl update

Then restart the specific app:

sudo supervisorctl restart myapp

Keep an eye on supervisor processes with:

sudo supervisorctl status


myapp                            RUNNING   pid ****, uptime 0:00:59
anotherapp                       RUNNING   pid ****, uptime 0:29:33

Control the processes

Since exec was specified in the wrapper script before, supervisor can stop the node app on demand with:

sudo supervisorctl stop myapp

Then supervisorctl status will display something like this:

myapp                             STOPPED   Apr 27 22:53 AM
anotherapp                        RUNNING   pid ****, uptime 0:28:16

To run again:

sudo supervisorctl start myapp
  • When you restart the service with systemctl restart supervisor, all /etc/supervisor/conf.d/ files are read again, so apps set to autostart will start even if you had stopped them.
  • If your node.js app has more than one file to run (e.g. a couple of servers), you can copy the [program:myapp] block in the same file, renaming the second block to something like [program:myapptwo] and pointing it at a new wrapper script.

Using multiple deploy keys on github with .ssh/config

You can use multiple deploy keys for GitHub, created with ssh-keygen, by following these steps.

Add to your ~/.ssh/config:

Host github_deploy_key_1
    User git
    IdentityFile ~/.ssh/github_deploy_key_1_rsa

Host github_deploy_key_2
    User git
    IdentityFile ~/.ssh/github_deploy_key_2_rsa

If you haven’t set your github identity in git:

git config --global user.name "yourgithubname"
git config --global user.email "yourgithubemail"

Then clone your repository specifying your custom host, adapting what github suggests on the repo page:

git clone git@github_deploy_key_1:yourgithubname/your-repo.git

If the key has push permissions enabled, you can also use it to update the repository.

In this way you can keep a server clean of your all-access GitHub key and add only the deploy keys it needs.