Upgrade from Windows 7 OEM to Windows 10

Is it still feasible to upgrade an existing Windows 7 32-bit installation in 2020? I tried, and yes, it is.

This is basically a digest of some of the best articles on the topic: this ZDNet howto by Ed Bott (archive) and this How-To Geek howto by Chris Hoffman (archive) on converting Windows 10 32-bit to 64-bit.

Tested upgrade path

I’ve walked a long installation path, starting from an 11-year-old DVD. This is not the ideal scenario, but I got a very clean state to start from.

  1. Install Windows 7 Professional OEM from DVD or restore from system partition to get a clean state
  2. Upgrade Windows 7 Professional using Windows Update
  3. Upgrade to Windows 10 Pro 32bit, preserving the reserved partitions using the Upgrade this PC now path
  4. From Windows 10 32 bit, create an installation USB with Windows 10 Media creation tool choosing to create a media for an Other PC
  5. Install the new system over the old one, preserving the reserved partitions


The howtos cited explain the above steps, but here are some supplemental tips for different scenarios, plus some things that may not be clear at the start. I’ve tested all of these:

  • You need an installed Windows 7 version to do the update following the howtos above.
  • Do not mess with the reserved partitions: I don’t know whether altering them breaks the upgrade path, but I left them alone and it worked.
  • You can use Clonezilla to make copies of your OS any time. Personally I do:
    • Before the Windows 7 to Windows 10 update (between 2 and 3)
    • After the 32bit installation (before 4)
    • After the 64bit installation (after 5)
  • Using Clonezilla, you can easily swap the hard drive while keeping the previous OS state intact and ready to be restored if something goes wrong.
  • If you have a Windows 7 Professional OEM, you will get a Windows 10 Pro with auto-activated digital license.
  • If you have Windows 7 on an unreadable DVD, try changing the optical drive before trying to clean the disc surface.
  • If you have a DVD, make a copy with something like k3b and flash it to a USB drive with WoeUSB or similar to speed up the installation, as in the example below.
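For example, with WoeUSB the flashing boils down to something like this (the ISO name and target device are placeholders; the target device will be overwritten):

woeusb --device windows7.iso /dev/sdX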


Migrate mercurial code hosting from Bitbucket to your server in 9 steps using docker

Atlassian is dropping Mercurial support from the popular Bitbucket service. Here is a proof of concept that uses a Docker container as a separate environment where you can self-host your code, using basic Mercurial features without bells and whistles.

To do so, a docker container based on the popular and lightweight jdeathe/centos-ssh image will be used. This example assumes a remote server with the docker service up and running.

1. Generate a public/private key pair

Create a new key to authenticate to the new container. Protect it with a passphrase so it can be deployed on external servers safely. In this example, an Ed25519 key is used.

ssh-keygen -t ed25519 -C "Key for xxx at xxx on xxx"

2. Choose keys and passwords

Choose a name for your new container:

export SSHCONTAINER=mycodehosting.example.org

Create a new file named .env with the following content, customizing:

  • the content of the generated .pub file in the SSH_AUTHORIZED_KEYS row
  • a strong password to switch from hg to root via sudo su - root
  • your timezone
SSH_AUTHORIZED_KEYS=*******PASTE PUB KEY here ***********
SSH_USER_PASSWORD=*******STRONG PASSWORD HERE (without ")***********
SYSTEM_TIMEZONE=********YOUR TIMEZONE HERE e.g. Europe/Rome***********

This configuration:

  • Allows connections using the private key generated before
  • Disables password authentication
  • Sets the default user name to hg
  • Allows all users to use sudo (there will be only the hg user)
  • Sets the server to your preferred timezone

3. Create the centos-ssh container

From the same directory where the .env file resides, create a new container:

docker run -d \
  --name $SSHCONTAINER \
  -p 12120:22 \
  --env-file .env \
  -v /opt/path/to/some/host/dir:/home \
  jdeathe/centos-ssh

This will:

  • create a detached container named $SSHCONTAINER
  • expose the container SSH on host port 12120. If you want to limit it to localhost only, use -p 127.0.0.1:12120:22: Docker manages its own iptables rules and will otherwise expose the port on all interfaces. You can also disable iptables management in Docker.
  • map the whole container /home directory to a new directory created by root on the host, /opt/path/to/some/host/dir

Note: do not use ACLs (e.g. setfacl) on /opt/path/to/some/host/dir or the .ssh directory will break (e.g. Bad owner or permissions)

4. Install mercurial on container

Now install mercurial and its dependencies in the container. You can log in as root using docker:

docker exec -it $SSHCONTAINER bash

or save this script, chmod a+x it, and launch it:

#!/bin/bash
set -e
docker exec -it -u root $SSHCONTAINER yum install -y python36-devel python36-setuptools gcc
docker exec -it -u root $SSHCONTAINER /usr/bin/pip3 install mercurial

Restart the container:

docker container restart $SSHCONTAINER

Then check that mercurial works for user hg:

docker exec -it -u hg $SSHCONTAINER hg --version

If the container is running smoothly, you can update it to always restart on reboot or on docker service restart:

docker container update $SSHCONTAINER --restart always

then check if it’s applied:

docker container inspect $SSHCONTAINER | grep -B0 -A3 RestartPolicy

5. Login to container directly

Now from your local machine you can connect directly to the container using SSH, without caring about the host.

By default, docker creates an iptables rule to allow connections from outside. You still have to specify the port and the user name in .ssh/config, like this:

Host mycodehosting.example.org
    Hostname mycodehosting.example.org
    User hg
    Port 12120
    PreferredAuthentications publickey
    IdentityFile /home/chirale/.ssh/id_ed25519_mycodehosting_example_org

This configuration is useful when you create a subdomain exclusively to host code: you associate it with a port and a username to obtain a Mercurial URL like this:

ssh://mycodehosting.example.org/dir/subdir/project

where dir and subdir are directories directly in the container’s /home/hg directory (on the host, /opt/path/to/some/host/dir/hg/dir/subdir/project). Unlike Bitbucket, you can nest the project under as many directory levels as you want.

6. Create a test repo

Create a test repository inside the container. With the SSH configuration above, you can access it from anywhere using:

ssh mycodehosting.example.org

Then you can:

$ cd repo/
$ mkdir alba
$ cd alba/
$ hg init
$ hg verify
checking changesets
checking manifests
crosschecking files in changesets and manifests
checking files
checked 0 changesets with 0 changes to 0 files
$ cat > README.txt
$ hg addremove
adding README.txt
$ hg commit -m "First flight"
abort: no username supplied
(use 'hg config --edit' to set your username)
$ hg config --edit
$ hg commit -m "First flight"

7. Clone the test repo

Then, from anywhere, you can clone the repo by adding its path to the host:

hg clone ssh://mycodehosting.example.org/repo/alba

You can commit and push now.

If you log in to mycodehosting.example.org, you’ll see that no new files appear. You simply have to run

hg update

to get them. Note that you don’t have to update every time you push new commits on alba to mycodehosting.example.org: all changes are recorded, just not yet reflected in the directory structure inside the container.

If this is a problem for you, you can automate the update every time hg receives a new changeset, for example using the supervisor service shipped with centos-ssh.
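A lighter alternative to supervisor is a Mercurial changegroup hook in the served repository’s .hg/hgrc, which runs the update whenever new changesets arrive:

[hooks]
# update the working directory on every incoming changegroup
changegroup = hg update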

Compare these:

$ hg summary
parent: 1:9f51cd87d912 tip
 Second flight
branch: default
commit: (clean)
update: (current)

$ hg summary
missing pager command 'less', skipping pager
parent: 1:9f51cd87d912
 Second flight
branch: default
commit: (clean)
update: 1 new changesets (update)

The first shows update: (current), so there is nothing to update; the second shows update: 1 new changesets (update).

8. Migrate the code from Bitbucket to self-host

From the container, logged in as the hg user, temporarily import your key to download the repository from the old Bitbucket location following the Bitbucket docs, then:

cd ~
mkdir typeofproject
cd typeofproject
hg clone ssh://hg@bitbucket.org/yourbbuser/youroldbbrepo

Then you can alter the directory as you like:

  • edit the .hg/hgrc file, changing parameters as you like (see the example below)
  • rename the youroldbbrepo directory
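For example, to point the default path at the new self-hosted location instead of Bitbucket (the path here just follows the naming used earlier), .hg/hgrc could look like:

[paths]
default = ssh://mycodehosting.example.org/typeofproject/youroldbbrepo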

Remember to store the SSH keys and config needed to access Bitbucket, if any, only temporarily on the container (permissions should be 600). You can remove these keys when the migration is done.

After a test clone you can drop the Bitbucket repo.

9. Find your flow

With a self-hosted solution you have to maintain the service. This is a relatively simple solution to set up and maintain.

If you miss the old Bitbucket commit web display, you can use PyCharm to see a nice commit tree.

Tested on centos-ssh release 2.6.1 with centos:7.6.1810.


Give flatpak a chance

Talking about package managers on Linux, flatpak has gained attention recently. Installation is easy; this is about installing it on Ubuntu.

If you have trouble installing an application using the package manager shipped with your distro, you can give flatpak a try, since it’s available on 22 distributions by now.

In the case above, the torbrowser package was broken on Ubuntu 18.04, as many users pointed out on the package page. Installing the package via flatpak, everything ran smoothly in minutes.


Search for a package with:

flatpak search PACKAGENAME

then install it with:

flatpak install PACKAGENAME
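For the Tor Browser case above, the commands look like this (the application ID is the launcher published on Flathub; double-check it with flatpak search):

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub com.github.micahflee.torbrowser-launcher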

The command line output is very complete, but if you want a GUI: on Ubuntu, gnome-software (Ubuntu Software) lists flatpak packages too. Just read the package info at the bottom of the description page and search packages as usual.

Full backup of PostgreSQL and restore on docker

In this howto I will show how to back up all PostgreSQL databases, roles and permissions on a server and restore them on another server inside a Docker container.

1. Backup databases on origin server

To back up all PostgreSQL databases automatically, preserving roles and permissions, you can create this cron.d script (this example is from a CentOS system):

# Dump postgresql database automatically
40 22 * * 0 postgres /bin/bash -c 'pg_dumpall -Upostgres | gzip > "/var/backup/dumpall.sql.gz"'

In order:

  1. Run at 22:40 on Sunday
  2. Run as the postgres user (cron is executed by root, and root can impersonate it without a password)
  3. Use the bash interpreter (not sh)
  4. Run pg_dumpall to dump all databases
  5. Pipe the output to gzip, compressing the SQL into /var/backup/dumpall.sql.gz
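To test the job without waiting for Sunday night, you can run the same command by hand:

sudo -u postgres /bin/bash -c 'pg_dumpall -Upostgres | gzip > "/var/backup/dumpall.sql.gz"'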

2. Transfer on another server

Transfer the dump over ssh using something like this:

rsync -rltvz --no-o --no-g myuser@originserver:/var/backup/dumpall.sql.gz /opt/backup/myproj/data/

Use .ssh/config to store connection info.
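For example, a minimal entry (alias, host and key path are placeholders):

Host originserver
    Hostname origin.example.org
    User myuser
    IdentityFile ~/.ssh/id_ed25519_origin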

3. Restore on a clean docker container

The following bash script will create a docker container and populate it with the dump:

#!/bin/bash
set -e
NOW=`date +%Y-%m-%d`
# create an id based on timestamp plus MAC address
UNIQUID=`uuidgen -t`
# container name: postgresql-myproj-YYYY-MM-DD-random-id
NAME="postgresql-myproj-$NOW-$UNIQUID"
echo "Get postgres docker image"
docker pull postgres:9.4-alpine
echo "Generate 1 password with 48 char length"
PPASS=`pwgen 48 1`
mkdir -p /opt/docker_data/$NAME
echo "Save psql password on /opt/docker_data/$NAME/psql.pwd"
echo $PPASS > "/opt/docker_data/$NAME/psql.pwd"
echo "Extract database dump"
mkdir -p /opt/docker_data/$NAME/psqldata
mkdir -p /opt/docker_data/$NAME/share
gunzip -c /opt/backup/myproj/data/dumpall.sql.gz > /opt/docker_data/$NAME/share/restore.out
echo "Run a clean docker container"
docker run --name $NAME -e POSTGRES_PASSWORD=$PPASS -d -p 44432:5432 \
  -v /opt/docker_data/$NAME/psqldata:/var/lib/postgresql/data \
  -v /opt/docker_data/$NAME/share:/extshare \
  --restart always postgres:9.4-alpine
sleep 10
echo "Restore from /extshare/restore.out using user postgres (-upostgres) into database postgres (all dbs)"
docker exec -upostgres $NAME psql -f /extshare/restore.out postgres
echo "Clear the restore.out file"
rm /opt/docker_data/$NAME/share/restore.out

In order, this script:

  1. Download a postgres:9.4-alpine image (choose your version)
  2. Generate a random container name postgresql-myproj-YYYY-MM-DD-random-id based on MAC address and timestamp
  3. Generate a random password and save on a file
  4. Generate a directory structure on host system to keep postgres file and dump outside docker
  5. Create a new postgres container exposed to host on port 44432
  6. Save postgres files on /opt/docker_data/$NAME/psqldata on host
  7. Expose the dump file directory on /extshare on guest
  8. Restore the dump using role postgres
  9. Delete the dump

Resulting directory structure on host will be:

└── postgresql-myproj-2019-12-28-*******
    ├── psqldata [error opening dir]
    ├── psql.pwd
    └── share
        └── restore.out

Then, to connect, just use the same user and password as on the origin server.
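For example, from the host (myuser and mydb are placeholders for a role and a database restored from the origin server):

psql -h localhost -p 44432 -U myuser mydb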


Usually you’d have to add a line like this to /opt/docker_data/$NAME/psqldata/pg_hba.conf (the fourth field is the client address the rule applies to):

host all all <address> md5

giving the host (reachable from inside docker) access to the databases. But the default image ships a handy, permissive entry:

host all all all md5

so you can connect to the database without any extra step.


If you connect to the destination server with ssh and cannot access the port, remember to forward the PostgreSQL port in .ssh/config, like:

Host destinationsrv
    Hostname destinationaddr
    User mydestservuser
    IdentityFile /home/myuser/.ssh/id_rsa_destination_server
    LocalForward 44432 localhost:44432

Remember that if you don’t have a firewall, the docker container will be published with a permissive rule like 0.0.0.0:44432->5432/tcp, so it will be exposed and reachable from outside using the password.
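You can check the published ports of the new container with:

docker port $NAME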

Sort of CDN to serve client-side libraries via an auto-pull git repo on tmpfs

This configuration lets you set up a fast server for client-side libraries on a Debian-based system. Key technologies used are:

  • tmpfs to serve files from volatile memory
  • git / mercurial from github / bitbucket to get files from a public or private repository
  • systemd units to mount tmpfs and sync
  • nginx to serve files to user

In this first step you’ll create a service that reserves some RAM for static files, pulling them from a private or public repo.

Mount tmpfs with systemd

To serve files directly from RAM, you have to mount a tmpfs directory. You can do it in fstab:


tmpfs /mnt/cdn tmpfs rw,nodev,nosuid,size=300M 0 0

Or with a systemd mount unit, mnt-cdn.mount; a minimal version, based on the options described below:


[Unit]
Description=Mount empty CDN directory on volatile memory

[Mount]
What=tmpfs
Where=/mnt/cdn
Type=tmpfs
Options=rw,nodev,nosuid,noatime,size=300M

[Install]
WantedBy=multi-user.target


  • noatime disables recording the last access time on contained files, reducing writes
  • size reserves 300MB of RAM for the /mnt/cdn partition (increase as needed)
  • WantedBy=multi-user.target mounts the partition at runlevel 3 (multi-user mode with networking)

Create the two units in a local path like /usr/local/share/systemd and symlink them into /etc/systemd/system, or create them directly in /etc/systemd/system.

Create the pull service

When /mnt/cdn is successfully mounted, pull static files from your repository with a cdn-pull.service unit; a minimal version, assuming the pull script below is saved as /usr/local/bin/cdn-pull.sh:


[Unit]
Description=Pull on CDN directory.
After=network-online.target

[Service]
Type=oneshot
# path to the sync script shown below; adjust to where you saved it
ExecStart=/usr/local/bin/cdn-pull.sh

[Install]
WantedBy=mnt-cdn.mount


  • Clone the git repository on the system with a user, using a key with an alias
  • Change youruserhere to the user who cloned the repository
  • Add the private key used for the pull to /root/.ssh/config and /root/.ssh/my_private_key


  • WantedBy=mnt-cdn.mount copies the files to RAM only after /mnt/cdn is created
  • After=network-online.target pulls the repository only when the network is ready

On pull, all files will be written by root, owned as youruserhere:youruserhere.

To reduce RAM occupation, the script doesn’t pull the .git directory directly into RAM: it pulls on disk, then copies the files with rsync, excluding .git:


#!/bin/bash
# stop on first error
set -e
cd /srv/cdn-all
git pull
exec rsync -a --exclude=.git --exclude=.gitignore /srv/cdn-all/* /mnt/cdn/

Get systemd to know about the mount and service

To make systemd aware of the new units, reload them:

systemctl daemon-reload

Then do the mount via the systemd unit:

systemctl start mnt-cdn.mount

Enable on boot

Since the cdn-pull.service is tied to mnt-cdn.mount, both have to be enabled to run:

systemctl enable mnt-cdn.mount
systemctl enable cdn-pull.service

This way, on boot:

  1. When the system is ready, the tmpfs is created on /mnt/cdn/
  2. After the tmpfs is successfully created by the unit, the files are automatically synced through cdn-pull.service

Mount will auto-start sync

Start only the mnt-cdn.mount:

systemctl start mnt-cdn.mount

And then ask for info about both services:

systemctl status mnt-cdn.mount
systemctl status cdn-pull.service
  • mnt-cdn.mount has to be active (mounted)
  • cdn-pull.service should be active (script is running) or inactive (sync is completed). Either is fine.

With this set-up, files are automatically pulled and synced to RAM when the system starts and whenever you start or restart mnt-cdn.mount.

Next you can serve these files with nginx; the final step could be to auto-detect pushes to update the files automagically.
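A minimal nginx server block to serve these files, assuming a dedicated hostname like cdn.example.org (the name and cache time here are placeholders to adapt):

server {
    listen 80;
    server_name cdn.example.org;
    # serve static libraries straight from the tmpfs mount
    root /mnt/cdn;
    location / {
        expires 30d;
        add_header Cache-Control public;
        try_files $uri =404;
    }
}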


Create a Windows 10 recovery disk on Linux

In this howto there are the steps to follow when a Windows 10 OS is no longer bootable and you don’t have a recovery disk. This is a typical case after a new OS is installed in dual boot, or the boot partition was altered.

  1. Download the Windows 10 ISO:
    1. Download the official Windows 10 image
  2. Prepare the USB drive to be bootable:
    1. Open GParted with gparted /dev/DEVICE-TO-ERASE
    2. Select the USB drive
    3. Device > New partition table
    4. Select GPT
    5. Apply: this will delete any data on the USB drive
    6. Create a new NTFS partition, then Apply (do not use FAT32, since some files can be greater than 4GB)
    7. Close GParted
  3. Write the files:
    1. Unplug and replug the USB drive
    2. Copy all the Windows files to the empty USB drive using 7zip with:
      7z x -y -o/media/user/path-to-USB/ Win10_1809Oct_Italian_x64.iso
    3. If something goes wrong during the copy, you can mount the ISO image, rsync the source with the USB drive (the trailing slash is important), then umount:
      cd path/to/usb/drive
      rsync -avzp /media/myuser/CCCOMA_X64FRE_IT-IT_DV91/ .
  4. Add the boot flag:
    1. Open GParted, selecting the device just written
    2. Select the new partition
    3. Select Partition > Manage flags
    4. Select the boot flag (esp will be auto-selected)
  5. Use Windows tools:
    1. Follow this howto by MS to recover the MBR, restore the BCD or similar actions

You can follow the same steps to write any Windows recovery ISO to a USB drive.

Disable password authentication on sshd

Note that adduser --disabled-password will not disable OpenSSH password authentication: it only creates a user without a usable login password.

To disable password authentication, set these values in /etc/ssh/sshd_config:

PasswordAuthentication no
UsePAM no
PermitRootLogin no
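Before restarting sshd, you can check the configuration for syntax errors with:

sshd -t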

Then apply the changes with:

systemctl restart sshd

Existing connections will not be reset, so before logging out, try to log in from a different terminal to check that you still can.

Actually, PermitRootLogin no disables root login by any method, but it’s a useful addition. Remember to add at least one user to the sudo group, or you will not be able to operate as super-user without using su - root.

To check that password auth is disabled:

ssh -o PreferredAuthentications=password USER@HOST

Expected output:

USER@HOST: Permission denied (publickey).

Partition a new disk on linux using fdisk, lsblk and mkfs

First, you have to create a new partition.

You can list all available storage devices with:

lsblk

If your disk is new, the device will appear empty (without children in the tree).

Open the device with fdisk:

fdisk /dev/sdc

Press m to show the manual.

To create a partition larger than 2TB, you have to use a GPT partition table (g), then create a new partition (n); with (p) it will show you how the partition table will look before you write (w) it.

Then lsblk will show the device with the new partition, e.g.:

sdc 8:32 0 3.5T 0 disk
└─sdc1 8:33 0 3.5T 0 part

Then format the new partition /dev/sdc1 with the specified filesystem (e.g. ext4):

mkfs -t ext4 /dev/sdc1

If you didn’t take note of the UUID shown by mkfs after formatting, use the blkid command to list the UUID of the device; referencing the UUID keeps fstab valid even if the device name changes.

Add the partition to /etc/fstab (change the last 0 to 1 to check the filesystem on startup):

UUID=xxxxxxx-xxx-xxxx-xxx-xxxx /mnt/mydata ext4 defaults 0 0

To get the UUID later:

sudo blkid /dev/sdc1

Create the mount directory with:

mkdir /mnt/mydata

Then mount the new partition with:

mount /mnt/mydata
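To verify that the mount succeeded:

df -h /mnt/mydata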

Get the number of files or directories using tree

tree is a useful Linux command that displays a tree representation of a full directory structure, or part of it.

On a Debian-based distro like Ubuntu, install it with:

sudo apt-get install tree

The last line of the tree output is a summary like this:

346 directories, 174 files

If you’re changing files and directories and want a real-time update of the file and directory count, you can use watch:

watch -n 20 'tree | tail -n 1'

tree prints the tree, tail extracts the last line, then watch refreshes the result every 20 seconds.
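If you only care about directories, the -d flag restricts tree (and therefore the count) to directories only:

watch -n 20 'tree -d | tail -n 1'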

Nginx configuration for Django

Django is a powerful framework for building websites. To run a production website, an application server is usually used, so nginx will do two basic things:

  • Serve your Django application from the application server port on the web port (reverse proxy)
  • Serve static and media files

The application server used in this example is gunicorn, the application server chosen by Instagram in its early days, but it can be anything running on port 9999. Change the port number in the example as required.
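For reference, a matching gunicorn invocation could look like this (myproject is a placeholder for your Django project module):

gunicorn myproject.wsgi:application --bind 127.0.0.1:9999 --workers 3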

The following nginx conf was adapted from this, with some additions. It contains:

  • a commented non-www to www website redirect
  • gzip for javascript, json, css and proxied routes
  • media files with etag (1 year)
  • static files with etag (1 minute)
  • a host-based favicon distributor (reusable as is)
  • a commented basic auth to make a website private
  • a reverse proxy to gunicorn
  • a simple block for a common type of malicious activity

It works fine with Django 1 and 2.

# Howto: https://chirale.org/2018/08/23/nginx-configuration-for-django/
# uncomment for redirect
# server {
#    # redirect WITH www from example.com and example.net
#    listen 80;
#    server_name example.com example.net;
#    return 301 http://www.example.com$request_uri;
# }
server {
    listen 80;
    # the domain name it will serve for
    server_name www.example.com;
    charset utf-8;
    # max upload size
    client_max_body_size 75M;
    # enable gzip for proxy requests
    gzip on;
    gzip_proxied any;
    gzip_vary on;
    gzip_http_version 1.1;
    gzip_types application/javascript application/json text/css text/xml;
    gzip_comp_level 4;
    # @see http://uwsgi-docs.readthedocs.org/en/latest/tutorials/Django_and_nginx.html#configure-nginx-for-your-site
    # Django media
    location /media {
        etag on;
        expires 365d;
        alias /path/to/media_root;  # your Django project's media files - amend as required
    }
    location /static {
        etag on;
        expires 1m;
        alias /path/to/static_root; # your Django project's static files - amend as required
    }
    location /favicon.ico {
        # all favicons inside the /path/to/favicons/ directory
        # notation: www.example.com.ico
        alias /path/to/favicons/$host.ico;
    }
    location / {
        # an HTTP header important enough to have its own Wikipedia entry:
        #   http://en.wikipedia.org/wiki/X-Forwarded-For
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # enable this if and only if you use HTTPS, this helps Rack
        # set the proper protocol for doing redirects:
        # proxy_set_header X-Forwarded-Proto https;
        # pass the Host: header from the client right along so redirects
        # can be set properly within the Rack application
        proxy_set_header Host $http_host;
        # we don't want nginx trying to do something clever with
        # redirects, we set the Host: header above already.
        proxy_redirect off;
        # set "proxy_buffering off" *only* for Rainbows! when doing
        # Comet/long-poll stuff.  It's also safe to set if you're
        # only serving fast clients with Unicorn + nginx.
        # Otherwise you _want_ nginx to buffer responses to slow
        # clients, really.
        # proxy_buffering off;
        # Uncomment for maintenance
        ### auth_basic "Insert password here";
        ### auth_basic_user_file /path/to/.htpasswd;
        proxy_connect_timeout       30000;
        proxy_send_timeout          30000;
        proxy_read_timeout          30000;
        send_timeout                30000;
        # @see https://eng.eelcowesemann.nl/linux-unix-android/nginx/nginx-blocking/ and seositecheckup.com
        if ($http_user_agent ~ "libwww-perl") {
            return 403;
        }
        # Try to serve static files from nginx, no point in making an
        # *application* server like Unicorn/Rainbows! serve static files.
        if (!-f $request_filename) {
            proxy_pass http://localhost:9999;
        }
    }
}
Run nginx -t to check the configuration, then systemctl reload nginx to apply it.

This is the http version; to configure the website for https, follow this howto.