Migrate Mercurial code hosting from Bitbucket to your server in 9 steps using Docker

Atlassian is dropping support for Mercurial on the popular Bitbucket service. Here is a proof of concept that uses a Docker container as a separate environment to self-host your code with basic Mercurial features, without bells and whistles.

To do so, a Docker container based on the popular and lightweight jdeathe/centos-ssh image will be used. This example assumes a remote server with the Docker service up and running.

1. Generate a public/private key pair

Create a new key pair to authenticate to the new container. Protect it with a passphrase so it can be deployed safely on external servers. In this example, an Ed25519 key is used.

ssh-keygen -t ed25519 -C "Key for xxx at xxx on xxx"

2. Choose keys and passwords

Choose a name for your new container here:

export SSHCONTAINER=mycodehosting.example.org

Create a new file named .env with the following content, customized with:

  • the content of the generated .pub file in the SSH_AUTHORIZED_KEYS row
  • a strong password to switch from hg to root via sudo su - root
  • your timezone

SSH_AUTHORIZED_KEYS=*******PASTE PUB KEY here ***********
SSH_USER_PASSWORD=*******STRONG PASSWORD HERE (without ")***********
SYSTEM_TIMEZONE=********YOUR TIMEZONE HERE e.g. Europe/Rome***********
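The .env file can also be generated with a small script. A minimal sketch, assuming the key from step 1 lives at ~/.ssh/id_ed25519.pub (all values below are placeholders to adjust):

```shell
#!/bin/sh
set -e
# All values below are examples, adjust to your setup
PUBKEY_FILE="$HOME/.ssh/id_ed25519.pub"
# Fall back to a placeholder if the key is not there yet
PUBKEY=$(cat "$PUBKEY_FILE" 2>/dev/null || echo '*** PASTE PUB KEY here ***')
PASSWORD='use-a-strong-password-here'   # must not contain double quotes
TIMEZONE='Europe/Rome'

cat > .env <<EOF
SSH_AUTHORIZED_KEYS=$PUBKEY
SSH_USER_PASSWORD=$PASSWORD
SYSTEM_TIMEZONE=$TIMEZONE
EOF
```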

This configuration:

  • Allows the connection using the private key generated before
  • Disables password authentication
  • Sets the default user name to hg
  • Allows all users to switch to root via sudo (there will be only the hg user)
  • Sets the server to your preferred timezone

3. Create the centos-ssh container

In the same directory where the .env file created before resides, create a new container:

docker run -d \
  --name $SSHCONTAINER \
  -p 12120:22 \
  --env-file .env \
  -v /opt/path/to/some/host/dir:/home \
  jdeathe/centos-ssh

This command will:

  • create a detached container named $SSHCONTAINER
  • expose the container SSH port on host port 12120. To limit it to localhost only, use -p 127.0.0.1:12120:22; otherwise Docker sets up an iptables rule binding the port on all interfaces, because Docker manages iptables on its own (you can also disable iptables handling in Docker)
  • map the whole container /home directory to a new directory created by root on the host, /opt/path/to/some/host/dir

Note: do not use ACLs (e.g. setfacl) on /opt/path/to/some/host/dir or the .ssh directory will break (e.g. Bad owner or permissions)
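A quick way to check that no ACL has sneaked onto the host directory is to look for a trailing + in the ls mode string. A sketch, with a hypothetical path:

```shell
#!/bin/sh
set -e
# Hypothetical host directory, adjust the path
DIR=/tmp/hg-home-example
mkdir -p "$DIR"

# A trailing '+' in the mode string (e.g. drwxr-xr-x+) means an ACL is set,
# which would break the container's .ssh permission checks
MODE=$(ls -ld "$DIR" | awk '{print $1}')
case "$MODE" in
  *+) echo "ACL present on $DIR, remove it with setfacl -b" ;;
  *)  echo "no ACL on $DIR" ;;
esac
```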

4. Install mercurial on container

Now install Mercurial and its dependencies on the container. You can log in as root using docker:

docker exec -it $SSHCONTAINER bash

or by saving this script, making it executable with chmod a+x, and launching it:

#!/bin/bash
set -e
docker exec -it -u root $SSHCONTAINER yum install -y python36-devel python36-setuptools gcc
docker exec -it -u root $SSHCONTAINER /usr/bin/pip3 install mercurial

Restart the container:

docker container restart $SSHCONTAINER

Then check that mercurial works for the hg user:

docker exec -it -u hg $SSHCONTAINER hg --version

Then, if the container is running smoothly, you can update it to always restart on reboot or when the docker service restarts:

docker container update $SSHCONTAINER --restart always

then check if it’s applied:

docker container inspect $SSHCONTAINER | grep -B0 -A3 RestartPolicy

5. Login to container directly

Now on your local machine you can connect directly to the container using SSH without caring about the host.

By default an iptables rule is created by Docker to allow connections from outside. You still have to specify the port and the user name in .ssh/config like this:

Host mycodehosting.example.org
    Hostname mycodehosting.example.org
    User hg
    Port 12120
    PreferredAuthentications publickey
    IdentityFile /home/chirale/.ssh/id_ed25519_mycodehosting_example_org

This configuration is useful when you create a subdomain exclusively to host code: you then associate it with a port and a username to obtain a Mercurial URL like this:


where dir and subdir live directly in the /home/hg directory of the container, on the host under /opt/path/to/some/host/dir/hg/test/project. Unlike Bitbucket, you can have as many directory levels as you want to host the project.

6. Create a test repo

Create a test repository inside this container. With the SSH configuration above you can access it from anywhere using:

ssh mycodehosting.example.org

Then you can:

cd repo/
mkdir alba
cd alba/
hg init
hg verify
checking changesets
checking manifests
crosschecking files in changesets and manifests
checking files
checked 0 changesets with 0 changes to 0 files
cat > README.txt
hg addremove
adding README.txt
hg commit -m "First flight"
abort: no username supplied
(use 'hg config --edit' to set your username)
hg config --edit
hg commit -m "First flight"
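The `hg config --edit` step above opens your user-level hgrc; the entry Mercurial is asking for is just a [ui] username (name and email here are examples):

```ini
[ui]
username = Jane Doe <jane@example.org>
```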

7. Clone the test repo

Then from anywhere you can clone the repo:

hg clone ssh://mycodehosting.example.org/repo/alba

You can commit and push now.

If you log in to mycodehosting.example.org, you will see that no new file was added. You simply have to run

hg update

to get it. Note that you don’t have to update every time you push new commits on alba to mycodehosting.example.org. All changes are recorded anyway, just not yet reflected in the directory structure inside the container.

If this is a problem for you, you can automate the update every time hg receives a new changeset, for example using the supervisor service shipped with centos-ssh.
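A lighter alternative is Mercurial's own changegroup hook. A sketch, assuming the served repository is the alba one created above: adding this to that repository's .hg/hgrc on the container updates the working directory after every push:

```ini
[hooks]
changegroup = hg update
```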

Compare these:

hg summary
parent: 1:9f51cd87d912 tip
 Second flight
branch: default
commit: (clean)
update: (current)

hg summary
missing pager command 'less', skipping pager
parent: 1:9f51cd87d912
 Second flight
branch: default
commit: (clean)
update: 1 new changesets (update)

The first has no pending change (update: (current)), while the second has one (update: 1 new changesets (update)).

8. Migrate the code from Bitbucket to self-host

From the container, logged in as the hg user, temporarily import your key to download the repository from the old Bitbucket location following the Bitbucket docs, then:

cd ~
mkdir typeofproject
cd typeofproject
hg clone ssh://hg@bitbucket.org/yourbbuser/youroldbbrepo

Then you can alter the directory as you like:

  • edit the .hg/hgrc file, changing parameters as you like
  • rename the youroldbbrepo directory
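For instance, the migrated repository's .hg/hgrc can point default at the new self-hosted location. A sketch, using the hypothetical names from this example:

```ini
[paths]
default = ssh://mycodehosting.example.org/typeofproject/youroldbbrepo
```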

Remember to temporarily store on the container the SSH keys and config needed to access Bitbucket, if any (permissions should be 600). You can remove these keys when the migration is done.

After a test clone you can drop the Bitbucket repo.

9. Find your flow

With a self-hosted solution you have to maintain the service yourself. This one is relatively simple to set up and maintain.

If you are comfortable with the old Bitbucket commit web display, you can use PyCharm to see a nice commit tree.

Tested on release 2.6.1 with centos:7.6.1810.


Full backup of PostgreSQL and restore on docker

In this howto I will show how to back up all PostgreSQL databases, roles and permissions on a server and restore them on another server in a Docker container.

1. Backup databases on origin server

To back up all PostgreSQL databases automatically, preserving roles and permissions, you can create this cron.d script (this example is from a CentOS server):

# Dump postgresql database automatically
40 22 * * 0 postgres /bin/bash -c 'pg_dumpall -Upostgres | gzip > "/var/backup/dumpall.sql.gz"'

In order:

  1. Run at 22:40 on Sunday
  2. Run as the postgres user (cron is executed by root, and root can impersonate it without a password)
  3. Use the bash interpreter (not sh)
  4. Run pg_dumpall to dump all databases
  5. Pipe the output to gzip to compress the SQL into a /var/backup/dumpall.sql.gz file
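Before trusting the backup chain end to end, it helps to verify that a dump file decompresses cleanly. A minimal sketch (the path here is a stand-in for the real /var/backup/dumpall.sql.gz):

```shell
#!/bin/sh
set -e
# Simulated dump; in production this is the transferred dump file
DUMP=/tmp/dumpall.sql.gz
printf 'SELECT 1;\n' | gzip > "$DUMP"

# gunzip -t checks archive integrity without extracting it
if gunzip -t "$DUMP"; then
    echo "dump OK"
else
    echo "dump CORRUPT"
fi
```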

2. Transfer to another server

Transfer the dump over ssh using something like this:

rsync -rltvz --no-o --no-g myuser@originserver:/var/backup/dumpall.sql.gz /opt/backup/myproj/data/

Use .ssh/config to store connection info.
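For example, a sketch of an .ssh/config entry for the originserver alias used above (host name, user and key path are placeholders):

```
Host originserver
    Hostname origin.example.org
    User myuser
    IdentityFile ~/.ssh/id_ed25519_origin_server
```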

3. Restore on a clean docker container

The following bash script will create a Docker container and populate it with the transferred dump:

set -e
NOW=`date +%Y-%m-%d`
# create an id based on timestamp plus MAC address
UNIQUID=`uuidgen -t`
NAME="postgresql-myproj-$NOW-$UNIQUID"
echo "Remove identical container, keep data on directory"
docker rm -f $NAME 2>/dev/null || true
echo "Get postgres docker image"
docker pull postgres:9.4-alpine
echo "Generate 1 password with 48 char length"
PPASS=`pwgen 48 1`
mkdir -p /opt/docker_data/$NAME
echo "Save psql password on /opt/docker_data/$NAME/psql.pwd"
echo $PPASS > "/opt/docker_data/$NAME/psql.pwd"
echo "Extract database"
mkdir -p /opt/docker_data/$NAME/psqldata
mkdir -p /opt/docker_data/$NAME/share
gunzip -c /opt/backup/myproj/data/dumpall.sql.gz > /opt/docker_data/$NAME/share/restore.out
echo "Run a clean docker"
docker run --name $NAME -e POSTGRES_PASSWORD=$PPASS -d -p 44432:5432 -v /opt/docker_data/$NAME/psqldata:/var/lib/postgresql/data -v /opt/docker_data/$NAME/share:/extshare --restart always postgres:9.4-alpine
sleep 10
echo "Restore from /extshare/restore.out using user postgres (-upostgres) into database postgres (all dbs)"
docker exec -upostgres $NAME psql -f /extshare/restore.out postgres
echo "Clear the restore.out file"
rm /opt/docker_data/$NAME/share/restore.out

In order, this script:

  1. Generates a container name postgresql-myproj-YYYY-MM-DD-random-id based on the date and a time-based UUID (MAC address plus timestamp)
  2. Downloads a postgres:9.4-alpine image (choose your version)
  3. Generates a random password and saves it to a file
  4. Creates a directory structure on the host system to keep postgres files and the dump outside Docker
  5. Creates a new postgres container exposed to the host on port 44432
  6. Saves postgres files in /opt/docker_data/$NAME/psqldata on the host
  7. Exposes the dump file directory as /extshare on the guest
  8. Restores the dump using the postgres role
  9. Deletes the dump

Resulting directory structure on host will be:

└── postgresql-myproj-2019-28-12-*******
    β”œβ”€β”€ psqldata [error opening dir]
    β”œβ”€β”€ psql.pwd
    └── share
        └── restore.out

Then, to connect, just use the same user and password as on the origin server's database.


Usually, in /opt/docker_data/$NAME/psqldata/pg_hba.conf, you would have to add a line like this (the address column states who can connect; 172.17.0.0/16 is the default Docker bridge network, adjust to your setup):

host all all 172.17.0.0/16 md5

giving the host (reachable from inside docker) full access to the database. But the default image ships a handy, permissive entry:

host all all all md5

So you can connect to the database without any further step.


If you connect to the destination server with ssh and cannot access the port directly, remember to forward the PostgreSQL port in .ssh/config like:

Host destinationsrv
    Hostname destinationaddr
    User mydestservuser
    IdentityFile /home/myuser/.ssh/id_rsa_destination_server
    LocalForward 44432 localhost:44432

Remember that if you don’t have a firewall, the Docker container will start with a permissive port mapping like 0.0.0.0:44432->5432/tcp, so the database will be exposed and reachable from outside using the password. To avoid this, bind the port to localhost only (-p 127.0.0.1:44432:5432).