IT and beyond

This howto lists the steps to follow when a Windows 10 OS is no longer bootable and you don't have a recovery disk. This is a typical case after a new OS is installed in dual boot, or after the boot partition has been altered.

  1. Download the Windows 10 ISO:
    1. Download the official Windows 10 image
  2. Prepare the USB drive to be bootable:
    1. Open GParted with
gparted /dev/DEVICE-TO-ERASE
    2. Select the USB drive
    3. Device > New partition table
    4. Select GPT
    5. Apply: this will delete all data on the USB drive
    6. Create a new NTFS partition, then Apply (do not use FAT32, since some files can be larger than 4 GB)
    7. Close GParted
  3. Write the files:
    1. Unplug and replug the USB drive
    2. Extract all Windows files to the empty USB drive using 7zip:
      7z x -y -o/media/user/path-to-USB/ Win10_1809Oct_Italian_x64.iso
    3. If something goes wrong during the copy, you can mount the ISO image and rsync the source with the USB drive (the trailing slash is important):
      cd path/to/usb/drive
      rsync -avzp /media/myuser/CCCOMA_X64FRE_IT-IT_DV91/ .
    4. Unmount the drive with umount
  4. Add the boot flag:
    1. Open GParted, selecting the device just written
    2. Select the new partition
    3. Select Partition > Manage flags
    4. Select the boot flag (esp will be auto-selected)
  5. Use Windows tools:
    1. Follow this howto by MS to recover the MBR, restore the BCD or perform similar actions

    You can follow the same steps to write a Windows recovery ISO to a USB drive.

    To disallow password authentication on SSH: note that adduser --disabled-password will not disable OpenSSH password authentication.

    To disable password authentication, set these values in /etc/ssh/sshd_config:

    PasswordAuthentication no
    UsePAM no
    PermitRootLogin no

    Then run:

    systemctl restart sshd

    to apply the changes.

    The connection will not be reset, so before logging out, try to log in from a different terminal to check that you can still log in.

    Actually, PermitRootLogin no disables root login for any method, but it's a useful addition. Remember to add at least one user to the sudo group, or you will not be able to operate as super-user without using su - root.

    To check if password auth is disabled:

    ssh -o PreferredAuthentications=password USER@HOST

    Expected output is:

    USER@HOST: Permission denied (publickey).
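The three directives can also be double-checked with a short loop before restarting sshd. A minimal sketch, using a temporary file in place of /etc/ssh/sshd_config so it is safe to try anywhere:

```shell
# Verify that the three directives are set to "no" in a config file.
# A temp file stands in for /etc/ssh/sshd_config in this sketch.
cfg=$(mktemp)
printf 'PasswordAuthentication no\nUsePAM no\nPermitRootLogin no\n' > "$cfg"
missing=0
for key in PasswordAuthentication UsePAM PermitRootLogin; do
    grep -q "^$key no" "$cfg" || missing=$((missing + 1))
done
echo "$missing directives still enabled"
rm -f "$cfg"
```

On a real system, point the loop at /etc/ssh/sshd_config instead of the temp file.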

    First, you have to create a new partition.

    You can list all available storage devices with:

    lsblk

    If your disk is new, the device will appear empty (without children in the tree).


    Then open the disk with fdisk (the device name is an example):

    fdisk /dev/sdc

    Press m to show the manual.

    To create a partition larger than 2 TB, you have to use a GPT partition table (g), then create a new partition (n); (p) shows how the partition table will look before you write (w) it.

    Then, lsblk will show the device with the new partition, e.g.:

    sdc 8:32 0 3.5T 0 disk 
    └─sdc1 8:33 0 3.5T 0 part

    Then format the new partition /dev/sdc1 with the specified filesystem (e.g. ext4):

    mkfs -t ext4 /dev/sdc1

    If you haven't taken note of the UUID shown by mkfs after formatting, use the blkid command to list the UUID of the device. Referring to the UUID means the fstab entry stays valid even if the device name changes.

    And add it to /etc/fstab (set the last field to a nonzero value, conventionally 2 for non-root filesystems, to have it checked on startup):

    UUID=xxxxxxx-xxx-xxxx-xxx-xxxx /mnt/mydata ext4 defaults 0 0

    To get the UUID later:

    sudo blkid /dev/sdc1

    Create the mount directory with:

    mkdir /mnt/mydata

    Then mount the new partition with:

    mount /mnt/mydata
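The fstab line can also be generated with a tiny helper, so the UUID is never copied by hand. A sketch with a hypothetical fstab_entry function and a placeholder UUID; on a real system the UUID would come from blkid -s UUID -o value /dev/sdc1:

```shell
# Hypothetical helper: print an fstab entry for a given UUID, mount point and fs type.
fstab_entry() {
    printf 'UUID=%s %s %s defaults 0 0\n' "$1" "$2" "$3"
}
# Placeholder UUID; on a real system:
#   fstab_entry "$(blkid -s UUID -o value /dev/sdc1)" /mnt/mydata ext4
fstab_entry "xxxxxxx-xxx-xxxx-xxx-xxxx" /mnt/mydata ext4
```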

    tree is a useful Linux command to display a tree representation of a full directory structure or a part of it.

    On a Debian-based distro like Ubuntu, install it with:

    sudo apt-get install tree

    The last line of tree's output is a summary like this:

    346 directories, 174 files

    If you're changing files and directories and you want a real-time update of the file and directory counts, you can use watch.

    watch -n 20 'tree | tail -n 1'

    tree prints the tree, tail extracts the last line, then watch refreshes the result every 20 seconds.
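If tree is not available, a similar summary line can be produced with find alone. A sketch that builds a small throwaway directory so the counts are predictable:

```shell
# Reproduce tree's "N directories, M files" summary with find.
dir=$(mktemp -d)
mkdir -p "$dir/a/b"
touch "$dir/a/f1" "$dir/a/b/f2" "$dir/f3"
dirs=$(find "$dir" -mindepth 1 -type d | wc -l)
files=$(find "$dir" -type f | wc -l)
echo "$dirs directories, $files files"
rm -r "$dir"
```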

    Django is a powerful framework for building websites. To run a production website, an application server is usually used, so nginx will do two basic things:

    • Serve your Django application from the application server port to the web port (Reverse Proxy)
    • Serve static and media files

    The application server used in this example is gunicorn, the application server chosen by Instagram in its early days, but it can be anything running on port 9999. Change the port number in the example as required.

    The following nginx conf was adapted from this, with some additions. It contains:

    • a commented non www to www website redirect
    • gzip for javascript, json, css and proxy routes
    • media files with etag (1 year)
    • static files with etag (1 minute)
    • a host-based favicon distributor (reusable as is)
    • a commented basic auth to make a website private
    • reverse proxy to gunicorn
    • a simple block for a common type of malicious activity

    It works fine with Django 1 and 2.

    # Howto:
    # uncomment for redirect
    # server {
    #    # redirect non-www to www (domains are placeholders, replace with yours)
    #    listen 80;
    #    server_name example.com;
    #    return 301 $scheme://www.example.com$request_uri;
    # }
    server {
        listen      80;
        # the domain name it will serve for (placeholder, replace with yours)
        server_name www.example.com;
        charset     utf-8;
        # max upload size
        client_max_body_size 75M;
        # enable gzip for proxy requests
        gzip on;
        gzip_proxied any;
        gzip_vary on;
        gzip_http_version 1.1;
        gzip_types application/javascript application/json text/css text/xml;
        gzip_comp_level 4;
        # @see
        # Django media
        location /media  {
            etag on;
            expires 365d;
            alias /path/to/media_root;  # your Django project's media files - amend as required
        }
        location /static {
            etag on;
            expires 1m;
            alias /path/to/static_root; # your Django project's static files - amend as required
        }
        location /favicon.ico {
            # all favicons inside the /path/to/favicons/ directory
            # notation: $host.ico, e.g. www.example.com.ico
            alias /path/to/favicons/$host.ico;
        }
        location / {
            # an HTTP header important enough to have its own Wikipedia entry:
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            # enable this if and only if you use HTTPS, this helps Rack
            # set the proper protocol for doing redirects:
            # proxy_set_header X-Forwarded-Proto https;
            # pass the Host: header from the client right along so redirects
            # can be set properly within the Rack application
            proxy_set_header Host $http_host;
            # we don't want nginx trying to do something clever with
            # redirects, we set the Host: header above already.
            proxy_redirect off;
            # set "proxy_buffering off" *only* for Rainbows! when doing
            # Comet/long-poll stuff.  It's also safe to set if you're
            # only serving fast clients with Unicorn + nginx.
            # Otherwise you _want_ nginx to buffer responses to slow
            # clients, really.
            # proxy_buffering off;
            # Uncomment for maintenance
            ### auth_basic "Insert password here";
            ### auth_basic_user_file /path/to/.htpasswd;
            proxy_connect_timeout       30000;
            proxy_send_timeout          30000;
            proxy_read_timeout          30000;
            send_timeout                30000;
            # @see and
            if ($http_user_agent ~ "libwww-perl") {
                return 403;
            }
            # Try to serve static files from nginx, no point in making an
            # *application* server like Unicorn/Rainbows! serve static files.
            if (!-f $request_filename) {
                proxy_pass http://localhost:9999;
            }
        }
    }
    Run nginx -t to check and then systemctl reload nginx to apply.

    This is the HTTP version; to configure the website for HTTPS, follow this howto.

    After a failed restart of the nginx server, journalctl -xe can show this error:

    nginx: [emerg] open() "/usr/share/nginx/off" failed (13: Permission denied)

    This is caused by a misconfiguration in nginx.conf, or in a conf file inside the /etc/nginx/conf.d/ directory, where there's something like:

    error_log off;

    This is the wrong way to disable logs: nginx is actually trying to write to a file called off inside the default folder.

    The right way

    To disable error_log, simply do not declare it in your .conf file.

    To stop logging accesses, you can disable access_log by writing this in your .conf file:

    access_log off;
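access_log can also be switched off selectively instead of globally. A sketch of a per-location override (the path is illustrative):

```nginx
# Keep access logging for the site, but silence it for static assets only.
location /static {
    access_log off;
    alias /path/to/static_root;  # illustrative path
}
```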

    mdadm is the utility that checks and reports failures on RAID disks. The usual way this Linux application sends its messages is a plain old e-mail. In this howto you'll find the instructions to use an external mail server with mdadm.

    First, replace sendmail with an external email account. After you've configured and tested msmtp, you're ready to configure mdadm.

    Configure mdadm with the new SMTP

    Change /etc/mdadm/mdadm.conf to (the addresses are placeholders, replace them with your real ones):

    # instruct the monitoring daemon where to send mail alerts
    # MAILADDR root
    MAILADDR recipient@example.com
    MAILFROM sender@example.com

    where:

    • MAILFROM is your FROM e-mail, the address or alias you're sending emails from.
    • MAILADDR is your recipient TO e-mail. It must be a frequently-checked address, since failure alerts are sent there.

    Actually, using /etc/aliases and assigning root to the right recipient should allow you to skip this step, but you have to test it yourself.

    Send test message with mdadm

    Type this command to emulate a disk failure message from mdadm:

    sudo mdadm --monitor --scan --test -1

    If you receive the message in your inbox, this job is finally done!

    This is an automatically generated mail message from mdadm
    A TestMessage event had been detected on md device /dev/md/1.
    Faithfully yours, etc.
    P.S. The /proc/mdstat file currently contains the following:
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md0 : active raid...
    md1 : active raid...
    unused devices:


    To use an external SMTP for all system e-mails, you have to install these:

    sudo apt-get install msmtp msmtp-mta

    msmtp-mta provides the sendmail command usable by any application that expects sendmail. This way you don't have to install and configure Postfix, since you'll rely on an external SMTP service.

    Create the config file for msmtp

    This is an example based on the popular Gmail by Google:

    # Example for a system wide configuration file
    # /etc/msmtprc
    # A system wide configuration file is optional.
    # If it exists, it usually defines a default account.
    # This allows msmtp to be used like /usr/sbin/sendmail.
    account default
    aliases /etc/aliases
    # The SMTP smarthost.
    # host mailhub.oursite.example
    host smtp.gmail.com
    # Construct envelope-from addresses of the form "user@oursite.example".
    #auto_from on
    #maildomain oursite.example
    # this fixes the error: msmtp: account default from /etc/msmtprc-php: envelope-from address is missing
    ### auto_from on
    # Sender address, account and app password: placeholders, replace with your real data.
    from sender@example.com
    user youraccount@gmail.com
    password your-gmail-app-password
    # Use TLS.
    tls on
    auth on
    tls_trust_file /etc/ssl/certs/ca-certificates.crt
    # Syslog logging with facility LOG_MAIL instead of the default LOG_USER.
    syslog LOG_MAIL
    port 587

    Replace these with the real data from your e-mail account.

    In this example, the user is the Gmail account that created the Gmail app password. That account has to have the from address configured as a sender address alias in Gmail.

    Add aliases

    To match local users with sender address, create the aliases file:

    nano /etc/aliases

    # See man 5 aliases for format
    # postmaster: root
    root: recipient@example.com

    (the address is a placeholder; point root to the mailbox you actually read).

    If you have any process sending emails as a specific local user, add it to this list with the right email to use. Any occurrence of the original address will be translated to the right address.

    First test

    Type this to test the new configuration (the recipient address is a placeholder):

    printf 'Subject: write your subject\n\nThis is a test body.\n' | msmtp -a default recipient@example.com

    Additional step

    If you need to use the mail command, install mailutils without installing Postfix:

    sudo apt-get install --no-install-recommends mailutils

    Then you can use something like:

    echo -e "Print a variable here $MYVAR.\n\n-- \nSign here" | mail -s "Type your subject here" $EMAIL

    Here some variables are included, as you would use them in a custom script.

    Using msmtp command directly

    If you don't want to install mailutils to get the mail command, you can use msmtp directly:

    (echo -e "Subject: Type your subject here"; echo; echo -e "Print a variable here $MYVAR.\n\n-- \nSign here") | msmtp -a default $EMAIL
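The echo plumbing can be wrapped in a small helper; compose_mail is a hypothetical name, and its output is meant to be piped into msmtp:

```shell
# Hypothetical helper: compose a minimal mail message (Subject header, blank line, body).
compose_mail() {
    printf 'Subject: %s\n\n%s\n' "$1" "$2"
}
compose_mail "Type your subject here" "Print a variable here $MYVAR."
# on a real system: compose_mail "subject" "body" | msmtp -a default "$EMAIL"
```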


    Now any application using sendmail will actually use your external SMTP service. Use a mail server supporting TLS to avoid transmitting clear text filled with system information.

    Tested on Ubuntu Linux 18

    With the General Data Protection Regulation (GDPR) enforced by the European Union, logs have to be cleaned regularly to delete IP addresses and other information about visitors. This can be interpreted as a way to protect an emerging and debated right, the right to be forgotten.

    This new regulation impacts every automated log system out there. Since Sentry is a good open source error monitoring software* and it's widely used, this guide will show how to clean Sentry logs on Linux systems according to the GDPR, using the sentry cleanup command line utility.

    Set a time limit for logs

    Before starting, discover the maximum time a log can be kept according to the policy of the service you're working on.

    In the examples below, the maximum time a log can be kept is 26 months, one of the retention periods proposed by Google Analytics in its cleanup settings.

    A 26-month limit for logs stored in Sentry is set like this:

    env SENTRY_CONF='/usr/local/etc/sentry' sentry cleanup --days 749

    where /usr/local/etc/sentry is the directory where config.yml and sentry.conf.py are located, or:

    env SENTRY_CONF='/usr/local/etc/sentry' sentry cleanup --days 749 --project 5

    where 5 is the id of the project you can find in Project settings > Client Keys (DSN) as the very last part of the DSN path (always an integer number).

    749 days are calculated like this:

    26 months × 30 days = 780 days
    780 days − 31 days = 749 days

    The 31 days are a safety margin, so logs can be deleted on the same day of each month.
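The day count can be double-checked with shell arithmetic:

```shell
# 26 months at 30 days each, minus the 31-day safety margin
days=$(( 26 * 30 - 31 ))
echo "$days days"
```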

    Apparently, sentry cleanup needs to run as root to access the postgres user, and thus all Sentry database tables, so we have to put it in root's crontab.

    Schedule the cleanup

    1. Log in as root with su - or sudo bash
    2. Run crontab -e
    3. Add a command line like this:
    . /usr/local/etc/virtualenvs/sentry/bin/activate && env SENTRY_CONF='/usr/local/etc/sentry' sentry cleanup --days 749 --project 5 && deactivate

    The leading dot (.) is an alternative to source that is available in /bin/sh (the environment of cron), not only in /bin/bash. This avoids having to set the environment variable SHELL='/bin/bash' in the crontab.

    The resulting cron entry would be:

    20 3 28 * * . /usr/local/etc/virtualenvs/sentry/bin/activate && env SENTRY_CONF='/usr/local/etc/sentry' sentry cleanup --days 749 --project 5 && deactivate

    It isn't a bad idea to add a fallback cleanup command the day after, so if you forget to clean up logs for a specific project it will be done automatically:

    20 3 29 * * . /usr/local/etc/virtualenvs/sentry/bin/activate && env SENTRY_CONF='/usr/local/etc/sentry' sentry cleanup --days 749 && deactivate

    Now even your Sentry logs are GDPR compliant. The power of this method is that you can set a different cleanup limit for every project, according to its policies. And you don't have to use any proprietary software to do this, just free/libre open source software.

    If you are in a hurry to publish privacy policies and you have dedicated hosting, give JournaKit legalazy on GitHub a try.

    * Plus it’s written on top of Django.

    When your wireless interface is working but Ethernet isn't on Ubuntu, here's a quick howto to check and fix a misconfiguration. It doesn't solve every Ethernet issue, but you can give it a try; on an Asus laptop (with a JMicron chipset) I worked on, it did the job.

    Tested on Ubuntu 16.04 LTS

    First steps

    To detect the Ethernet interface, list the network interfaces (ip link is one way to do it):

    ip link
    To check and configure the connection, install ethtool:

    apt-get install ethtool

    To save the current status of the network interface:

    ethtool ens5f5 > ethernet_before.txt

    Make the Ethernet interface work

    ethtool -s ens5f5 speed 1000 duplex full autoneg on

    or, if the link doesn't come up at gigabit speed, force 100 Mbit/s:

    ethtool -s ens5f5 speed 100 duplex full autoneg on

    Then, to see the difference between the old non-working configuration and the one that works:

    ethtool ens5f5 > ethernet_after.txt
    diff ethernet_before.txt ethernet_after.txt

    If it doesn't work, try other ways, e.g. looking for known issues with your specific Ethernet driver:

    lspci | grep Ethernet

    or

    lspci | grep ethernet

    to check your driver.

    If the issue reappears after reboot, make the command run on startup:

    sudo bash
    crontab -e

    And add:

    @reboot /sbin/ethtool -s ens5f5 speed 100 duplex full autoneg on

    Now reboot to check if the changes take effect.
