
PuTTY freezes on OpenVPN on Windows

Issue: when opening a PuTTY session over the VPN, no login prompt is returned and the connection is then closed.

Solution (Windows 7):

  1. Right click the OpenVPN icon on the taskbar, then Disconnect the VPN
  2. Find the OpenVPN shortcut icon, then Right click on it > Properties
  3. Select the Compatibility tab and check Run as administrator
  4. Launch OpenVPN: a prompt will ask you to allow the app to run as administrator

Stop sending Microsoft information about your system

These steps will help you block some of the error reporting to Microsoft.

To help you find error reporting issues, install and run the free Kaspersky Security Scan: it will show you a list of issues affecting the PC that other antivirus products usually don’t report.

One of them is the notification sent to Microsoft about the system state in situations like crashes. To stop sending Microsoft information like this, follow these steps.

  1. On Windows 7, search for “Action Center” in Windows > Search (or its name in your language, e.g. “Centro operativo” in Italian)
  2. Go to the second item in the left bar, “Change Action Center settings”
  3. Under the related settings, open the second item, about problem reporting
  4. Check the very last option (Never check for solutions)
  5. Back in the previous screen, check that the first item, about software usage reporting, is disabled

Now take care of Microsoft Internet Explorer:

  1. On Windows 7, run gpedit.msc
  2. Go to User Configuration > Administrative Templates > Windows Components > Internet Explorer
  3. Double click Turn off Crash detection, then select “Enabled”
  4. Under Browser menus, enable “Turn off the ability to launch report site problems using a menu option”

If you don’t use Internet Explorer as your main browser, also disable this under User Configuration > Administrative Templates > Windows Components > Internet Explorer:

  1. In the starting page option, check disable and set the starting page to about:blank
  2. Run Internet Explorer and confirm the dialog about about:blank as the default page
  3. Standard users now cannot change the default starting page

Now open Internet Explorer:

  1. Go to the gear icon (top right in IE 11) > Internet Options > Advanced > Security
  2. Select “Do not save encrypted pages to disk”

Now go back to Kaspersky Security Scan, open Reports and refresh the list of issues. Note that if you have another antivirus, like Avira, Kaspersky will report that autorun is active even when Avira blocks it, so in that case you can ignore these warnings.


Reduce Time to the First Byte – TTFB on web applications

What causes a long time to first byte, and how can you speed it up? The main causes are network-related and server-side, and I will focus on the server-side ones. I’m not covering any specific CMS here, but you can apply some of these techniques starting from how to interpret the browser timing.

Get reliable timing

Take a website with caching enabled: by the nth visit to a page you can be sure the page is in cache, the connection with the webserver is alive, the SSL/TLS connection is established, the SQL queries are cached and so on. Open the network tab and enjoy your site speed: well, very few real users will experience that speed.

Here is a comparison between a first-time, no-cache connection to an nginx webserver, explored with Chrome (F12 > Network > Timing), and a second request for the same page refreshed right after the first:

[Figures 1 and 2: Chrome network timing of a first-time request vs. an immediately repeated request]

I got +420% on a first-time request compared with the connected-and-cached case. To obtain a reliable result (1st figure) you should usually:

  • Wait several seconds after the previous call before doing anything, so the webserver closes the connection with the client
  • Add a ?string to the URL of the page you’re visiting, changing the string every time you want a fresh page
  • Press Ctrl+Shift+R to reload the page
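
The steps above can be scripted with the Python standard library alone. This is a rough sketch: the URL is a placeholder, and the cache-busting helper mirrors the ?string trick from the list:

```python
import time
import urllib.request
import uuid

def bust(url):
    """Append a unique query string, so URL-keyed caches see a fresh page."""
    sep = '&' if '?' in url else '?'
    return url + sep + uuid.uuid4().hex

def rough_ttfb(url, timeout=10):
    """Approximate time to first byte: request the page, read one byte.

    Note: this includes connection setup (DNS, TCP, SSL/TLS) on top of
    the waiting time, so it is an upper bound on the TTFB Chrome shows."""
    start = time.perf_counter()
    with urllib.request.urlopen(bust(url), timeout=timeout) as response:
        response.read(1)  # first byte received
    return time.perf_counter() - start

# Needs network access; the URL is a placeholder:
# print(f"{rough_ttfb('https://example.com/') * 1000:.1f} ms")
```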

This technique bypasses the Django view cache and similar cache systems in other frameworks. To check the framework cache impact, press Ctrl+Shift+R right after the first request, obtaining a result similar to the 2nd figure. There are better ways to do the same; this is the easiest.

Break up the time report

Unpack the time report of the first-time request:

  • Connection setup (15% of the elapsed time in the example)
    • Queueing: slight, nothing to do.
    • Stalled: slight, nothing to do.
    • DNS lookup: slight, nothing to do.
    • Initial connection: significant, skip for now.
    • SSL: significant, the client establishes an SSL/TLS connection with the webserver. Disabling ciphers or tuning SSL can reduce the time, but the priority here is the best security for the visitor, not pure speed. However, take a look at this case study if you want to tune SSL/TLS for speed.
  • Request / response (85% of the elapsed time in the example)
    • Request sent: slight, browser-related, nothing to do.
    • Waiting (TTFB): significant, the time to first byte is the time the user waits after the request was sent to the web server. The waiting time includes:
      • Framework elaboration.
      • Database queries.
    • Content Download: significant; page size, network, server and client related. To speed up the content download of an HTML page you should add compression: here is a howto for nginx and one for Apache webservers. These cover proxy servers; applying compression directly on a virtualhost is even simpler, and the performance gain is huge.

Not surprisingly, most of the time of a first-time request is spent on Request / response rather than on connection setup. Among the Request / response times, Waiting (TTFB) is the prominent one. Luckily, it is also the segment covered by the framework’s cache mechanisms, and consequently the one most eroded when passing from the first figure (not cached) to the second (cached by the framework). To erode the TTFB, database queries and elaboration must be optimized.

Optimize elaboration: program optimization

When Google, the web giant behind the most used web search engine in history, tried to suggest some tips on optimizing PHP to programmers, the reaction was bad, from everyday programmers all the way up to the PHP team.

In a long response, the PHP team taught Google how to program the web, offering unsolicited advice and “some thoughts aimed at debunking these claims”, with stances like “Depending on the way PHP is set up on your host, echo can be slower than print in some cases”, a totally confusing comment for a real-world programmer.

Google took the PHP performance page offline; it could be misleading, but it still contains valid optimization tips, especially if you compare it with some of the comments on php.net itself. Google has an interest in speed and code optimization, and the writer had the know-how to talk about it; the PHP team here just wanted to be right and defend their language, and starting from good points they crossed the line of scientific dialectic.

Program optimization mottos are:

Look for the language that best suits your work and the best tools you can get, and look to real-world programmers sharing their approaches to program optimization.

The PHP team’s whining will not change the fact that avoiding SQL inside a loop, as the Google employee suggested, is the right thing to do to enhance performance. And this leads to database optimization.

Dude, where is my data?

The standard web application nowadays has this structure:

A typical web application

A typical web application: the application server runs the application, so from now on, oversimplifying, I will treat application and application server as synonyms.

After the client requests pass through the firewall, the webserver serves static files and asks the application server for the dynamic content.

The cache server can serve the application or the web server, but in this example the former is in control: an example of cache controlled by the application is in the Django docs about Memcached; examples of cache controlled by the web server are the HTTP Redis module and the standard use of the Varnish cache.

The database server (DBMS) stores the structured data for the application. In standard use cases, the DBMS can be optimized with little effort. Harder to optimize is the way the web application gets its data from the database.

Database query optimization: prefetch and avoid duplicates

To optimize database queries you have to check the timing, again. Depending on the language and framework you are using, there are tools to get information about the queries to optimize.

Since I’m using Python, I go with the Django Debug Toolbar, a de facto standard for application profiling. Here is a sample of SQL query timing on a PostgreSQL database:

Timing of SQL queries in the Django Debug Toolbar.

The total time elapsed on queries is 137.07 milliseconds, and the total number of queries executed is 90. Among these, 85 are duplicates. Below each query you’ll find how many times the same query is executed. The objective is to reduce the number of queries executed.

If you’re using Django, create a manager in your models.py and use it like this:

class GenericManager(models.Manager):
    """
    prefetch_related: join via ORM
    select_related: join via database
    """
    # Forward relation (organizer) and reverse relation (photo_set) to prefetch
    related_models = ['organizer', 'photo_set']

    def per_organizer(self, orgz, **kwargs):
        return self.filter(organizer=orgz)

class People(models.Model):
    name = models.CharField(max_length=50)
    ...

class Party(models.Model):
    organizer = models.ForeignKey('People')
    objects = GenericManager()

class Photo(models.Model):
    party = models.ForeignKey('Party')
    ...

Then in views.py call your custom method on GenericManager:

def all_parties(request, organizer_name):
    party_organizer = People.objects.get(name=organizer_name)
    all_parties = Party.objects.per_organizer(party_organizer)
    return render(request, 'myfunnywebsite/parties.html', {
        'parties' : all_parties
    })

When you want to optimize data retrieval for Party, instead of combing through the objects.filter() calls in views.py you will fix only the per_organizer method, like this:

class GenericManager(models.Manager):
    """
    prefetch_related: join via ORM
    select_related: join via database
    """
    related_models = ['organizer', 'photo_set']

    def per_organizer(self, orgz, **kwargs):
        ret = self.filter(organizer=orgz)
        return ret.prefetch_related(*self.related_models)

With prefetch_related, the queries are grouped by the ORM and all the objects are available, avoiding many duplicate queries. Here is the result of this first optimization:

[Figure: Django Debug Toolbar SQL panel after the optimization]

  • The number of queries dropped from 90 to 45
  • The query execution time dropped from 137.07 ms to 80.80 ms (-41%)

An alternative method is select_related, but in that case the ORM will produce a JOIN, and the above code will raise an error because photo_set is not accessible this way. If your models are structured in a way that gives better performance with select_related, go with it, but remember this limitation. In this use case the results of select_related were worse than those of prefetch_related.

Recap:

  • TTFB can be a symptom of server-side inefficiency, but you have to profile your application server-side to find out
  • Check the SQL timing
  • Reduce the number of queries
  • Optimize the application code
  • Use cache systems; memory-based ones (Redis, Memcached) are the fastest
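
On the last point, the get/set-with-timeout pattern of Memcached and Redis can be sketched in a few lines of plain Python. This is a toy, in-process version for illustration only; a real setup uses a cache server shared between application processes:

```python
import time

class TTLCache:
    """Minimal in-memory cache with per-entry expiry, in the spirit of a
    memcached/redis SET with a timeout. Toy code for illustration only."""

    def __init__(self, clock=time.monotonic):
        self._data = {}
        self._clock = clock  # injectable clock, handy for testing

    def set(self, key, value, timeout=300):
        # Store the value together with its absolute expiry time
        self._data[key] = (value, self._clock() + timeout)

    def get(self, key, default=None):
        entry = self._data.get(key)
        if entry is None:
            return default
        value, expires = entry
        if self._clock() >= expires:
            del self._data[key]  # expired: drop it and miss
            return default
        return value
```

Django’s cache framework exposes the same style of API, cache.set(key, value, timeout) and cache.get(key), on top of the real backends.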

In my experience, inefficient code plus a lot of cache is a frail solution compared with the right balance between caching and query + program optimization.

If you’ve tried everything and the application is still slow, consider rewriting it, or even changing the framework you’re using if speed is critical. When every optimization failed, I went from Drupal 6 to a fresh Django 1.8 installation, and Google noticed the difference in the milliseconds needed to download the pages during indexing:

[Figure: time spent downloading a page during Google indexing]

Since you can’t win a fight against windmills, a fresh start may be the only effective option on the table.


How to find big files on disk

On Windows: WinDirStat

  • Download and install WinDirStat
  • Run WinDirStat on your disks (it will take some time)
  • You’ll see a coloured map of disk usage by file type


On Linux command line: ncdu

  • On Ubuntu / Debian
    • apt-get install ncdu
    • cd /dir/to/check
    • ncdu
  • On CentOS / Fedora / RedHat
    • yum install ncdu
    • cd /dir/to/check
    • ncdu

ncdu screenshot by dev.yorhel.nl: Official Website

On Linux with window manager

  • On Ubuntu / Debian
    • apt-get install k4dirstat
  • On CentOS / Fedora / RedHat
    • yum install k4dirstat

Again, you’ll see a coloured map of disk usage by file type.
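
If you cannot install anything, a few lines of Python approximate the core job of these tools, listing the largest files under a directory (the report, not the coloured map):

```python
import os

def biggest_files(root, top=10):
    """Walk a directory tree and return the `top` largest files as
    (size_in_bytes, path) pairs, largest first."""
    sizes = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                sizes.append((os.path.getsize(path), path))
            except OSError:
                pass  # vanished or unreadable file: skip it
    return sorted(sizes, reverse=True)[:top]

# for size, path in biggest_files('/dir/to/check'):
#     print(f"{size / 1024**2:8.1f} MiB  {path}")
```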

Official website

 


Screenshot by yuenhoe.com


How to clean the Power Supply Unit of desktop PC

What you need

  • Screwdrivers (at least a Phillips-head)
  • Dust mask
  • Earmuffs (recommended)
  • Air compressor
  • Angle nozzle air blow gun
  • A garage, lab or open space easy to ventilate

What to do

  1. Shut down the PC, unplug it from the electricity supply and remove the computer case
  2. Unplug the power cable from the PSU and clean the cable sheath with a dust cloth
  3. Remove the Power Supply Unit using a Phillips-head screwdriver, holding the PSU firmly to avoid damaging the motherboard and other components
  4. Wear a dust mask and, if needed, earmuffs
  5. Power on the air compressor and clean the PSU fan, cables and connectors
  6. If there are additional ventilation holes, clean them with short blows until you see the internal components shine
  7. Clean the external chassis of the PSU with a dust cloth, then insert the PSU back into the computer case, holding it firmly while you tighten the screws

Django: fundamental tools for new Python developers

Developing in Django can be confusing for a new Python developer, and using Windows to develop in Django can be a major obstacle too. This list aims to give some reference points to a new Python developer.

Back in 1998 I started developing applications for the web using ASP and PHP, and dependencies weren’t an issue, since those languages are built for the web.

Developing in Python is more challenging, and really more fun, than programming in PHP. You have a powerful multipurpose language with a ton of libraries, competing in a far larger arena than web development. Not surprisingly, Google uses this language extensively, as do some popular web services like Pinterest and Instagram: these last two use Django.

Here are some tools to help you do a better job as a Django and Python developer.

Bitnami Django Stack

For developers using Windows, the Bitnami Django Stack is a life-saver. It relieves you of the need to install and configure many libraries and simply creates a Python / Django environment on your system.

PyCharm

[Screenshot: the PyCharm IDE]

Screenshot: official website

JetBrains’ PyCharm is the multiplatform IDE for developing in Python. You can forget about indentation issues and focus on programming. The autocomplete dropdown, the Python console, the easy management of DVCS systems (Git, Mercurial) and the easy access to Python package repositories make it the tool for Python programming, especially on Windows, where there are fewer alternatives than on Linux. On Windows, rely on the Bitnami Django Stack you’re using to load the right libraries.

PyPI – Cheese Shop

PyPI is the repository of Python packages. Since “PyPI” is nearly unpronounceable, you can call it the Cheese Shop. Python was named by Guido van Rossum after the British comedy group Monty Python, and the Cheese Shop is this sketch:

Unlike the poor guy in the sketch, you will find all the sorts of cheese you need in this cheese shop.

Pip

Pip is the definitive tool for installing Python packages from the Cheese Shop into your environment. Run pip install package-name and you’ll get the package ready and running. Even more interesting is the pip install -r requirements.txt feature: it installs all the packages listed in the requirements.txt text file, usually shipped with packages that have dependencies.
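
As an example, a hypothetical requirements.txt for the kind of stack discussed earlier might look like this (the package names are real, the version pins are illustrative):

```
Django>=1.8,<1.9
psycopg2>=2.6
django-debug-toolbar>=1.3
```

Running pip install -r requirements.txt in the project directory resolves and installs all three, with their dependencies.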

PgAdmin

[Screenshot: PgAdmin properties dialog]

Screenshot: official website

Django and PostgreSQL DBMS are a powerful couple. If you have to use a PostgreSQL database, the best interface you can use is PgAdmin.

Django Packages

Django Packages is the Hitchhiker’s Guide to the Cheese Shop. You have to choose a REST framework but you don’t want to marry an unreliable partner? You need a good photo gallery and you want the best Django app to plug into your Django project? Django Packages will guide you to the best solution for your needs.

[Screenshot: a Django Packages comparison grid]

Every feature has a comparison matrix, where all the projects are listed in columns and these criteria, elaborated from GitHub, are considered:

  • Project status (production, beta, alpha)
  • Commit frequency in the repository
  • How many times the project was forked
  • Who works on the project
  • Link to online documentation
  • Features comparison

If you’re coming from a CMS like Drupal, here are some tips on how to approach a Model-View-Controller framework like Django, starting from the Entity-Relationship model.

Read also on the same topic: Django development on Virtualbox: step by step setup

clonezilla

Disk to disk copy to SSD with Clonezilla of a Windows 7 disk with multiple partitions

This howto is also valid for a copy from a larger HDD to a smaller SSD.

It worked fine for me, but follow these steps at your own risk.

What you need:

  1. GParted live
  2. Clonezilla live
  3. Windows 7 disk

Part 1: Plug SSD to SATA

  1. Leave the old HDD where it is
  2. Place the SSD to a free SATA port (e.g. SATA 2 plug)
  3. Plug the PSU power connector into the SSD

Part 2: Shrink partitions with GParted

  1. Do not trust the claimed capacity of the SSD: calculate the real capacity of your new SSD manually or with a tool like this
  2. Boot with the GParted disk
  3. Shrink the last partition on the disk to leave enough unallocated space to fit the SSD’s real capacity: the sum of all the partitions must be less than the real capacity of the new disk. Keep some margin; you’ll resize the partition later (Part 5).
  4. Remove the GParted disk and reboot
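
The real-capacity check in step 1 is mostly unit arithmetic: disk vendors count 1 GB as 10^9 bytes, while GParted reports GiB (2^30 bytes). A quick Python sketch:

```python
def advertised_to_gib(advertised_gb):
    """Convert a vendor's 'GB' (10**9 bytes) into the GiB (2**30 bytes)
    shown by partitioning tools like GParted."""
    return advertised_gb * 10**9 / 2**30

# A "120 GB" SSD leaves roughly 111.8 GiB for your partitions:
print(f"{advertised_to_gib(120):.1f} GiB")  # 111.8 GiB
```

The actual usable space also depends on the drive’s exact sector count, which is why the step above tells you to keep some margin.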

Part 3: Clone with Clonezilla

  1. Boot from the Clonezilla disk
  2. Select Disk to disk clone: the HDD as source, the SSD as target
  3. Select Expert mode
  4. In the advanced parameters list, look for the -icds option (Skip checking destination disk size before creating partition table) and check it with the space bar: without this option, you cannot copy a larger disk onto a smaller one
  5. Tip: some suggest checking the -k1 option to recreate the partitions to fit the disk: in my case it led to a segmentation fault in Clonezilla, so I suggest not using -k1 and fixing the partitions later
  6. You will get a command like this:
    /usr/sbin/ocs-onthefly -g auto -e1 auto -e2 -j2 -r -icds -f sda -t sdb
  7. Let Clonezilla clone partitions from HDD to SSD
  8. Shutdown when asked
  9. Power down the computer then connect the SSD to SATA 1 in place of the old HDD
  10. Power up the computer
  11. If you get a Blue Screen of Death (BSOD) after boot, follow the steps in Part 4; if not, skip to Part 5

Part 4: Fix boot

  1. Insert the Windows 7 disk and boot from it (a comprehensive howto here)
  2. Select Repair your computer among the system recovery options (do not run the automatic repair)
  3. Select Command prompt from System recovery options
  4. Type these commands:
    1. bootrec.exe /fixmbr
    2. bootrec.exe /fixboot
    3. bootrec.exe /rebuildbcd
  5. Reboot
  6. Wait for the disk scans
  7. Log in to Windows: a new driver for the SSD will be installed
  8. Reboot when asked
  9. Log in again: your system is ready and running on SSD

 

Part 5: Occupy all available space

  1. Now you can do the reverse of Part 2: insert the GParted disk and boot from it
  2. Reclaim the unallocated space by expanding the last partition to fill it
  3. Reboot and enjoy your new fast disk