Nginx purge (invalidate) cache

How to purge a record from the nginx cache via an HTTP request.
Today we installed the excellent ngx_cache_purge nginx module in production: https://github.com/FRiCKLE/ngx_cache_purge/
Here is how to use it, with a real-life example:

1. Our cache location and proxy settings:

proxy_cache_path /var/cache/nginx/proxy_cache_quick levels=1:2 keys_zone=quick_cache:300m max_size=2m inactive=7d;
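A matching purge location is needed as well. A minimal sketch of what ngx_cache_purge expects might look like this (the /purge prefix and the allowed address are assumptions; the purge key must match your proxy_cache_key):

```nginx
# Hypothetical purge endpoint; restrict it to trusted hosts only.
location ~ /purge(/.*) {
    allow 127.0.0.1;   # only allow purging from localhost (adjust to taste)
    deny  all;
    proxy_cache_purge quick_cache "$scheme$host$1";
}
```

A request like `curl http://your-site/purge/some/path` would then evict the cached entry for `/some/path`.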


How to test CDN delivery speed via curl

Our company runs its own CDN based on nginx caching: 7 heavily loaded servers (40,000 RPS per server) in 2 datacenters.
Periodically I observe deviations in delivery time, from 0.15 s up to 7.5 or even 30 seconds.
We have an nginx SLA module plus graphs and monitoring, but I needed a way to test all servers for anomalous delivery times.

#!/bin/bash

for l in ip1.x.x.x \
         ip2.x.x.x \
         .... \
         ipN.x.x.x; do

    echo "$l"

    for i in {1..1024}; do
        # note: --resolve needs host:port:address
        curl -s -w "%{time_total} -- %{time_connect}\n" -o /dev/null \
             --resolve "it.randomthemes.com:80:$l" \
             http://it.randomthemes.com/favicon.ico >> "./$l.txt"
    done

done

Then analyse the resulting ipN.x.x.x.txt files any way you like:

sort -n ipN.x.x.x.txt | tail -n 25

etc.
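For a quick look at the worst outliers, a small sort/awk sketch (the sample file and timing values below are invented for illustration):

```shell
#!/bin/sh
# Stand-in for one of the ipN.x.x.x.txt files produced above (values invented)
printf '0.15 -- 0.01\n7.50 -- 0.02\n0.20 -- 0.01\n' > /tmp/sample.txt

# Slowest requests first (sorted numerically by total time, the first field)
sort -rn /tmp/sample.txt | head -n 2

# Average total time across all requests
awk '{ sum += $1 } END { printf "avg: %.2f\n", sum / NR }' /tmp/sample.txt
```

For the three sample values the awk line prints `avg: 2.62`; a single 7.50 against such an average is exactly the kind of anomaly worth chasing.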

Have a nice day. I really like curl and hope this will help someone.

How to securely wipe a file system

Before you cancel a rented dedicated server, it's good practice to securely wipe its disk drives. Reboot into the recovery console, and:
Use shred, Luke!

shred -n 0 -f -v -z /dev/sda
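Note that `-n 0 -z` means zero random passes plus a single final zeroing pass, which is quick; raise `-n` if you want extra overwrite passes. The same idea can be tried safely on a throwaway file instead of a block device (the file path below is just an example; `-u` unlinks the file after overwriting):

```shell
#!/bin/sh
# Demo on a scratch file, not a real disk: overwrite, zero, then unlink
echo "secret data" > /tmp/wipe.demo
shred -u -z /tmp/wipe.demo

# The file is gone after shredding
ls /tmp/wipe.demo 2>/dev/null || echo "gone"
```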

ext4 performance tuning

I use the following mount options.
In some projects they give a significant performance boost.

errors=remount-ro – needed for the hardware-failure case: if the disk stays mounted read-write, further write attempts can fatally damage the file system. It also makes monitoring easy: just check via Zabbix or Nagios that no file system has gone read-only.

noatime, nodiratime – don't update access times. Double-check that your applications don't need them.

discard – use TRIM on SSD drives. On SATA or SAS spinning disks this option is ignored by the system.
commit, nobarrier – dangerous in case of a power outage, but acceptable for my infrastructure.

ext4 errors=remount-ro,noatime,nodiratime,commit=100,discard,nobarrier

And a sed one-liner for fixing /etc/fstab (I push it out with Puppet, Chef, or Fabric):

sed -r -i 's/ext4\s+defaults/ext4 errors=remount-ro,noatime,nodiratime,commit=100,discard,nobarrier/' /etc/fstab
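To sanity-check the substitution before touching a real fstab, one can run it against a scratch copy first (the device UUID and mount point below are made up):

```shell
#!/bin/sh
# Fake fstab line for testing the substitution (device UUID is invented)
echo 'UUID=0000-TEST / ext4 defaults 0 1' > /tmp/fstab.test

# Same GNU sed expression as above, applied to the scratch file
sed -r -i 's/ext4\s+defaults/ext4 errors=remount-ro,noatime,nodiratime,commit=100,discard,nobarrier/' /tmp/fstab.test

cat /tmp/fstab.test
```

The line comes back as `UUID=0000-TEST / ext4 errors=remount-ro,noatime,nodiratime,commit=100,discard,nobarrier 0 1`.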

nginx error page depending on user browser language

Task – return different pages depending on the user's browser language,
i.e. different HTML when the backend returns an error. For the domain it.randomthemes.com, always return the English error page on backend errors.

nginx.conf

map $http_accept_language $lang {
    default en;
    ~ru     ru;
}

Server context:

set $ep /50x.html;  # default error page

if ( $host ~* it.randomthemes.com ) {
    set $ep /50x.en.html;
}

if ( $lang ~* en ) {
    set $ep /50x.en.html;
}

error_page  503          /dinner.html;
error_page  500 502 504  $ep;
error_page  400          /400.html;

nginx 301 redirect for an entire domain

Task – redirect all requests from old-domain.com to new-domain.com.
Use nginx, Luke! It's simple.

server {
        server_name old-domain.com www.old-domain.com;
        rewrite ^/(.*)$ http://new-domain.com/$1 permanent;
}
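The same redirect is often written with return instead of rewrite, which skips the regex engine entirely and preserves the query string via $request_uri (behaviour is essentially equivalent for this catch-all case):

```nginx
server {
        server_name old-domain.com www.old-domain.com;
        return 301 http://new-domain.com$request_uri;
}
```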

Some cute bash file comparison

Task – generate a routes file for OpenVPN from a list of our networks.
If the generated file differs from the current route script, replace it and perform some action. This kind of task comes up very often.
The code is very simple:

awk '{print "push \"route " $1 "\" "}' /etc/ipfw.list > /root/test1
if [ "$(diff /root/test /root/test1 | wc -l)" -eq 0 ]; then
    echo "no difference"
else
    echo "differ"
    rm -f /root/test
    mv /root/test1 /root/test
fi
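The diff-and-count check can also be expressed with cmp -s, which compares byte by byte and reports via its exit status. A sketch (paths under /tmp and the route contents stand in for /root/test, /root/test1, and the real networks):

```shell
#!/bin/sh
# Two generated route files that differ (contents are invented examples)
printf 'push "route 10.0.0.0"\n' > /tmp/test
printf 'push "route 10.0.0.0"\npush "route 192.168.0.0"\n' > /tmp/test1

# cmp -s is silent and returns 0 only when the files are identical
if cmp -s /tmp/test /tmp/test1; then
    echo "no difference"
else
    echo "differ"
    mv -f /tmp/test1 /tmp/test
fi
```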

Generate a unique request ID in nginx

Task – add a unique ID to each user request. The external nginx request-ID modules proved very unstable, so I wrote a small Perl snippet to generate a UUID and add it as a header.
nginx's embedded Perl is extremely fast and works very well in heavily loaded production systems.

Required packages:

aptitude install libdata-uuid-perl

/etc/nginx/nginx.conf

http {
...
    perl_require "Data/UUID.pm";

    perl_set $uuid 'sub {
        my $ug = Data::UUID->new;
        return $ug->create_str();
    }';
...
}

Location config:

location ~ /data/(.+) {
    ...
    proxy_set_header    X-Request-Id    $uuid;
    ...
}

Mail 550 filter

If you run a project with a huge amount of email notifications, you MUST track the number of 550 replies from mail servers. If you skip this step and keep sending to deleted mailboxes, big mail providers such as gmail.com, mail.ru, mail.ua, etc. will ban your domain at a 0.5 to 1% “user unknown” reply rate.
So parsing mail.log is the only real solution.
In our project we add bad email addresses to a database table (we use PostgreSQL).

1. Create a database replace rule: if the email is already there (email is the primary key), the insert is silently skipped. This is the fastest way to prevent errors when INSERTing duplicate email addresses.
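In modern PostgreSQL (9.5+), the replace rule the post describes can be sketched with ON CONFLICT instead; the table and column names below are assumptions:

```sql
-- Hypothetical table of known-bad addresses; email is the primary key
CREATE TABLE IF NOT EXISTS bad_emails (
    email      text PRIMARY KEY,
    first_seen timestamptz NOT NULL DEFAULT now()
);

-- A duplicate insert is silently skipped instead of raising an error
INSERT INTO bad_emails (email) VALUES ('user@example.com')
ON CONFLICT (email) DO NOTHING;
```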

Some MySQL query optimization: DISTINCT, GROUP BY, etc. :)

Our previous developer used a standard SQL guide to create the query for selecting the “top 50 referrers”:

SELECT referrer FROM referrers_log
WHERE (create_date = curdate() OR create_date = curdate()-1) AND site_id = 123
GROUP BY referrer ORDER BY SUM(views_count) DESC LIMIT 50;

50 rows in set (26.31 sec)

It put significant load on our database.

Rewrite the query to use a temporary table with distinct referrers:

DROP TEMPORARY TABLE IF EXISTS REF;
CREATE TEMPORARY TABLE REF AS (SELECT DISTINCT referrer FROM referrers_log WHERE (create_date = curdate() OR create_date = curdate()-1) AND site_id = 123);
SELECT REF.referrer, SUM(views_count) FROM referrers_log, REF
WHERE referrers_log.referrer = REF.referrer
AND (create_date = curdate() OR create_date = curdate()-1) AND site_id = 123
GROUP BY REF.referrer ORDER BY SUM(views_count) DESC LIMIT 50;

50 rows in set (2.48 sec)

I feel happy :) :) :)