Author Archives: KsI - Page 4

nginx 301 redirect entire domain

Task – redirect all requests from old-domain.com to new-domain.com.
Use nginx, Luke! It's simple.

server {
        server_name old-domain.com www.old-domain.com;
        rewrite ^/(.*)$ http://new-domain.com/$1 permanent;
}
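On current nginx versions the same redirect is usually written with `return`, which skips the regex engine entirely. A minimal equivalent of the block above:

```
server {
        server_name old-domain.com www.old-domain.com;
        return 301 http://new-domain.com$request_uri;
}
```

`$request_uri` carries the full original path and query string, so deep links survive the redirect.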

Some cute bash compare

Task – generate a routes file for OpenVPN from a list of our networks.
Check if the generated file differs from the current route script; if it does, replace it and do some action. This task occurs very often.
The code is very simple:

awk '{print "push \"route " $1 "\""}' /etc/ipfw.list > /root/test1
if [ "$(diff /root/test /root/test1 | wc -l)" -eq 0 ]; then
    echo "no difference"
else
    echo "differ"
    mv -f /root/test1 /root/test
fi
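A quick check of the awk transform on a hypothetical sample of /etc/ipfw.list (the one-network-per-line format is an assumption):

```shell
# fake input: one network per line
printf '10.1.0.0/16\n192.168.5.0/24\n' > /tmp/ipfw.list

# same awk program as above: wrap each network in an OpenVPN push directive
awk '{print "push \"route " $1 "\""}' /tmp/ipfw.list
# → push "route 10.1.0.0/16"
# → push "route 192.168.5.0/24"
```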

Generate Unique Request ID nginx

Task – need to add a unique ID to each user request. The external nginx request-ID module is very unstable, so I wrote a small Perl script to generate a UUID and add it to a header.
nginx embedded Perl is extremely fast and works very well in highly loaded production systems.

Required packages:

aptitude install libossp-uuid-perl

/etc/nginx/nginx.conf

http {
...
    perl_require "Data/UUID.pm";

    perl_set $uuid 'sub {
        my $ug = Data::UUID->new;
        return $ug->create_str();
    }';
... }

Location config:

    location ~ /data/(.+) {
...
...
            proxy_set_header    X-Request-Id    $uuid;
...
}
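Once `$uuid` is set it can also be written to the access log, so a request can be correlated between nginx and the backend. A sketch (the log format name and fields here are an assumption, not from the original config):

```
log_format with_reqid '$remote_addr - [$time_local] "$request" $status reqid=$uuid';
access_log /var/log/nginx/access.log with_reqid;
```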

Mail 550 filter

If you run a project with a huge amount of email notifications, you MUST control the number of 550 replies from mail servers. If you skip this step and keep sending to deleted mailboxes, big mail providers such as gmail.com, mail.ru, mail.ua, etc. will ban your domain at a 0.5–1% “user unknown” reply rate.
So parsing mail.log is the only solution.
In our project we add bad email addresses to a database table (we use PostgreSQL).

1. Make a database replace rule: if the email is already added (email is the primary key), skip it. This is the fastest way to prevent errors on INSERTs of duplicate email addresses.
Read more »
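The replace rule from step 1 can be sketched like this in PostgreSQL (table and rule names are hypothetical; on PostgreSQL 9.5+ `INSERT ... ON CONFLICT (email) DO NOTHING` achieves the same without a rule):

```
CREATE TABLE bad_emails (
    email    text PRIMARY KEY,
    added_at timestamp DEFAULT now()
);

-- silently skip duplicates instead of raising an error on INSERT
CREATE RULE bad_emails_ignore_dup AS
    ON INSERT TO bad_emails
    WHERE EXISTS (SELECT 1 FROM bad_emails WHERE email = NEW.email)
    DO INSTEAD NOTHING;
```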

Some MySQL query optimization: DISTINCT, GROUP BY, etc. :)

Our previous developer used a standard SQL guide to create the query for selecting the “top 50 referrers”:

SELECT referrer  FROM referrers_log WHERE (create_date = curdate() OR create_date=curdate()-1) AND site_id = 123  GROUP BY referrer ORDER BY SUM(views_count) DESC LIMIT 50;

50 rows in set (26.31 sec)

It significantly loaded our database.

Rewrite the query to use a temporary table with distinct referrers:

DROP TEMPORARY TABLE IF EXISTS REF;
CREATE TEMPORARY TABLE REF AS (SELECT DISTINCT referrer FROM referrers_log WHERE (create_date = curdate() OR create_date=curdate()-1) AND site_id = 123);
SELECT REF.referrer,SUM(views_count) FROM referrers_log,REF
WHERE referrers_log.referrer=REF.referrer
AND (create_date = curdate() OR create_date=curdate()-1) AND site_id = 123
GROUP BY REF.referrer ORDER BY SUM(views_count) DESC LIMIT 50;

50 rows in set (2.48 sec)

I feel happy 🙂 🙂 🙂

github backup script

Our project repos are hosted on GitHub.
Here is how I make backups (clone all repos using cron + the GitHub API). There is probably another way to do backups, but I haven't found it.

#!/usr/bin/ruby
require 'rubygems'
require 'octokit'

git_binary   = '/usr/bin/git'
git_login    = 'LOGIN'
git_password = 'PASSWORD'
clone_path   = '/home/git'

client = Octokit::Client.new(:login => git_login, :password => git_password)

repos = client.organization_repositories('ORGANIZATION')

# start from a clean tree so repos deleted on GitHub do not linger locally
system("rm -rf #{clone_path}/ORGANIZATION")

repos.each do |repo|
  # --mirror makes a bare clone with every branch and tag
  system("#{git_binary} clone --mirror https://#{git_login}:#{git_password}@github.com/#{repo.full_name} #{clone_path}/#{repo.full_name}")
end
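To run it from cron as mentioned above, a crontab entry like this works (the script path and log path are assumptions):

```
# nightly GitHub backup at 03:00
0 3 * * * /usr/local/bin/github_backup.rb >> /var/log/github_backup.log 2>&1
```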

Mysql can do partitioning on the fly :)

MySQL tuning in action…
Yesterday our development team and I did some tuning on one old project: it has a couple of tables with 1–10 million records.
It's NOT BIGDATA, but the application does heavy writes to these tables.
Only current-date records are queried, but, as you understand, partitioning still affects index size.
The table can be partitioned by a date field. Here is how to do it:

  1. Make sure that dt is NOT NULL.
  2. Recreate the primary key: the partitioning field must be part of the primary key.
  3. Create one partition for old years and a partition for every month.
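The three steps above can be sketched as follows (table, column, and partition names are hypothetical):

```
ALTER TABLE stats MODIFY dt DATE NOT NULL;
ALTER TABLE stats DROP PRIMARY KEY, ADD PRIMARY KEY (id, dt);
ALTER TABLE stats PARTITION BY RANGE (TO_DAYS(dt)) (
    PARTITION p_old    VALUES LESS THAN (TO_DAYS('2014-01-01')),
    PARTITION p2014_01 VALUES LESS THAN (TO_DAYS('2014-02-01')),
    PARTITION p2014_02 VALUES LESS THAN (TO_DAYS('2014-03-01')),
    PARTITION p_max    VALUES LESS THAN MAXVALUE
);
```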

Read more »

Mysql slave lag monitoring

Everybody knows

SHOW SLAVE STATUS;

Also everybody knows that ‘Seconds_Behind_Master’ shows the difference in seconds between the slave SQL thread and the slave I/O thread.

Sometimes it shows nonsense, so if you build monitoring, it is not good practice to rely on ‘Seconds_Behind_Master’.
Example from real life:
if replication stalls due to connectivity problems, Seconds_Behind_Master shows 0 while the replica is far behind the master, and changing timeout values does not help 🙁 I mean:

slave_net_timeout=300

So we implemented the following monitoring – every 3 seconds we write the current timestamp on the master
and check the delay on the replica.
Or you can use Percona pt-heartbeat, which does almost the same.
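The home-grown heartbeat can be sketched in two statements (the table name is hypothetical; the REPLACE runs on the master every 3 seconds, the SELECT on the replica):

```
-- on the master, via cron or a MySQL event every 3 seconds:
REPLACE INTO heartbeat (id, ts) VALUES (1, NOW());

-- on the replica, the real lag in seconds:
SELECT TIMESTAMPDIFF(SECOND, ts, NOW()) AS lag_seconds
FROM heartbeat WHERE id = 1;
```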
Read more »

How to capture bad email addresses in mass mail.

We have a huge project with a great number of registered users. They receive notifications via email when actions occur (new comment, gift, some other activity). The project targets Russia and the former USSR. In Russia some free email hosting providers like mail.ru delete user accounts after 2–3 years of inactivity.
So we now have 2–3 million users with bad email addresses.
How to find them and remove them from the mailing list:

According to RFC 5321 (the SMTP RFC), in case of a wrong email address in RCPT TO, the server should return “550 no such user”.
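A sketch of pulling bounced addresses out of a Postfix-style mail.log. The log path and line format here are assumptions; adjust the patterns to your MTA:

```shell
# fake log sample in the shape of Postfix smtp delivery lines
cat > /tmp/mail.log <<'EOF'
May  1 10:00:01 mx postfix/smtp[123]: ABC: to=<gone@mail.ru>, status=bounced (said: 550 5.1.1 User unknown)
May  1 10:00:02 mx postfix/smtp[124]: DEF: to=<ok@example.com>, status=sent (250 2.0.0 OK)
EOF

# keep only bounces with a 550 reply, extract the recipient, dedupe
grep -E 'status=bounced.*550' /tmp/mail.log \
  | sed -n 's/.*to=<\([^>]*\)>.*/\1/p' \
  | sort -u
# → gone@mail.ru
```

The resulting list is what gets INSERTed into the bad-address table.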
Read more »

OS X: how to press Ins (Insert key) in mc.

I use mc as a file manager and suffer from the absence of an Insert key on the Mac keyboard.
Ctrl+t solves it. Thanks, Google! Thank the gods! Work becomes more productive!