Monthly Archives: July 2015

How to delete files without big iowait

I know two ways, both tested on highly loaded production systems.

If the I/O scheduler supports ionice (note: on some systems this still drives up the LA):

 # ionice -c 3 nice -n 19 find /DIRECTORY -type f -delete

If not, just adjust the sleep time according to your system's LA:

while true; do find /DIRECTORY/ -type f -print  -delete -quit; sleep 0.01; done
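
The loop above never exits, even once the directory is already empty. A small variant, assuming GNU find, that stops when there is nothing left to delete:

while [ -n "$(find /DIRECTORY/ -type f -print -delete -quit)" ]; do
      # find printed (and deleted) one file; throttle and go again
      sleep 0.01
done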

MySQL 5.6 GTID (global transaction identifier)

Wow! It's a really nice feature. Now you can set up replication very easily.
For comparison, in pre-5.6 you had to create a replica like this:

1. Turn on binary logs on the master

 vi /etc/mysql/my.cnf
 server-id              = 11
 log_bin                 = /var/log/mysql/mysql-bin.log
 # WARNING: Using expire_logs_days without bin_log crashes the server! See README.Debian!
 expire_logs_days        = 10
 max_binlog_size         = 100M
 binlog_do_db            = mydatabase
 #binlog_ignore_db       = include_database_name
 binlog-format            = ROW    # MIXED and STATEMENT are sometimes not good
 binlog-checksum          = crc32  # 5.6 feature: CRC32 checksums on binlog events
 gtid-mode                = on     # Use force, Luke
 enforce-gtid-consistency = true   # required together with gtid-mode=on in 5.6
 log-slave-updates        = true   # also required when gtid-mode=on in 5.6
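
The slave needs GTIDs enabled too. A minimal sketch of the slave-side my.cnf (the server-id value here is only an assumption, it just has to differ from the master's):

 server-id                = 12
 log_bin                  = /var/log/mysql/mysql-bin.log
 gtid-mode                = on
 enforce-gtid-consistency = true
 log-slave-updates        = true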

2. Create a replication user

 grant replication slave on *.* to 'repl_user'@'%' identified by 'SecurePassword';

3. Dump all databases

mysqldump --master-data=2 --single-transaction --events --routines --triggers --all-databases  > database.sql
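
A note on restoring this dump on the slave: with GTIDs on, mysqldump 5.6 writes a SET @@GLOBAL.gtid_purged line into the file, and it only applies when the slave's own GTID state is empty. A sketch of the usual restore sequence:

 # RESET MASTER wipes the slave's local binlog/GTID state (fine for a fresh replica);
 # without it, the dump's SET @@GLOBAL.gtid_purged fails when GTID_EXECUTED is not empty
 mysql -e "RESET MASTER;"
 mysql < database.sql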

4. On the slave, after restoring the dump:

 CHANGE MASTER TO MASTER_HOST='masterHost', MASTER_USER='repl_user',
 MASTER_LOG_FILE='<file>', MASTER_LOG_POS=<position>,  -- both values come from the CHANGE MASTER comment that --master-data=2 wrote into the dump
 MASTER_PASSWORD='SecurePassword';
 START SLAVE;
 SHOW SLAVE STATUS;

But in 5.6, on the slave, it's just:

 CHANGE MASTER TO MASTER_HOST='masterHost', MASTER_AUTO_POSITION=1, MASTER_USER='repl_user', MASTER_PASSWORD='SecurePassword';
 START SLAVE;
 SHOW SLAVE STATUS;

P.S. If you need to skip one statement on the slave:

SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1; START SLAVE;
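
Note: with gtid-mode=on the skip counter above is not supported. The usual workaround is to commit an empty transaction under the offending GTID; a sketch, where <uuid:N> is the failing GTID taken from the error message or SHOW SLAVE STATUS:

 STOP SLAVE;
 SET GTID_NEXT = '<uuid:N>';   -- the GTID of the transaction to skip
 BEGIN; COMMIT;                -- empty transaction "consumes" that GTID
 SET GTID_NEXT = 'AUTOMATIC';
 START SLAVE;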

nginx proxy_pass and caching in a regexp location

nginx cannot use proxy_pass with a URI part inside a regexp location, so I made this workaround.
Works great! Now I can cache any static data served by the backend 🙂 from any location!

location ~* \.(gif|jpg|png|ico)$ {
      rewrite ^(.*\.(gif|jpg|png|ico))$ $1 break;  # keep the URI as-is; break hands it to proxy_pass unchanged
      proxy_pass         http://127.0.0.1:8080;
      proxy_redirect     off;
      proxy_set_header    Host             $host;
      proxy_set_header    X-Real-IP        $remote_addr;

      proxy_set_header   X-Forwarded-For  $proxy_add_x_forwarded_for;
      client_max_body_size       150m;
      client_body_buffer_size    128k;
      proxy_connect_timeout      90;
      proxy_send_timeout         90;
      proxy_read_timeout         90;
      proxy_buffer_size          4k;
      proxy_buffers              4 32k;
      proxy_busy_buffers_size    64k;
      proxy_temp_file_write_size 64k;

      proxy_cache cache_common;
      proxy_cache_key "$host|$request_uri";
      proxy_cache_valid 200 302 301 15m;
      proxy_cache_valid 404         10s;
      proxy_cache_valid any          1m;
}
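
For this to work, the cache_common zone has to be declared once at the http{} level. A minimal sketch (the path, zone size, and limits are assumptions, tune them for your box):

proxy_cache_path /var/cache/nginx/common levels=1:2 keys_zone=cache_common:32m
                 max_size=1g inactive=60m;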

kill processes running longer than 10 seconds (bash)….

On one of our advertising servers, located far away in another galaxy 🙂 🙂, rsync over ssh began to hang sometimes for no apparent reason.
Of course we will try strace and other debugging tools, but tomorrow. Today we need a quick-fix solution.

BTW, removing compression did not help, and the --timeout option did not really help with this case either.
My rsync command:

 rsync --timeout=30 -apvr -e "ssh -o StrictHostKeyChecking=no" \
       --remove-source-files /opt/logrsync/workdir/clicks/etl/20150707215235-SERVERNAME.pb.gz \
       etl@etl2-1.SERVERNAME.it.randomthemes.com:~/etl/data/sources/clicks/

How to find rsync processes running more than 10 seconds and kill them (bash):

while true; do
      # collect rsync PIDs whose elapsed time (etimes) exceeds 10 seconds
      for i in $(ps -C rsync -o pid=,etimes= | awk '$2 > 10 {print $1}'); do
            echo "$i"; kill "$i"
      done
      # sleep outside the for loop, otherwise this spins hot when nothing matches
      sleep 10
done
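
Another option instead of the watcher loop is to cap each rsync run with the coreutils timeout wrapper. A sketch, assuming a 300-second ceiling is acceptable for a single transfer:

 # SIGTERM after 300s; SIGKILL 10s later if rsync ignores it
 timeout --kill-after=10 300 rsync --timeout=30 -apvr \
       -e "ssh -o StrictHostKeyChecking=no" \
       --remove-source-files /opt/logrsync/workdir/clicks/etl/20150707215235-SERVERNAME.pb.gz \
       etl@etl2-1.SERVERNAME.it.randomthemes.com:~/etl/data/sources/clicks/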

If somebody has solved the same rsync issue, please, please tell me how!

In our case everything started working OK again without any action on our side. It was a connectivity issue, but a very, very strange one.
--timeout should help with cases like this.