Category Archives: mysql

mysql 5.6 GTID global transaction identifier

Wow! It's a really nice feature. Replication is now very easy to set up.
In pre-5.6 you had to create a replica like this:

1. Turn on binary logs at master

 vi /etc/mysql/my.cnf
 server-id              = 11
 log_bin                 = /var/log/mysql/mysql-bin.log
 # WARNING: Using expire_logs_days without bin_log crashes the server! See README.Debian!
 expire_logs_days        = 10
 max_binlog_size         = 100M
 binlog_do_db            = mydatabase
 #binlog_ignore_db       = include_database_name
 binlog-format            = ROW    # MIXED and STATEMENT are sometimes not good
 binlog-checksum          = CRC32  # 5.6 feature, speeds up binlog handling
 gtid-mode                = on     # Use the force, Luke
 enforce-gtid-consistency = on     # required by gtid-mode=on in 5.6
 log-slave-updates        = on     # also required by gtid-mode=on in 5.6
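The slave needs GTIDs switched on as well; in 5.6 that means roughly the same block in its my.cnf (the server-id below is just an example, it only has to differ from the master's):

 server-id                = 12
 log_bin                  = /var/log/mysql/mysql-bin.log
 log-slave-updates        = on
 gtid-mode                = on
 enforce-gtid-consistency = on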

2. Create replication User

 grant replication slave on *.* to 'repl_user'@'%' identified by 'SecurePassword';

3. Dump all databases

mysqldump --master-data=2 --single-transaction --events --routines --triggers --all-databases  > database.sql
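Since the dump was taken with --master-data=2, the binlog coordinates needed in step 4 are already inside it as a commented-out CHANGE MASTER TO line near the top. The file name and position below are just example values:

grep "CHANGE MASTER" database.sql | head -n 1
-- CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000042', MASTER_LOG_POS=120;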

4. On slave after dump restore

 CHANGE MASTER TO MASTER_HOST='masterHost', MASTER_USER='repl_user',
 MASTER_LOG_FILE='<binlog file from the dump>', MASTER_LOG_POS=<position from the dump>,
 MASTER_PASSWORD='SecurePassword';
 START SLAVE;
 show slave status;

But with 5.6, on the slave it is just:

CHANGE MASTER TO MASTER_HOST='masterHost', MASTER_AUTO_POSITION=1, MASTER_USER='repl_user', MASTER_PASSWORD='SecurePassword';
START SLAVE;
show slave status;

P.S. If you need to skip one transaction on the slave:

SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1; START SLAVE;
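Note that with gtid-mode=on the server rejects SQL_SLAVE_SKIP_COUNTER; the usual way to skip a broken transaction there is to commit an empty transaction for the offending GTID (the UUID:number below is a placeholder for the failing transaction's GTID):

STOP SLAVE;
SET gtid_next = 'xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx:42';
BEGIN; COMMIT;
SET gtid_next = 'AUTOMATIC';
START SLAVE;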

Some mysql query optimization: DISTINCT, GROUP BY, etc :)

Our previous developer used the standard SQL approach to build the query for selecting the "top 50 referrers":

SELECT referrer  FROM referrers_log WHERE (create_date = curdate() OR create_date=curdate()-1) AND site_id = 123  GROUP BY referrer ORDER BY SUM(views_count) DESC LIMIT 50;

50 rows in set (26.31 sec)

It puts a significant load on our database.

Rewrite the query to use a temporary table with distinct referrers:

DROP TEMPORARY TABLE IF EXISTS REF;
CREATE TEMPORARY TABLE REF AS (SELECT DISTINCT referrer FROM referrers_log WHERE (create_date = curdate() OR create_date=curdate()-1) AND site_id = 123);
SELECT REF.referrer,SUM(views_count) FROM referrers_log,REF
WHERE referrers_log.referrer=REF.referrer
AND (create_date = curdate() OR create_date=curdate()-1) AND site_id = 123
GROUP BY REF.referrer ORDER BY SUM(views_count) DESC LIMIT 50;

50 rows in set (2.48 sec)

I feel happy 🙂 🙂 🙂

Mysql can do partitioning on the fly :)

Mysql tuning in action…
Yesterday our development team and I did some tuning on one old project. We have a couple of tables with 1-10 million records each.
It's NOT BIGDATA, but the application writes heavily to these tables.
Only current-date records are actually needed, and, as you understand, partitioning affects the index size.
The table can be partitioned by a date field. Here is how to do it:

  1. Make sure that the partitioning field (dt) is NOT NULL.
  2. Recreate the primary key; the partitioning field must be part of the primary key.
  3. Create a partition for the old years and a partition for every month (see the sketch below).
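A rough sketch of these three steps; the table name (stats_log), columns (id, dt) and dates are hypothetical:

ALTER TABLE stats_log MODIFY dt DATE NOT NULL;                     -- step 1
ALTER TABLE stats_log DROP PRIMARY KEY, ADD PRIMARY KEY (id, dt);  -- step 2
ALTER TABLE stats_log PARTITION BY RANGE (TO_DAYS(dt)) (           -- step 3
    PARTITION p_old    VALUES LESS THAN (TO_DAYS('2013-01-01')),
    PARTITION p2013_01 VALUES LESS THAN (TO_DAYS('2013-02-01')),
    PARTITION p2013_02 VALUES LESS THAN (TO_DAYS('2013-03-01')),
    PARTITION p_max    VALUES LESS THAN MAXVALUE
);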

Read more »

Mysql slave lag monitoring

Everybody knows

SHOW SLAVE STATUS;

Also, everybody knows that 'Seconds_Behind_Master' shows the difference in seconds between the slave SQL thread and the slave I/O thread.

Sometimes it shows nonsense, and if you build monitoring, it is not good practice to rely on 'Seconds_Behind_Master'.
An example from real life:
if replication stalls due to connectivity problems, Seconds_Behind_Master shows 0 while the replica is far behind the master, and changing timeout values does not help 🙁 I mean:

slave_net_timeout=300

So we implemented the following monitoring: every 3 seconds we write the current timestamp on the master
and check the delay on the replica.
Or you can use Percona's pt-heartbeat, it does almost the same thing.
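A rough sketch of that home-grown heartbeat (the database and table names are made up); clocks on master and replica must be kept in sync, e.g. via NTP:

-- once, on the master
CREATE DATABASE IF NOT EXISTS monitoring;
CREATE TABLE monitoring.heartbeat (id INT PRIMARY KEY, ts DATETIME NOT NULL);
INSERT INTO monitoring.heartbeat VALUES (1, NOW());

-- on the master, every 3 seconds (cron, a loop, or an EVENT)
UPDATE monitoring.heartbeat SET ts = NOW() WHERE id = 1;

-- on the replica: the real lag in seconds
SELECT TIMESTAMPDIFF(SECOND, ts, NOW()) AS lag_seconds FROM monitoring.heartbeat;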
Read more »

Mysql check auto_increment values

We got an old project with a braindead architecture.
The database structure provides surprises regularly. Yesterday we had a problem with the maximum integer value being reached in one column.

The 'id' column in one table was AUTO_INCREMENT and plain INT, not UNSIGNED INT or BIGINT, so after reaching id 2147483647 the application stopped working. I changed the type to UNSIGNED BIGINT.

ALTER TABLE ad_campaigns_rates MODIFY id BIGINT UNSIGNED NOT NULL AUTO_INCREMENT, AUTO_INCREMENT = 2147483648;
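To spot such columns before they blow up, something like this against information_schema gives a rough check (it compares every counter against the signed INT limit regardless of the actual column type, so treat it only as a first pass):

SELECT table_schema, table_name, auto_increment,
       ROUND(auto_increment / 2147483647 * 100, 1) AS pct_of_signed_int
FROM   information_schema.tables
WHERE  auto_increment IS NOT NULL
ORDER  BY auto_increment DESC
LIMIT  20;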

Read more »

Mysql How to delete all triggers

I got a task from our development team: dump all triggers from the prod DB and add them to the development DB.
It's trivial, except for deleting all the existing triggers. Everyone uses magic, stored procedures, etc.

How to dump only triggers:

/usr/bin/mysqldump --single-transaction --triggers --routines --no-data --no-create-db --no-create-info \
  -u backup -pMEGASECUREPASSWORD \
  -h dbslave1-1.randomthemes.com database_name > /home/backup/mysql/database_name_triggers.sql

How to delete all triggers:

Fastest way (a little bit hacky 🙂 🙂 but works OK): dump the trigger names to a file, apply regexps, delete.

$mysql -N -e "SELECT trigger_schema, trigger_name FROM information_schema.triggers" > ./drop_all_triggers.sql  # -N skips the column header row

regexps (use vi, Luke):

:%s/\n/;\r/g
:%s/database_name/DROP TRIGGER IF EXISTS database_name/g
:%s/\t/./g
$mysql mydatabase < ./drop_all_triggers.sql
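After those substitutions, drop_all_triggers.sql ends up with one statement per trigger, roughly like this (the trigger names here are invented):

DROP TRIGGER IF EXISTS database_name.orders_before_insert;
DROP TRIGGER IF EXISTS database_name.orders_after_update;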

How I do mysql table aggregation.

We have one huge old project with intensive MySQL usage.
Due to some architectural errors (sometimes developers do very strange things…), the application continuously writes to several tables partitioned by date. And nobody thought about aggregation at all… The database grows very fast, +30 GB per day, and it became a real problem when we migrated the data to an SSD RAID 10. (BTW, that gave us a 50x performance boost!)
Our application and reporting service don't need such a huge amount of data. Development of this project is currently stalled, so only the operations team can make modifications.
I created an archive database on a slow SAS RAID and wrote the aggregation SQL. The main problem was the unknown partition number for the data. There are two ways to get it.

Use EXPLAIN PARTITIONS and parse the output, or just calculate the number 🙂

How the table was partitioned:

PARTITION BY HASH (TO_DAYS(`create_date`)) PARTITIONS 10
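With HASH partitioning the partition number is simply the expression modulo the partition count, so for a given day you can calculate it directly (the table name and date below are just examples):

-- rows for that day live in partition p<result>
SELECT MOD(TO_DAYS('2014-03-15'), 10) AS partition_number;

-- or let MySQL tell you
EXPLAIN PARTITIONS SELECT * FROM archived_table WHERE create_date = '2014-03-15';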

For Mysql 5.5
Read more »

phpmyadmin #1045 Cannot log in to the MySQL server

Our developers are really crazy about phpMyAdmin 🙂 and asked me to set one up.
I really prefer the console…
Setup on Ubuntu is easy, just use aptitude, Luke…
The only change I made was to add Basic Auth in front of the phpMyAdmin URL, to prevent ugly internet bots from trying to bruteforce my databases.

htpasswd /etc/phpmyadmin/htpasswd.setup developers
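For reference, the Basic Auth itself is just a few lines of web-server config; this sketch assumes Apache and the stock Debian/Ubuntu paths (/etc/phpmyadmin/apache.conf, /usr/share/phpmyadmin):

# inside /etc/phpmyadmin/apache.conf
<Directory /usr/share/phpmyadmin>
    AuthType Basic
    AuthName "Developers only"
    AuthUserFile /etc/phpmyadmin/htpasswd.setup
    Require valid-user
</Directory>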

But when I tried to log in to my DB interface I got this damned error:
#1045 Cannot log in to the MySQL server
Very, very strange! I can log in via the console with this username…
I had created the user with the following credentials:

mysql> GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP,ALTER,CREATE ROUTINE,
       CREATE TEMPORARY TABLES,CREATE VIEW,EXECUTE,INDEX,SHOW VIEW, ALTER ROUTINE
       on *.* to 'developer'@'%' IDENTIFIED by 'MEGASECUREPASSWORD';

phpMyAdmin refused this user. I turned on MySQL debugging.
Read more »

My MySQL server database backup script with Percona

Simple, but it works OK.

#!/bin/bash
BDIR="/home/backup/mysql"
#DATAPATH=`date +%m%d%Y%H%M%S`

# Run the backup (--slave-info records the master's binlog coordinates when backing up a slave)
/usr/bin/innobackupex --user=root --password=MEGASECUREPASSWORD --slave-info $BDIR

# innobackupex creates a timestamped directory; pick the newest one
LASTBACKUP=`ls -1 -t  $BDIR | head -n 1`

# Prepare the backup (apply the transaction log) so it is consistent and ready to restore
/usr/bin/innobackupex --user=root --password=MEGASECUREPASSWORD  --apply-log $BDIR/$LASTBACKUP
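Restoring from such a backup is not part of the script, but for reference it is roughly: stop mysqld, make sure the datadir is empty, then:

/usr/bin/innobackupex --copy-back $BDIR/$LASTBACKUP
chown -R mysql:mysql /var/lib/mysql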

How to Install Percona XtraBackup and innobackupex on Ubuntu 12.04

Percona has its own repo.
What is XtraBackup? Damn, it's a very powerful backup tool for MySQL from Percona.
http://www.percona.com/doc/percona-xtrabackup/index.html
innobackupex is part of the percona-xtrabackup package.

  apt-key adv --keyserver keys.gnupg.net --recv-keys 1C4CBDCDCD2EFD2A
  echo "deb http://repo.percona.com/apt precise main" >> /etc/apt/sources.list
  cat /etc/apt/sources.list
  vi /etc/apt/sources.list
  aptitude update
  aptitude install percona-xtrabackup
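A quick sanity check that the install worked (it just prints the version):

  xtrabackup --version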