mdadm: replace a disk

Copy the partition table to the new disk:
dump the layout from sda and apply it to sdb.

# sfdisk -d /dev/sda | sfdisk /dev/sdb

Add the new partition to the RAID array.

# mdadm --manage /dev/md0 --add /dev/sdb1

Verify
# cat /proc/mdstat

Youtrack setup

~ # cat /home/scripts/yourtrack-docker.sh
#!/bin/bash

docker network create --driver bridge youtrack

version="2022.1.46592"

docker run -it --restart unless-stopped --name youtrack-server-instance \
  --network youtrack \
  -v /opt/youtrack/data:/opt/youtrack/data \
  -v /opt/youtrack/conf:/opt/youtrack/conf \
  -v /opt/youtrack/logs:/opt/youtrack/logs \
  -v /opt/youtrack/backups:/opt/youtrack/backups \
  -p 127.0.0.1:8080:8080 \
  jetbrains/youtrack:$version

+ nginx letsencrypt frontend

root@s14 ~ # cat /home/scripts/nginx-le-docker.sh
#!/bin/bash
docker run \
  --restart unless-stopped \
  --name nginx-letsencrypt \
  --network youtrack \
  -v /opt/nginx-le/certs:/etc/letsencrypt \
  -v /opt/nginx-le/conf/youtrack.randomthemes.ru.conf:/etc/nginx/conf.d/default.conf \
  -e DOMAIN=youtrack.randomthemes.com \
  -e EMAIL=ksi.sergey@gmail.com \
  -p MY_EXTERNAL_IP:80:80 \
  -p MY_EXTERNAL_IP:443:443 \
  -d andreilhicas/nginx-letsencrypt
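The mounted youtrack.randomthemes.ru.conf itself is not shown in the post. A minimal sketch of what such a proxy config could look like; the server name, certificate paths and upstream address are assumptions, not taken from the original setup:

```nginx
# Hypothetical reverse-proxy config for the YouTrack container
server {
    listen 443 ssl;
    server_name youtrack.randomthemes.com;

    ssl_certificate     /etc/letsencrypt/live/youtrack.randomthemes.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/youtrack.randomthemes.com/privkey.pem;

    location / {
        # both containers share the "youtrack" docker network,
        # so the app is reachable by its container name
        proxy_pass http://youtrack-server-instance:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```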


Fail2Ban + nginx access.log

This morning Nagios raised an alert that two of our small projects were inaccessible: the HTTP regexp check failed. They are related to Caucasian news media, and because of the Armenia-Azerbaijan war someone started a DDoS attack.

So here is what we have to do:
1. Parse the nginx logs by eye :))
2. Determine the attack pattern
3. Configure fail2ban
4. Stay alert!

First pattern


117.68.x.x - - [20/Oct/2020:10:28:00 +0000] "GET //ru/search?search_text=qjxk5ENh5IYc HTTP/1.1" 200 10603 "https://it.randomthemes.com//ru/search?search_text=qjxk5ENh5IYc" "Mozilla/5.0 (Linux; Android 9; FIG-LA1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.92 Mobile Safari/537.36"

com//ru/search? – a standard DDoS attack target: search is usually a heavy operation for many engines (use Sphinx, Luke!).
Second pattern


191.102.x.x - - [20/Oct/2020:06:25:21 +0000] "GET / HTTP/1.1" 500 603 "https://it.randomthemes.com/" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_3) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/80.0.3987.122 Safari/537.36"

A huge amount of traffic from the same user agent.


# cat /etc/fail2ban/filter.d/nginx-it.randomthemes.com.local
[Definition]
failregex = ^<HOST> - .*AppleWebKit\/537\.36.*$
            ^<HOST> - .*https://it\.randomthemes\.com//ru/search.*$
ignoreregex =
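Before wiring the filter into fail2ban, the two signatures can be sanity-checked offline with grep -E (the sample line below is abbreviated from the access.log excerpts above):

```shell
# abbreviated sample line from the access.log excerpts above
line='117.68.x.x - - [20/Oct/2020:10:28:00 +0000] "GET //ru/search HTTP/1.1" 200 10603 "https://it.randomthemes.com//ru/search" "Mozilla/5.0 ... AppleWebKit/537.36 ..."'

# both attack signatures should match the line
echo "$line" | grep -Eq 'AppleWebKit/537\.36' && echo "UA pattern matches"
echo "$line" | grep -Eq 'https://it\.randomthemes\.com//ru/search' && echo "search pattern matches"
```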

~# cat /etc/fail2ban/jail.local
[nginx-it.randomthemes.com]
enabled = true
port = http,https
filter = nginx-it.randomthemes.com
logpath = /var/log/nginx/access.log
maxretry = 2

Check regexp:


# fail2ban-regex /var/log/nginx/access.log /etc/fail2ban/filter.d/nginx-it.randomthemes.com.local

# service fail2ban reload
# fail2ban-client status
# fail2ban-client status nginx-it.randomthemes.com
Status for the jail: nginx-it.randomthemes.com
|- Filter
|  |- Currently failed: 42
|  |- Total failed:     13608
|  `- File list:        /var/log/nginx/access.log
`- Actions
   |- Currently banned: 23
   |- Total banned:     136
   `- Banned IP list:   46.162.x.x

Stay alert 🙂 The Caucasian hackers are not 1337 🙂 and the DDoS was boring: a botnet of 3000+ hosts was used. Well-qualified developers and operations people already live in the US, Russia, Turkey and Europe, and have no time to play stupid games. So the DDoS is over. Fail2Ban is beautiful 🙂 but for big ban lists it is better to use ipset instead of plain iptables rules.
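A follow-up on the ipset remark: fail2ban ships ipset-based ban actions out of the box, so switching is a one-line jail change. A sketch, assuming a reasonably recent fail2ban with the iptables-ipset-proto6 action installed:

```ini
[nginx-it.randomthemes.com]
enabled   = true
port      = http,https
filter    = nginx-it.randomthemes.com
logpath   = /var/log/nginx/access.log
maxretry  = 2
; ban via one ipset set instead of one iptables rule per IP
banaction = iptables-ipset-proto6
```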

How to convert a GPT partition table to MBR on Linux.

MBR is old, but not deprecated. This post is just a reminder.

gdisk /dev/sdb
x
r
g

x – enter expert mode
r – recovery and transformation options (experts only)
g – convert GPT into MBR and exit

And that's all…

parted /dev/sdb
p
.....
.....
Partition Table: msdos

ssh agent forwarding, a small tip just to remember

ssh-add ~/.ssh/id_rsa
vi ~/.ssh/config
Host *
User myusername
StrictHostKeyChecking no
Compression yes
ForwardAgent yes
UseRoaming no

Now you can:
ssh sd1.randomthemes.com
then from sd1: sftp sd2.randomthemes.com
with no need to add a public key from sd1 to sd2.

How to build an nginx deb with new modules

Adding third-party modules to nginx.
We need several nginx modules which are absent from the nginx_full Ubuntu package:

redis2
and nginx-sla

# Get nginx-sla code
cd /home/build
git clone https://github.com/goldenclone/nginx-sla.git nginx-sla

# Get nginx-redis2 code
git clone https://github.com/openresty/redis2-nginx-module.git nginx-redis2

apt-get install -y dpkg-dev
mkdir /home/build/nginx-src
cd /home/build/nginx-src
apt-get source nginx
apt-get build-dep nginx

In the unpacked nginx source directory, find and edit the file
debian/rules
The full_configure_flags section should look like this:

full_configure_flags := \
            $(common_configure_flags) \
            --with-http_addition_module \
            --with-http_dav_module \
            --with-http_geoip_module \
            --with-http_gunzip_module \
            --with-http_gzip_static_module \
            --with-http_image_filter_module \
            --with-http_v2_module \
            --with-http_sub_module \
            --with-http_xslt_module \
            --with-stream \
            --with-stream_ssl_module \
            --with-mail \
            --with-mail_ssl_module \
            --with-threads \
            --add-module=$(MODULESDIR)/nginx-auth-pam \
            --add-module=$(MODULESDIR)/nginx-dav-ext-module \
            --add-module=$(MODULESDIR)/nginx-echo \
            --add-module=$(MODULESDIR)/nginx-upstream-fair \
            --add-module=$(MODULESDIR)/ngx_http_substitutions_filter_module \
            --add-module=/home/build/nginx-redis2 \
            --add-module=/home/build/nginx-sla

# increase the package version
dch -i

# build the package
dpkg-buildpackage -us -uc -b

# put it into our repo
dput stable ./nginx_1.10.0-0ubuntu0.16.04.5_amd64.changes

And we have the new nginx in our wonderful repo 🙂
P.S. It's better to change the package name and increase the version.

How to make a Vertica backup

In some cases you don't need a 3-node Vertica cluster and K-safety.
We use Vertica as a very fast column-based database plus ETL, and the database size is only 50 GB, so we can easily restore Vertica from a backup and use ETL log processing to catch up to current data.

Simple backup commands:

1. Create a config file

/opt/vertica/bin/vbr --setupconfig
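The --setupconfig wizard asks a few questions and writes an ini file. A sketch of what /home/dbadmin/leadada_snapshot.ini could contain for this single-node setup; only the snapshot name, database name and node name come from the post, the backup host and path are made up:

```ini
[Misc]
snapshotName = leadada_snapshot
restorePointLimit = 3

[Database]
dbName = leadada
dbUser = dbadmin

[Transmission]

[Mapping]
; node = backup_host:/backup/path (hypothetical destination)
v_leadada_node0001 = backup01:/home/dbadmin/backups
```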

2. Initialize the backup storage

$ /opt/vertica/bin/vbr  --task init --config-file /home/dbadmin/leadada_snapshot.ini
Initializing backup locations.
Backup locations initialized.

3. And finally!!

$ vbr.py --task backup --config-file /home/dbadmin/leadada_snapshot.ini
Starting backup of database leadada.
Participating nodes: v_leadada_node0001.
Snapshotting database.
Snapshot complete.
Approximate bytes to copy: 37754604170 of 37754604170 total.
[==================================================] 100%
Copying backup metadata.
Finalizing backup.
Backup complete!

How to get a per-process memory consumption list on Linux

Pretty easy.
For resident memory consumption:

ps -e -orss=,args= | sort -b -k1,1n

For virtual memory consumption:

ps -e -ovsz=,args= | sort -b -k1,1n

Linux sort is great!
-k1,1n
means sort by the 1st column in numeric order.
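A quick way to see the difference the n flag makes, on toy input:

```shell
# numeric sort on the first column: 3 < 12 < 100
printf '12 bash\n3 sshd\n100 mysqld\n' | sort -b -k1,1n
# → 3 sshd
#   12 bash
#   100 mysqld

# without the n flag the comparison is lexicographic: "100" < "12" < "3"
printf '12 bash\n3 sshd\n100 mysqld\n' | sort -b -k1,1
# → 100 mysqld
#   12 bash
#   3 sshd
```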

According to the official manual:

`--key=POS1[,POS2]'
     Specify a sort field that consists of the part of the line between
     POS1 and POS2 (or the end of the line, if POS2 is omitted),
     _inclusive_.

     Each POS has the form `F[.C][OPTS]', where F is the number of the
     field to use, and C is the number of the first character from the
     beginning of the field.  Fields and character positions are
     numbered starting with 1; a character position of zero in POS2
     indicates the field's last character.  If `.C' is omitted from
     POS1, it defaults to 1 (the beginning of the field); if omitted
     from POS2, it defaults to 0 (the end of the field).  OPTS are
     ordering options, allowing individual keys to be sorted according
     to different rules; see below for details.  Keys can span multiple
     fields.

     Example:  To sort on the second field, use `--key=2,2' (`-k 2,2').
     See below for more notes on keys and more examples.  See also the
     `--debug' option to help determine the part of the line being used
     in the sort.

How to send passive checks to Nagios, a real-life example:

First of all – why you need passive checks in Nagios.
They are useful for large systems: Nagios will not wait for a connect timeout during telecom issues.
And they are easy to configure.

Our case (a large social network).
We need to check the number of unsubscribers. If there are no “unsubscribe” letters for 1 hour, something has gone wrong… the FBL list is not working and we need an alert. If we do not process FBL letters for several hours, email providers raise our SPAM rating.

How to fetch the letters (I use ruby IMAP) is a topic for another article :).

1. Nagios Check code:

# cat /home/scripts/fbl.sh
#!/bin/bash

NUM=`/usr/bin/psql -t -h 1.1.1.1 -p 5450 -U cron_user  base3 -c "select count(1) from email_stop_list where (esl_created BETWEEN current_timestamp - interval '1 hour' and current_timestamp) and esl_reason ~ '^fbl'"`

if [ "$NUM" -eq 0 ]; then
        echo -e "nest\tunsubscribe_fbl\t3\tNo_Unsubscribe" | /home/scripts/send_nsca -H 2.2.2.2 -p 5667 -c /etc/send_nsca.conf
else
        echo -e "nest\tunsubscribe_fbl\t0\t$NUM unsubscribes last hour" | /home/scripts/send_nsca -H 2.2.2.2 -p 5667 -c /etc/send_nsca.conf
fi
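The script has to run periodically for the passive check to stay fresh. A hypothetical crontab entry, running well inside the 3600 s freshness window:

```
# run the FBL check every 10 minutes
*/10 * * * * /home/scripts/fbl.sh >/dev/null 2>&1
```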

2. Return codes for send_nsca

Plugin Return Code    Service State    Host State
0                     OK               UP
1                     WARNING          UP or DOWN/UNREACHABLE*
2                     CRITICAL         DOWN/UNREACHABLE
3                     UNKNOWN          DOWN/UNREACHABLE

3. Nagios service config

# cat nest.cfg
define service{
  use                            generic-service-template-passive
  host_name                       nest
  service_description             unsubscribe_fbl
  freshness_threshold             3600
  check_command                   volatile_no_information
  contact_groups                  nagios-wheel,nagios-wheel-smsmail
}

4. Service template

define service {
    use                             generic-service-template
    name                            generic-service-template-passive
    active_checks_enabled           0
    passive_checks_enabled          1
    obsess_over_service             0
    flap_detection_enabled          0
    event_handler_enabled           1
    failure_prediction_enabled      1
    is_volatile                     1
    register                        0
    check_period                    24x7
    max_check_attempts              1
    normal_check_interval           5
    retry_check_interval            2
    check_freshness                 1
    freshness_threshold             90000
    contact_groups                  nagios-wheel
    check_command                   volatile_no_information
    notifications_enabled           1
    notification_interval           15
    notification_period             24x7
    notification_options            w,u,c,r
    process_perf_data               1
    retain_status_information       1
    retain_nonstatus_information    1
}

How to tar.gz yesterday's logs (some ETL magic)

Task: we need to tar yesterday's logs into one file and gzip it.
A little bash code, just to save my time in the future.

#!/bin/bash

src='/var/spool/etl/archive'

# yesterday's date, e.g. 2020-10-19
dt=`date --date="1 day ago" +"%Y-%m-%d"`

# create an empty tar archive
tar cvf $src/$dt.tar --files-from /dev/null

# append every file from yesterday (skipping .gz and .tar) and remove the original
for i in `ls -1 $src/*$dt* | grep -v gz | grep -v tar`; do
  tar -rf $src/$dt.tar $i
  rm -f $i
done
gzip $src/$dt.tar
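The same idea can be sketched in one pass with find and GNU tar's --remove-files. Shown here against a throwaway directory so it is safe to try; in real use src would point at the archive directory above:

```shell
# demonstrate in a throwaway directory instead of the real archive path
src=$(mktemp -d)
dt=$(date --date="1 day ago" +"%Y-%m-%d")
touch "$src/app-$dt.log" "$src/etl-$dt.log"

# pick yesterday's files (skipping .gz and .tar), archive and delete them in one go
find "$src" -maxdepth 1 -type f -name "*$dt*" ! -name '*.gz' ! -name '*.tar' -print0 \
  | tar czf "$src/$dt.tar.gz" --null --files-from=- --remove-files

ls "$src"   # only the $dt.tar.gz archive should remain
```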