Exim and its queue: quick HowTo

Today we are posting a quick HowTo for those admins running Exim on their server. Typically, this will also be useful for all those who run servers that use cPanel as their hosting platform. We will show commands to display the queue, to remove single or all messages from it, and to identify nasties in it.

  1. What’s in the queue?
    exim -bp
  2. How many messages are in the queue?
    exim -bpc
  3. How do I get rid of a specific message?
    exim -Mrm {message-id}
  4. How do I empty the entire queue?
    exim -bp | awk '/^ *[0-9]+[mhd]/{print "exim -Mrm " $3}' | bash

    or

    exiqgrep -i | xargs exim -Mrm
  5. What is spamming from my server?
    grep cwd /var/log/exim_mainlog | grep -v /var/spool | awk -F"cwd=" '{print $2}' | awk '{print $1}' | sort | uniq -c | sort
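The pipeline in step 5 distils the sending directories out of exim_mainlog; here is what it does on two fabricated log lines (message IDs and paths are made up for illustration):

```shell
# Simulated run of the step-5 pipeline on two fabricated exim_mainlog
# lines; the offending directory comes out with its hit count:
printf '%s\n' \
  '2014-02-01 10:00:01 1W0a2b-0001Aa-C3 cwd=/home/baduser/public_html/blog 4 args: x' \
  '2014-02-01 10:00:02 1W0a2c-0001Ab-C4 cwd=/home/baduser/public_html/blog 4 args: x' |
  grep cwd | grep -v /var/spool |
  awk -F"cwd=" '{print $2}' | awk '{print $1}' |
  sort | uniq -c | sort
```

A directory that shows up with a suspiciously high count (and is not one of your regular mailing scripts) is a good candidate for closer inspection.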

We hope this will come in handy for you as much as it does for us at times!

 

This cheatsheet has been compiled from two sources:

http://www.cyberciti.biz/faq/exim-remove-all-messages-from-the-mail-queue/
http://www.inmotionhosting.com/support/email/exim/find-spam-script-location-with-exim

Adding disks under KVM: LVM based LV and partition resizing

With KVM gaining popularity, questions such as how to increase guest system disk space for partitions using LVM surface a lot more often than they used to. Indeed, the process is somewhat more complex than, for example, under OpenVZ or Xen PV.

We are assuming you have a guest Linux system that is using LVM in between your disk and partitions, i.e. you have a volume group, say “vg_vps”, and several logical volumes such as “lv_root”, etc. in this volume group. We further assume you would like to increase the size of the root partition.

As a prerequisite, you will have to add a second disk file or logical volume to the configuration of your KVM guest, and then reboot the guest in such a way that the config file will be re-read (most often it is just good practice to shut down the guest from inside, and then restart it using the new config file).

Adding disks in the first place can be achieved in several ways, depending on your setup. With Proxmox or Solus it is usually just a simple click and go – there are options in both control panels to add additional disks to a VM. Proxmox will create a new disk file, ideally qcow2, whereas Solus will normally create a new logical volume via LVM on the host node.

Once this has been done, you need to ssh or console into the guest and perform the following steps to enlarge a partition in your guest that is based on LVM:

First, check if your guest can see the new disk – if it is the second disk, it will often be /dev/sdb, or /dev/vdb. You can do this using fdisk:

root@vps [~]# fdisk /dev/vdb

Using “p” you can display the current partition table, which should be empty. Create a new primary partition, and use the default settings (first partition, and use the entire disk, i.e. first and last block). Then, change the partition type to LVM (“t”, then “8e”), and write the partition table via “w” – fdisk exits after writing, whereas “q” would quit without saving.
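For completeness, the interactive dialogue can also be scripted by piping the keystrokes into fdisk – a sketch only, so double-check it on a scratch disk or image file before pointing it at a real device:

```shell
# n = new partition, p = primary, 1 = first slot, the two empty lines
# accept the default first/last sectors, t + 8e sets the type to
# Linux LVM, and w writes the table:
printf 'n\np\n1\n\n\nt\n8e\nw\n' | fdisk /dev/vdb
```

The exact prompts vary slightly between fdisk versions, which is why the interactive route described above is usually the safer one.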

You will now have a new partition /dev/sdb1 or /dev/vdb1 in your guest system, which you can use to grow your root partition. Before that, however, we need to add the new partition to the volume group the root partition is on. Let’s have a look at the group first:

root@vps [~]# vgs

This will display the volume groups, whereas

root@vps [~]# lvs

will show the logical volumes. This will let you find out where your root partition is located. Next, we need to extend the volume group the root partition / logical volume is on. Assume our root partition / is in the logical volume “lv_root” that is on “vg_vps”, then we have to:

root@vps [~]# vgextend vg_vps /dev/vdb1

This will add the new disk/partition to our intended volume group “vg_vps”.

root@vps [~]# vgdisplay

… will now display the new parameters, including the number of free PE (physical extents). Now we can increase the size of the logical volume our root partition is on (assuming it is called “lv_root”):

root@vps [~]# lvextend -l +INT_PE /dev/vg_vps/lv_root

… where INT_PE is the number of free physical extents we got from the vgdisplay command.
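The last two steps can be glued together with a little scraping: the sketch below pulls the free-PE count out of vgdisplay (vg_vps and lv_root are the example names used above). On recent LVM versions, lvextend -l +100%FREE /dev/vg_vps/lv_root achieves the same in one go.

```shell
# Sketch: read the "Free  PE / Size" line from vgdisplay and extend
# lv_root by exactly that many extents (names as in the example above):
free_pe=$(vgdisplay vg_vps | awk '/Free  PE/ {print $5}')
lvextend -l "+${free_pe}" /dev/vg_vps/lv_root
```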

We are almost done now: we just need to tell the guest that the root partition has increased in size, and this can be done live, i.e. without rebooting or unmounting under many Linux distributions, including CentOS:

root@vps [~]# resize2fs /dev/vg_vps/lv_root

Voilà! When you now do

root@vps [~]# df

you will find your root partition has been increased!

 

NTP Amplification Attack – the gist of it

With recent DDoS attacks increasingly using NTP as an attack vector, and one of Cloudflare’s clients recently having been hit with a DDoS attack just short of 400 Gbps, we believe it is necessary to summarise what’s been going on, how such attacks are made possible at all, and what the community and providers can do to prevent or mitigate such attacks as well as possible.

A concise overview by means of a CERT alert can be found here: https://www.us-cert.gov/ncas/alerts/TA14-013A.

Essentially, an attacker sends a certain command to a vulnerable NTP server, using a spoofed source address. The command itself is very short and produces very little traffic. The response, however, is a lot larger, and it is sent back to the spoofed source address. This response is typically about 206 times larger than the initial request – hence the name amplification – a very effective means to quickly fill up even very powerful internet pipes.

Cloudflare published a very interesting article as well, giving a quick overview about the most recent attack and the technology behind it: http://blog.cloudflare.com/technical-details-behind-a-400gbps-ntp-amplification-ddos-attack.

The recommended course of action here is to secure your NTP server (cf. https://isc.sans.edu/diary/NTP+reflection+attack/17300), as well as to ensure that spoofed packets do not leave your network. Sample procedures are explained at BCP38.info.

iftop – or where’s my server’s bandwidth going?!

During the past weeks we gave a small introduction to UNIX and Linux commands that may be nice to have at hand when it comes to administrating a server from the command shell, making some quick changes, or generally assisting a sysadmin with her everyday tasks.

Today we want to have a look at iftop – a small program that allows you to check what your dedicated or virtual private server is doing in terms of internet traffic: where packets go to, and where they come from.

This is useful when you want to investigate some process or virtual machine hogging bandwidth on a server, or when you see unusual traffic patterns from your monitoring systems.

The syntax as such is very simple, for a start it should be sufficient to run

# /usr/sbin/iftop -i eth1 -p -P

from the shell (you will typically need root privileges). The -i switch lets you specify which interface to listen on, -p runs iftop in promiscuous mode (necessary for some virtualisation architectures), and -P shows port numbers/services in addition to hosts.

On a standard CentOS install, iftop needs extra repositories to be installed (or to be compiled from source), and you will need (n)curses and libpcap packages installed as well.

 

Additional and in-depth information can be found here:
http://www.ex-parrot.com/pdw/iftop/ (author, source code)
http://www.cyberciti.biz/faq/centos-fedora-redhat-install-iftop-bandwidth-monitoring-tool/ (overview, examples)
http://sickbits.net/iftop-finding-traffic-hogs/ (overview, examples)

 

Forgotten Unix commands: nice and renice

Today we are going to shed some light onto the way processes can be (re-)prioritised when it comes to scheduling using the nice and renice commands.

Typically (but not always), priority values range from -20 (run with top priority) to +19/20 (run when nothing else runs). Let’s have a look at an excerpt of a process list with “ps axl” on a CentOS server:

ps axl
F   UID   PID  PPID PRI  NI    VSZ   RSS WCHAN  STAT TTY        TIME COMMAND
4     0     1     0  20   0   4116   876 poll_s Ss   ?          0:00 init
1     0     2     1  20   0      0     0 kthrea S    ?          0:00 [kthreadd/1049]
1     0     3     2  20   0      0     0 worker S    ?          0:00 [khelper/1049]
5     0   134     1  16  -4  10420   612 poll_s S<s  ?          0:00 /sbin/udevd -d
1     0   560     1  20   0  63596  1216 poll_s Ss   ?          0:00 /usr/sbin/sshd
1     0   740     1  20   0 281748 10376 poll_s Ss   ?          0:59 /usr/sbin/httpd
1   497   756   749  25   5  64836  1056 hrtime SN   ?          0:54 /usr/sbin/zabbix_agentd
1     0   760     1  20   0 116668  1212 hrtime Ss   ?          0:03 crond
5    48  7776   740  20   0 282272  7352 inet_c S    ?          0:00 /usr/sbin/httpd
5    48  8022   740  20   0 282128  6508 inet_c S    ?          0:00 /usr/sbin/httpd
...

Except for udevd and zabbix_agentd, everything is running at the default priority (PRI 20, NI 0), without any sort of nicing. Unprivileged users can only lower the priority of their processes (so as to not interfere with underlying OS stability); the superuser can also increase the priority of a process, though. Let’s have a look at how this is done for processes that have already started:

# renice -n -4 -p 7776; ps axl
7776: old priority 0, new priority -4
F   UID   PID  PPID PRI  NI    VSZ   RSS WCHAN  STAT TTY        TIME COMMAND
...
5    48  7776   740  16  -4 282656  7580 inet_c S<   ?          0:00 /usr/sbin/httpd

As we can see, the PRI and NI columns have changed for this single process. The arguments here are -n and -p: -n takes an integer value to modify the priority by, and -p takes the process ID as its argument.

The nice command works in a similar fashion, it takes an integer for its -n parameter value, followed by the command as such, e.g.:

nice -n 2 ps axl results in:

F   UID   PID  PPID PRI  NI    VSZ   RSS WCHAN  STAT TTY        TIME COMMAND
...
4     0 14503 12698  22   2 105464   892 -      RN+  pts/0      0:00 ps axl
...

whereas nice -n -2 ps axl yields:

F   UID   PID  PPID PRI  NI    VSZ   RSS WCHAN  STAT TTY        TIME COMMAND
...
4     0 14533 12698  18  -2 105468   892 -      R<+  pts/0      0:00 ps axl
...

So…that’s all very NICE, yes, but what does it actually mean? Modifying the priority of processes can be useful when processes need more power than the CPU can actually deliver. In such cases, (re)nicing processes can help to stabilise a system, to avoid (too much) contention for resources, and to keep vital functions up and functioning.

A typical example may be compressing a large file, where it is not important how long that process actually takes, but it should not have any (or at least only very little) effect on everything else on the system. In such cases, one might run the compressing process with nice -n 19 gzip … thus giving it a lower priority. Here, however, it is also important to mention that Linux often features a program called ionice, which is specifically intended for scheduling I/O (as opposed to primarily scheduling CPU).
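As a concrete (and harmless) variant of the gzip example, the snippet below creates a scratch file just for the demonstration and compresses it at the lowest CPU priority; where ionice is available, prefixing ionice -c 3 lowers the I/O priority as well.

```shell
# Create a 1 MiB scratch file and compress it at the lowest CPU priority:
tmp=$(mktemp /tmp/nice-demo.XXXXXX)
head -c 1048576 /dev/zero > "$tmp"
nice -n 19 gzip "$tmp"          # runs only when the CPU has spare cycles
ls -l "${tmp}.gz"               # gzip replaces the original with the .gz
rm -f "${tmp}.gz"
```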

There is, however, no general rule as to “how much” more priority a process gets based on the differences in integer values.

nice and renice have lost importance over the years with CPU power ever increasing, but nevertheless they might come in handy at times, and we hope this quick intro will be useful to you when it comes to administrating your virtual private server or your dedicated server.

 

Forgotten Unix commands: xargs

In today’s post we will cover a bit about xargs – also a very useful tool for performing rather complex (and repetitive) operations on the Linux shell.

xargs is particularly useful when it comes to piping (“|“) and chaining commands and their arguments together.

xargs reads items from the standard input or from pipes, delimited by blanks or newlines, and then executes the command one or more times with any initial arguments followed by items read from standard input. Blank lines on the standard input are ignored.[1]

Let’s do some examples:

# echo Where is my tux? | xargs
Where is my tux?

That was easy, especially since echo is the default command anyway.

Now something more useful maybe:

# find /var/log -name 'secure-*' -type f -print | xargs /bin/ls -las
4 -rw------- 1 root root  593 Dec 28 20:02 /var/log/secure-20131229
8 -rw------- 1 root root 6734 Jan  4 14:38 /var/log/secure-20140105
4 -rw------- 1 root root 3793 Jan 10 15:33 /var/log/secure-20140112
4 -rw------- 1 root root 1182 Jan 16 08:40 /var/log/secure-20140119

You do not really need the -print here, and you can do it with find alone as well:

# find /var/log -name 'secure-*' -type f -ls
11903950    4 -rw-------   1 root     root          593 Dec 28 20:02 /var/log/secure-20131229
11903981    8 -rw-------   1 root     root         6734 Jan  4 14:38 /var/log/secure-20140105
11903996    4 -rw-------   1 root     root         3793 Jan 10 15:33 /var/log/secure-20140112
11904057    4 -rw-------   1 root     root         1182 Jan 16 08:40 /var/log/secure-20140119

or:

# find /var/log -name 'secure-*' -type f -exec ls -las {} \;
4 -rw------- 1 root root 593 Dec 28 20:02 /var/log/secure-20131229
8 -rw------- 1 root root 6734 Jan  4 14:38 /var/log/secure-20140105
4 -rw------- 1 root root 3793 Jan 10 15:33 /var/log/secure-20140112
4 -rw------- 1 root root 1182 Jan 16 08:40 /var/log/secure-20140119

But maybe you forgot that syntax? It never hurts to have several approaches at hand!

Some more useful examples, the next one is to clean up things a bit. Say you have a couple of files in several directories:

# ls -las dir1; ls -las dir2
total 8
4 drwxr-xr-x 2 root root 4096 Jan 24 09:45 .
4 drwxr-xr-x 5 root root 4096 Jan 24 09:42 ..
0 -rw-r--r-- 1 root root    0 Jan 24 09:42 1
0 -rw-r--r-- 1 root root    0 Jan 24 09:42 2
total 8
4 drwxr-xr-x 2 root root 4096 Jan 24 09:45 .
4 drwxr-xr-x 5 root root 4096 Jan 24 09:42 ..
0 -rw-r--r-- 1 root root    0 Jan 24 09:42 3
0 -rw-r--r-- 1 root root    0 Jan 24 09:42 4

For some reason, you want them all to be moved to one single, new directory, let’s call it dir3:

# ls -las dir3
total 8
4 drwxr-xr-x 2 root root 4096 Jan 24 09:45 .
4 drwxr-xr-x 5 root root 4096 Jan 24 09:42 ..

You can now embark on a mv orgy, or you can do it a bit faster:

# find ./ -type f -print0 | xargs -0 -I {} mv {} dir3
# ls -las dir*
dir1:
total 8
4 drwxr-xr-x 2 root root 4096 Jan 24 09:46 .
4 drwxr-xr-x 5 root root 4096 Jan 24 09:42 ..

dir2:
total 8
4 drwxr-xr-x 2 root root 4096 Jan 24 09:46 .
4 drwxr-xr-x 5 root root 4096 Jan 24 09:42 ..

dir3:
total 8
4 drwxr-xr-x 2 root root 4096 Jan 24 09:46 .
4 drwxr-xr-x 5 root root 4096 Jan 24 09:42 ..
0 -rw-r--r-- 1 root root    0 Jan 24 09:42 1
0 -rw-r--r-- 1 root root    0 Jan 24 09:42 2
0 -rw-r--r-- 1 root root    0 Jan 24 09:42 3
0 -rw-r--r-- 1 root root    0 Jan 24 09:42 4

Nice, isn’t it? {} acts as a placeholder for the input files and argument list. The -0 and -I options are there to handle special characters in filenames (which is also why we used -print0 in the find command) and to replace a specified string occurrence with one from the standard input.

Now, let’s cp all these files into a single directory (e.g. useful to make a quick “backup” of files to an external drive, etc.):

# find /tmp/xargs/ -type f -print0 | xargs -0 -r -I file cp -v -p file --target-directory=/tmp/xargs/dir3
`/tmp/xargs/dir2/3' -> `/tmp/xargs/dir3/3'
`/tmp/xargs/dir2/4' -> `/tmp/xargs/dir3/4'
`/tmp/xargs/dir1/1' -> `/tmp/xargs/dir3/1'
`/tmp/xargs/dir1/2' -> `/tmp/xargs/dir3/2'
# ls -las dir3
total 8
4 drwxr-xr-x 2 root root 4096 Jan 24 09:58 .
4 drwxr-xr-x 5 root root 4096 Jan 24 09:42 ..
0 -rw-r--r-- 1 root root    0 Jan 24 09:42 1
0 -rw-r--r-- 1 root root    0 Jan 24 09:42 2
0 -rw-r--r-- 1 root root    0 Jan 24 09:42 3
0 -rw-r--r-- 1 root root    0 Jan 24 09:42 4

Looks good, all files copied.

Last, but not least, and very nifty indeed: quickly creating an archive of files (and/or directories):

# find /tmp/xargs -type f | xargs tar rvf archive.tar
tar: Removing leading `/' from member names
/tmp/xargs/dir2/3
/tmp/xargs/dir2/4
/tmp/xargs/dir1/1
/tmp/xargs/dir1/2
# ls -las
total 32
 4 drwxr-xr-x  5 root root  4096 Jan 24 10:01 .
 4 drwxrwxrwt. 4 root root  4096 Jan 24 09:42 ..
12 -rw-r--r--  1 root root 10240 Jan 24 10:01 archive.tar
 4 drwxr-xr-x  2 root root  4096 Jan 24 09:52 dir1
 4 drwxr-xr-x  2 root root  4096 Jan 24 09:52 dir2
 4 drwxr-xr-x  2 root root  4096 Jan 24 10:01 dir3
# tar -tf archive.tar
tmp/xargs/dir2/3
tmp/xargs/dir2/4
tmp/xargs/dir1/1
tmp/xargs/dir1/2

As we can see, all files are in the archive we created!

xargs isn’t something one can learn on the fly, especially the more complex operations that it can handle – but the latter fact makes it a very valuable tool for system administrators of both virtual private servers and dedicated servers.

Now it is time to wish you good luck and a lot of fun exploring the world of xargs!

[1] From the man page of the GNU version of xargs.

 

Forgotten Unix Commands: awk

In our weekly series of forgotten UNIX commands, we will today give a brief overview of awk. awk is extremely useful for manipulating structured files, and for displaying and working with the information contained in them, so it can come in very handy for any virtual private server admin.

Let’s get started!

Assume we have some sort of logfile of a webserver, with entries like the following:

119.63.193.131 - - [17/Jan/2014:07:01:10 +0000] "GET / HTTP/1.1" 302 211 "-" "Mozilla/4.0 (...)"
211.129.81.174 - - [17/Jan/2014:07:01:12 +0000] "GET /robots.txt HTTP/1.1" 200 40 "-" "siclab (...)"

awk works as you’d expect it from a shell prompt: it takes stdin as input, and writes to stdout by default. Now, we want to have a look at the IP addresses accessing our webserver:

root:> awk '{print $1}' logfile
119.63.193.131
211.129.81.174

Ok, that was easy, right? awk assigns each field in a line, separated by whitespace by default, to variables starting with $1 and going up to the number of fields in the line. $0 is the entire line, and NF is the “number of fields” count, so $NF prints the last field and $(NF-n) counts backwards from it. To see how NF works, here is an example:

root:> awk '{print $(NF-12)}' logfile
119.63.193.131
211.129.81.174
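Similarly, $NF on its own always yields the last field, regardless of how many fields each line has:

```shell
# $NF is the last field of each line, whatever the field count:
printf 'a b c\none two\n' | awk '{print $NF}'
# c
# two
```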

Another useful variable is NR, which is the current row number, so let’s go a step further: display row numbers, IP addresses, and the status code, and let’s also format the output a bit:

root:> awk '{print NR " : " $1 " : " $9}' logfile
1 : 119.63.193.131 : 302
2 : 211.129.81.174 : 200

You could also add up fields, for example the total number of bytes transferred:

root:> awk '{ total += $10; print $10 " bytes in this line -> current total: " total}' logfile
211 bytes in this line -> current total: 211
40 bytes in this line -> current total: 251

You could also just display the output after processing the last line:

root:> awk '{ total += $10; print $10 " bytes in this line." } END { print "final total: " total }' logfile
211 bytes in this line.
40 bytes in this line.
final total: 251

There is more to it, of course. On most Linux systems, ps aux will display a nice process list of the underlying system, including memory and CPU time used, etc. Column 6 contains the resident set size, a very useful indicator. Let’s sum it up quickly:

root:> ps aux | awk '{ rss += $6 } END { print "total rss: " rss }'
total rss: 132256

Faster than using a calculator, right?

A final one before I let you embark on your awk explorations on your own: assume you have a runaway/zombie/whatever httpd that you need to get rid of as fast as possible, and you want to just kill all processes that have httpd in their command column:

root:> ps aux | grep httpd | awk '{ print "kill -9 " $2}'
kill -9 740
kill -9 4629
kill -9 9365
kill -9 9366
kill -9 9368
kill -9 10589
kill -9 19518
kill -9 19689
kill -9 20126
kill -9 21925
kill -9 23486
kill -9 24635

NB: this just prints, but does not do anything. To make it happen, you need to pipe that output through the shell, i.e. use the same command line as above, but add “ | sh ” at the end.
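Where procps is installed, pkill collapses the whole pipeline into a single command – it matches on the process name directly, so the usual caveats about catching the wrong processes apply just the same:

```shell
# Equivalent of the grep/awk/sh pipeline above, minus the dry-run step:
pkill -9 httpd
```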

Have fun exploring, and possibly use Wikipedia to get started: http://en.wikipedia.org/wiki/AWK

 

Forgotten Unix Commands: lsblk

UNIX and its various flavours have a lot of commands that every admin uses on a near permanent basis, such as ls, cp, cat, grep, mv, rm, gzip, tar, and so on.

Today, however, we are starting a series titled ‘Forgotten Unix Commands‘. These can come in very handy, and they often produce effects like “oh, I didn’t know you could do that on Linux!”. Such commands can brighten up the day of every dedicated server or virtual private server administrator. One of these commands is lsblk.

lsblk lists information about all or the specified block devices. The lsblk command reads the sysfs filesystem to gather information.

The command prints all block devices (except RAM disks) in a tree-like format by default.

This command can be very useful to check how the different partitions and/or disks are mounted in the system, following is an example of a desktop computer:

$ lsblk
NAME    MAJ:MIN RM   SIZE RO TYPE   MOUNTPOINT
sda       8:0    0 465.8G  0 disk
├─sda1    8:1    0  46.6G  0 part   /
├─sda2    8:2    0     1K  0 part
├─sda5    8:5    0   3.7G  0 part   [SWAP]
└─sda6    8:6    0 415.5G  0 part
sdb       8:16   0 465.8G  0 disk
└─sdb1    8:17   0 465.8G  0 part
  └─md0   9:0    0 465.7G  0 raid10 /data
sdc       8:32   0 465.8G  0 disk
└─sdc1    8:33   0 465.8G  0 part
  └─md0   9:0    0 465.7G  0 raid10 /data
sdd       8:48   1  14.9G  0 disk
└─sdd1    8:49   1  14.9G  0 part   /media/mnt/KINGSTON
sr0      11:0    1  1024M  0 rom

From this output you can see that on this desktop we have the following disks:

SDA is the first “scsi” disk, in our case a SATA disk. On this disk the first partition is used for the / filesystem, the second partition is an extended partition, sda5 is a swap partition, and the last one (sda6) is used for /home; it’s a BTRFS filesystem, which the command doesn’t recognise, though.

SDB and SDC are the two disks that are used in a RAID 10 setup, and the filesystem is mounted as /data.

SDD is a 16GB USB stick mounted under the directory /media/mnt/KINGSTON

Another example, this time output from a server with LVM:

$ lsblk
NAME                       MAJ:MIN RM   SIZE RO MOUNTPOINT
sda                          8:0    0 298.1G  0
├─sda1                       8:1    0   500M  0 /boot
└─sda2                       8:2    0 297.6G  0
  ├─vg_main-lv_swap (dm-0) 253:0    0   5.8G  0 [SWAP]
  ├─vg_main-lv_root (dm-1) 253:1    0    50G  0 /
  └─vg_main-lv_home (dm-2) 253:2    0 241.8G  0
    └─home (dm-3)          253:3    0 241.8G  0 /home
sr0                         11:0    1  1024M  0

As you can see, the command is easy to use, and the output is rather nifty with its tree style: on a glance you can see the partition layout, logical volumes, and other, additional useful information about your disks.

As opposed to lsblk, the better known fdisk -l gives similar data; however, it requires root privileges and does not recognise dm or LVM volumes.

 

This article was originally published on linuxaria. Castlegem has permission to republish. Thank you, linuxaria!

Needle in a haystack, or grep revisited: tre-agrep

Probably everyone who uses a terminal knows the command grep, cf. this excerpt from its man page:

grep searches the named input FILEs (or standard input if no files are named, or if a single hyphen-minus (-) is given as file name) for lines containing a match to the given PATTERN. By default, grep prints the matching lines.

So this is the best tool to search a big file for a specific pattern, or to find a specific process in the complete list of running processes. It has its limitations, though: it searches for the exact string that you specify, but sometimes it can be useful to do an “approximate” or “fuzzy” search instead.

For this goal the program agrep was first developed; from Wikipedia we can gain some details about this software:

agrep (approximate grep) is a proprietary approximate string matching program, developed by Udi Manber and Sun Wu between 1988 and 1991, for use with the Unix operating system. It was later ported to OS/2, DOS, and Windows.

It selects the best-suited algorithm for the current query from a variety of the known fastest (built-in) string searching algorithms, including Manber and Wu’s bitap algorithm based on Levenshtein distances.

agrep is also the search engine in the indexer program GLIMPSE. agrep is free for private and non-commercial use only, and belongs to the University of Arizona.

So it’s closed source, but luckily there is an open source alternative: tre-agrep.

TRE Library

TRE is a lightweight, robust, and efficient POSIX compliant regexp matching library with some exciting features such as approximate (fuzzy) matching.

The matching algorithm used in TRE uses linear worst-case time in the length of the text being searched, and quadratic worst-case time to the length of the used regular expression. In other words, the time complexity of the algorithm is O(M^2N), where M is the length of the regular expression and N is the length of the text. The used space is also quadratic to the length of the regex, but does not depend on the searched string. This quadratic behaviour occurs only in pathological cases which are probably very rare in practice.

Approximate matching

Approximate pattern matching allows matches to be approximate, that is, allows the matches to be close to the searched pattern under some measure of closeness. TRE uses the edit-distance measure (also known as the Levenshtein distance) where characters can be inserted, deleted, or substituted in the searched text in order to get an exact match. Each insertion, deletion, or substitution adds the distance, or cost, of the match. TRE can report the matches which have a cost lower than some given threshold value. TRE can also be used to search for matches with the lowest cost.

INSTALLATION

tre-agrep is usually not installed by default by any distribution, but it’s available in many repositories, so you can easily install it with the package manager of your distribution; e.g. for Debian/Ubuntu and Mint you can use the command:

apt-get install tre-agrep

BASIC USAGE

The usage is best demonstrated with some simple examples of this powerful command, given the file example.txt that contains:

Résumé
RÉSUMÉ
resume
Resümee
rèsümê
Resume
linuxaria

Following is the output of the command tre-agrep with different options:

mint-desktop tmp # tre-agrep resume example.txt
resume

mint-desktop tmp # tre-agrep -i resume example.txt
resume
Resume

mint-desktop tmp # tre-agrep -1 -i resume example.txt
resume
Resümee
Resume

mint-desktop tmp # tre-agrep -2 -i resume example.txt
Résumé
RÉSUMÉ
resume
Resümee
Resume

As you can see, without any option it returned the same result as a normal grep. The -i option is used to ignore case, and the interesting options are -1 and -2: these are the distances allowed in the search, so the larger the number, the more results you’ll get, since you allow a greater “distance” from the original pattern.

To see the distance of each match you can use the option -s: it prints each match’s cost:

mint-desktop tmp # tre-agrep -5 -s -i resume example.txt
2:Résumé
2:RÉSUMÉ
0:resume
1:Resümee
3:rèsümê
0:Resume
5:linuxaria

So in this example the string Resume has a cost of 0, while linuxaria has a cost of 5.

Further interesting options are those that assign a cost for different operations:

-D NUM, –delete-cost=NUM – Set cost of missing characters to NUM.
-I NUM, –insert-cost=NUM – Set cost of extra characters to NUM.
-S NUM, –substitute-cost=NUM – Set cost of incorrect characters to NUM. Note that a deletion (a missing character) and an insertion (an extra character) together constitute a substituted character, but the cost will be that of a deletion and an insertion added together.

CONCLUSIONS

The command tre-agrep is yet another small tool that can save your day if you work a lot with terminals and bash scripts.

 

This article was originally published on linuxaria. Castlegem has permission to republish. Thank you, linuxaria!

Updating CentOS (RHEL, Fedora)

This is just a very concise summary to guide you through the typical update process of a CentOS based Linux server that has no control panel installed on top of it. This post will also appear in our dedicated server hosting BLOG:

  1. run yum check-update from the shell.
    This will give you a list of newly available packages for your distribution based on the repositories you have defined. This list will typically not be too long for a well maintained server, unless the distribution itself has just undergone a major update (such as from CentOS 5.7 to 5.8 recently).
  2. check the packages listed and ensure that your currently running applications will still be compatible with the new versions of any packages updated.
  3. make backups of any individual settings you have made for any packages that are going to be updated (httpd.conf, php.ini, etc.). Usually, these will not be touched, but it doesn’t hurt to make sure you have a copy (in addition to the regular backups you should be doing!).
  4. once you have confirmed that everything should still be fine after the update, from the shell, run yum update.
    This will start the update process, and you will actually have to confirm the update before it is really being processed (last chance to say “no”!).
  5. once complete, restart affected services (such as httpd, for example), or reboot your server if vital system packages have been updated (kernel, libc, …).
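Step 3 can be as simple as tarring up the handful of files you have customised – the file list below is illustrative, adjust it to your server:

```shell
# Snapshot selected config files into a dated archive before updating:
backup="/root/cfg-backup-$(date +%F).tar.gz"
tar czf "$backup" /etc/httpd/conf/httpd.conf /etc/php.ini
```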