
Tuesday, February 25, 2014

Manage your backup retention policies with retdo

You have folders full of backups, but storage capacity is running low? You need to clean up all these old files but still want to keep some of them just in case? Then retdo is the perfect tool!

Retdo is a little script I wrote that allows administrators to clean up files on a custom retention basis.
Retdo can be used to implement production backup retention plans.

retdo can resolve the following queries:

- I want to keep only one file per week for files between 3 and 6 months old.
- I want to keep only one file per month for files between 6 months and 1 year old.
- I want files older than 1 year to be moved to another machine.
- I want a cup of tea (feature in progress)

Code and instructions are available for free at https://github.com/gcharot/retdo

Example:

Let's say I have my January daily backups in /data/backup/db/dbname:

#  ll /data/backup/db/dbname/   
 -rw-r--r-- 1 root root 0 Jan 1 12:00 jan01.tgz  
 -rw-r--r-- 1 root root 0 Jan 2 12:00 jan02.tgz  
 -rw-r--r-- 1 root root 0 Jan 3 12:00 jan03.tgz  
 -rw-r--r-- 1 root root 0 Jan 4 12:00 jan04.tgz  
 -rw-r--r-- 1 root root 0 Jan 5 12:00 jan05.tgz  
 -rw-r--r-- 1 root root 0 Jan 6 12:00 jan06.tgz  
 -rw-r--r-- 1 root root 0 Jan 7 12:00 jan07.tgz  
 -rw-r--r-- 1 root root 0 Jan 8 12:00 jan08.tgz  
 -rw-r--r-- 1 root root 0 Jan 9 12:00 jan09.tgz  
 -rw-r--r-- 1 root root 0 Jan 10 12:00 jan10.tgz  
 -rw-r--r-- 1 root root 0 Jan 11 12:00 jan11.tgz  
 -rw-r--r-- 1 root root 0 Jan 12 12:00 jan12.tgz  
 -rw-r--r-- 1 root root 0 Jan 13 12:00 jan13.tgz  
 -rw-r--r-- 1 root root 0 Jan 14 12:00 jan14.tgz  
 -rw-r--r-- 1 root root 0 Jan 15 12:00 jan15.tgz  
 -rw-r--r-- 1 root root 0 Jan 16 12:00 jan16.tgz  
 -rw-r--r-- 1 root root 0 Jan 17 12:00 jan17.tgz  
 -rw-r--r-- 1 root root 0 Jan 18 12:00 jan18.tgz  
 -rw-r--r-- 1 root root 0 Jan 19 12:00 jan19.tgz  
 -rw-r--r-- 1 root root 0 Jan 20 12:00 jan20.tgz  
 -rw-r--r-- 1 root root 0 Jan 21 12:00 jan21.tgz  
 -rw-r--r-- 1 root root 0 Jan 22 12:00 jan22.tgz  
 -rw-r--r-- 1 root root 0 Jan 23 12:00 jan23.tgz  
 -rw-r--r-- 1 root root 0 Jan 24 12:00 jan24.tgz  
 -rw-r--r-- 1 root root 0 Jan 25 12:00 jan25.tgz  
 -rw-r--r-- 1 root root 0 Jan 26 12:00 jan26.tgz  
 -rw-r--r-- 1 root root 0 Jan 27 12:00 jan27.tgz  
 -rw-r--r-- 1 root root 0 Jan 28 12:00 jan28.tgz  
 -rw-r--r-- 1 root root 0 Jan 29 12:00 jan29.tgz  
 -rw-r--r-- 1 root root 0 Jan 30 12:00 jan30.tgz  
 -rw-r--r-- 1 root root 0 Jan 31 12:00 jan31.tgz  

Now I need to free up some space, so I'd like to keep only one file per week:

 # retdo -p /data/backup/db/dbname -r "*.tgz" -b 1 -e 92 -d 7  
 26 file(s) processed - 0 file(s) in error  
 # ll /data/backup/db/dbname  
 total 0  
 -rw-r--r-- 1 root root 0 Jan 5 12:00 jan05.tgz  
 -rw-r--r-- 1 root root 0 Jan 12 12:00 jan12.tgz  
 -rw-r--r-- 1 root root 0 Jan 19 12:00 jan19.tgz  
 -rw-r--r-- 1 root root 0 Jan 26 12:00 jan26.tgz  
 -rw-r--r-- 1 root root 0 Jan 31 12:00 jan31.tgz  

As you can see, only one file per week (7 days) has been kept; 26 files were deleted.

This command means: "find all files matching the pattern *.tgz in /data/backup/db/dbname that are older than 1 day and up to 92 days old (3 months), and keep only one file every week (7 days)".
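
To implement a full retention plan like the wishlist above, you could chain several retdo runs from cron. A minimal sketch, assuming the same -b/-e/-d semantics as in the example (the cron file path and schedule are hypothetical; check the README before deploying):

 # /etc/cron.d/backup-retention (hypothetical file)
 # One file per week for backups between 3 and 6 months old
 0 2 * * 0 root retdo -p /data/backup/db/dbname -r "*.tgz" -b 92 -e 183 -d 7
 # One file per month for backups between 6 months and 1 year old
 30 2 * * 0 root retdo -p /data/backup/db/dbname -r "*.tgz" -b 183 -e 365 -d 31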

Hope that helps!

Tuesday, May 21, 2013

Yum stuck/hangs at "Running Transaction Test"

If yum is stuck at the "Running Transaction Test" step, double check that you don't have a stalled network mount (NFS, SMB, etc.) somewhere.
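
A quick way to spot the culprit without hanging your shell is to read /proc/mounts (which never blocks) and probe each network mount with a timeout. A sketch, with /mnt/nfs_share as an example mount point:

 # grep -E ' (nfs|nfs4|cifs) ' /proc/mounts
 # timeout 5 stat /mnt/nfs_share || echo "mount looks stalled"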

Unmount it and retry your yum/rpm command.

More info on how to unmount a stalled NFS share:
http://sysnet-adventures.blogspot.fr/2013/05/umount-stalledfrozen-nfs-mount-point.html

Unmount a stalled/frozen NFS mount point

NFS is known to be a little nasty when it comes to unmounting stalled shares.

Most of the time a simple umount doesn't work, which is frustrating, especially on production servers: the process just hangs and there is no way to interrupt it.

Below are two procedures to unmount stalled NFS shares. Try method one before method two, since method two requires some network "hacks".

Method 1:

Use a forced lazy unmount; this method works 90% of the time. The -f flag forces the unmount even though the NFS server is unreachable, and -l (lazy) detaches the mount point immediately and cleans up the remaining references later:
 # umount -f -l /mnt/nfs_share
Note: Don't use bash auto-completion!!! Tab-completing a path on the stalled mount will hang your shell too.
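
You can then verify the share is really gone by reading /proc/mounts, which never blocks (avoid df here, it would touch the mount):

 # grep /mnt/nfs_share /proc/mounts || echo "unmounted"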


Method 2:

This method is to be used only if method one failed.

The trick is to temporarily steal the NFS server's IP address on the NFS client (the one with the stalled mount) so the client thinks the NFS server is still alive.

Warning: Only use this method if the NFS server is no longer reachable from the NFS client; use method 1 above otherwise. If the server is still up, stealing its IP will cause an IP conflict, and trust me, you really don't want that to happen.

Let's assume the NFS server IP is 192.168.0.1.
  1. Double check that the NFS server is down with ping or nmap.
  2. If your NFS client has very restrictive iptables rules, shut them down temporarily.
  3. On the NFS client, set the NFS server IP as a secondary address:
      # ifconfig eth0:0 192.168.0.1
     Note: Adjust the interface to your own needs.
  4. Unmount the share with a forced lazy umount:
      # umount -f -l /mnt/nfs_share
     Note: Don't use bash auto-completion!!!
  5. Check that the NFS mount is gone.
  6. Remove the secondary interface:
      # ifconfig eth0:0 down
     Note: Adjust the interface to your own needs.
  7. Restart iptables if needed.
  8. Be happy.
  9. Go to sleep, it's been a long day (or night).

If you have multiple stalled NFS clients, you only need to set the secondary IP on one of them:
  • Client 1: steps 1 to 4
  • Clients 2 to n: step 4 (the umount only)
  • Client 1: steps 5 to 9

This will only work if your NFS clients can communicate with each other (watch out for iptables or any other filtering software/devices).
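
On systems where ifconfig is no longer available, the same trick works with the ip utility; a minimal sketch, assuming eth0 and the server IP above:

 # ip addr add 192.168.0.1/32 dev eth0
 (unmount the share as in step 4)
 # ip addr del 192.168.0.1/32 dev eth0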

Hope that helps! (It helped me a lot. :)

Wednesday, April 3, 2013

Create large partitions on Linux / Bypass the 2TB partition Limit

The default partition scheme (MBR based) limits partitions to 2.2TB. With today's hard drives this limit is easily reached.

In order to create partitions bigger than 2.2TB you need to switch from an MBR to a GUID (GPT) partition table.
This can be done with the "parted" utility on Linux.

For example, if you want to create a single big partition on /dev/sdb:

 # parted /dev/sdb
 (parted) mklabel gpt
 (parted) mkpart partition_name fstype 1 -1
 (parted) print
 Model: DELL PERC H700 (scsi)
 Disk /dev/sdb: 4000GB
 Sector size (logical/physical): 512B/512B
 Partition Table: gpt
 Number  Start   End     Size    File system  Name  Flags
  1      1049kB  4000GB  4000GB               data

Note: I found out that the partition name and fstype arguments are quite useless.
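
For scripting, the same thing can be done non-interactively with parted's -s (script) option; a sketch, assuming the same disk:

 # parted -s /dev/sdb mklabel gpt
 # parted -s /dev/sdb mkpart data 1MiB 100%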

You can then format the partition with the filesystem of your choice, or create an LVM PV on it.
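
For instance (ext4 and the partition number are just examples):

 # mkfs.ext4 /dev/sdb1
 or, to use it with LVM:
 # pvcreate /dev/sdb1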

More info on GUID / MBR limits:
http://en.wikipedia.org/wiki/GUID_Partition_Table

Parted official website:
http://www.gnu.org/software/parted/

More parted examples:
http://www.thegeekstuff.com/2011/09/parted-command-examples/

Hope that helps!