
Friday, February 7, 2014

Smartctl: Linux disk I/O scheduler is reset back to the default CFQ

I ran into a weird issue recently. I monitor my SSDs' lifetime with smartctl + Zabbix, and I realized that my I/O scheduler setting was reset every time smartctl was executed!

 # echo noop > /sys/block/sda/queue/scheduler  
   
 # cat /sys/block/sda/queue/scheduler  
 [noop] anticipatory deadline cfq  
   
 # smartctl -A --device=sat+megaraid,0 /dev/sda  
 smartctl 5.43 2012-06-30 r3573 [x86_64-linux-2.6.32-358.23.2.el6.x86_64] (local build)  
 Copyright (C) 2002-12 by Bruce Allen, http://smartmontools.sourceforge.net  
 === START OF READ SMART DATA SECTION ===  
 ...  
 ...  
   
 # cat /sys/block/sda/queue/scheduler  
 noop anticipatory deadline [cfq]  


There is no real fix, but you can work around it by pointing smartctl at the generic SCSI device name, i.e. sgX, instead of sdX.

 # echo noop > /sys/block/sda/queue/scheduler  
   
 # cat /sys/block/sda/queue/scheduler  
 [noop] anticipatory deadline cfq  
   
 # smartctl -A --device=sat+megaraid,0 /dev/sg0  
 smartctl 5.43 2012-06-30 r3573 [x86_64-linux-2.6.32-358.23.2.el6.x86_64] (local build)  
 Copyright (C) 2002-12 by Bruce Allen, http://smartmontools.sourceforge.net  
 === START OF READ SMART DATA SECTION ===  
 ...  
 ...  
   
 # cat /sys/block/sda/queue/scheduler  
 [noop] anticipatory deadline cfq  

And voilà! The problem isn't really solved, but it does the job!

You can use sg_map (part of the sg3_utils package) to check the sdX -> sgX mappings :

 # sg_map -a  
 /dev/sg0 /dev/sda  
 /dev/sg1 /dev/sdb  
 /dev/sg2 /dev/scd0  
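
If you script this for monitoring, you can resolve the mapping on the fly instead of hardcoding sg0. Here is a minimal sketch (the helper script is my own illustration, not part of the original setup; it relies on sg_map's two-column output):

 #!/bin/bash  
 # resolve_sg.sh -- print the /dev/sgX device mapped to a given /dev/sdX device  
 # Hypothetical helper; relies on sg_map from the sg3_utils package.  
 BLOCK_DEV=${1:?usage: $0 /dev/sdX}  
 # sg_map -a prints "sg_device block_device" pairs, one mapping per line  
 sg_map -a | awk -v dev="$BLOCK_DEV" '$2 == dev {print $1}'  

A Zabbix script could then call, for example:

 # smartctl -A --device=sat+megaraid,0 "$(./resolve_sg.sh /dev/sda)"  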

Wednesday, February 5, 2014

Omreport fails: object not found

If you get the following message while using omreport :
 $ omreport chassis memory  
 Memory Information  
 Error : Memory object not found  
 $ omreport chassis hwperformance  
 Error! No Hardware Peformance probes found on this system.  

The first thing to do is to restart the srvadmin services :
 # srvadmin-services.sh restart  
 # service ipmi restart  

Check that the services are properly started.
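For example (on the versions I've used, srvadmin-services.sh accepts a status argument):
 # srvadmin-services.sh status  
 # service ipmi status  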

If that doesn't solve the problem, you might have a semaphore issue. In my case, the Zabbix agent/scripts went nuts and didn't close their semaphores.

To list the current semaphore arrays, use the following command:
 # ipcs -s  

To show the current system limits:
 # ipcs -sl  

You can use the following command to count the current number of semaphore arrays:
 # ipcs -us  

If you've reached the system limit, that most likely explains the omreport issue. From here, you have two options:

  • You've reached the limit because there is an issue on your system (semaphores not closed, or some other reason). You need to clean up your semaphores with the following command:
 # ipcrm -s semaphore_id  
 To clean all semaphores belonging to a particular user :  
 # ipcs -s | awk '/username/ {system("ipcrm -s " $2)}'   

Important: you need to stop the attached processes before removing the semaphores.
  • All your semaphores are legitimate; in that case you need to increase the system limits (see the sketch after this list):
https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Tuning_and_Optimizing_Red_Hat_Enterprise_Linux_for_Oracle_9i_and_10g_Databases/sect-Oracle_9i_and_10g_Tuning_Guide-Setting_Semaphores-Setting_Semaphore_Parameters.html
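
For reference, a minimal sketch of raising the limits via sysctl. The four kernel.sem fields are SEMMSL, SEMMNS, SEMOPM and SEMMNI (semaphores per array, semaphores system-wide, operations per semop call, and maximum number of arrays); the values below are purely illustrative, see the Red Hat guide above for proper sizing:

 # cat /proc/sys/kernel/sem  
 250    32000  32     128  
 # sysctl -w kernel.sem="250 32000 32 256"  
 # echo 'kernel.sem = 250 32000 32 256' >> /etc/sysctl.conf  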

Hope that helps !

Friday, October 11, 2013

Linux server sends a SYNACK packet only after receiving 8 SYNs

I hit a really weird issue recently: in some rare cases (mostly Mac and mobile phone clients), connecting to a Linux server was really, really slow (about 12s).

The issue was not only impacting Apache but all TCP services, SSH included, so it was not a service-specific issue or misconfiguration.

The Chrome console on a MacBook Pro showed that the initial connection took about 10s; a Win7 client on the same LAN, on the other hand, had no problem at all.

After some digging on the client and server side, I found out that the client needs to send 8 SYN packets before the server replies with a SYNACK, which explains why the connection is so slow. Once the SYNACK is sent back to the client, the communication speed is back to normal.

One hour of headache later, it turned out that I had enabled some sysctl TCP tuning values that somehow introduced the issue.

I disabled the net.ipv4.tcp_tw_recycle and net.ipv4.tcp_tw_reuse features and everything went back to normal.
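
For reference, a minimal sketch of how to turn them off at runtime, assuming they were enabled through /etc/sysctl.conf:

 # sysctl -w net.ipv4.tcp_tw_recycle=0  
 # sysctl -w net.ipv4.tcp_tw_reuse=0  

Then remove or comment out the corresponding lines in /etc/sysctl.conf so the change survives a reboot.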

I think the problem comes from the net.ipv4.tcp_tw_reuse option, but as the issue impacted a production service (and is really hard to reproduce) I didn't try to re-enable tcp_tw_recycle to confirm.

Some posts advise disabling TCP window scaling; I strongly discourage this, as it would result in poor network performance.

Hope that helps !

Below is the tcpdump output showing the client's 8 SYN packets before the SYNACK is finally sent back. The test was performed against the SSH service; as you can see, the TCP handshake took more than 10 seconds. Interestingly, the SYN that finally gets an answer (SYN 8) is the only one sent without the TCP timestamp option; SYNs carrying timestamps being silently dropped is the classic signature of the per-host PAWS check performed when tcp_tw_recycle is enabled, which suggests that option may in fact have been the culprit.

 # SYN 1  
 15:57:26.303076 IP (tos 0x0, ttl 53, id 9488, offset 0, flags [DF], proto TCP (6), length 64)  
   client_ip.49316 > server_ip.ssh: Flags [S], cksum 0xdf5f (correct), seq 2356956535, win 65535, options [mss 1460,nop,wscale 4,nop,nop,TS val 835124724 ecr 0,sackOK,eol], length 0  
 # SYN 2  
 15:57:27.306416 IP (tos 0x0, ttl 53, id 37141, offset 0, flags [DF], proto TCP (6), length 64)  
   client_ip.49316 > server_ip.ssh: Flags [S], cksum 0xdb71 (correct), seq 2356956535, win 65535, options [mss 1460,nop,wscale 4,nop,nop,TS val 835125730 ecr 0,sackOK,eol], length 0  
 # SYN 3  
 15:57:28.315804 IP (tos 0x0, ttl 53, id 2415, offset 0, flags [DF], proto TCP (6), length 64)  
   client_ip.49316 > server_ip.ssh: Flags [S], cksum 0xd785 (correct), seq 2356956535, win 65535, options [mss 1460,nop,wscale 4,nop,nop,TS val 835126734 ecr 0,sackOK,eol], length 0  
 # SYN 4  
 15:57:29.330233 IP (tos 0x0, ttl 53, id 62758, offset 0, flags [DF], proto TCP (6), length 64)  
   client_ip.49316 > server_ip.ssh: Flags [S], cksum 0xd398 (correct), seq 2356956535, win 65535, options [mss 1460,nop,wscale 4,nop,nop,TS val 835127739 ecr 0,sackOK,eol], length 0  
 # SYN 5  
 15:57:30.335779 IP (tos 0x0, ttl 53, id 29003, offset 0, flags [DF], proto TCP (6), length 64)  
   client_ip.49316 > server_ip.ssh: Flags [S], cksum 0xcfa9 (correct), seq 2356956535, win 65535, options [mss 1460,nop,wscale 4,nop,nop,TS val 835128746 ecr 0,sackOK,eol], length 0  
 # SYN 6  
 15:57:31.345254 IP (tos 0x0, ttl 53, id 5246, offset 0, flags [DF], proto TCP (6), length 64)  
   client_ip.49316 > server_ip.ssh: Flags [S], cksum 0xcbba (correct), seq 2356956535, win 65535, options [mss 1460,nop,wscale 4,nop,nop,TS val 835129753 ecr 0,sackOK,eol], length 0  
 # SYN 7  
 15:57:33.382242 IP (tos 0x0, ttl 53, id 5958, offset 0, flags [DF], proto TCP (6), length 64)  
   client_ip.49316 > server_ip.ssh: Flags [S], cksum 0xc3dc (correct), seq 2356956535, win 65535, options [mss 1460,nop,wscale 4,nop,nop,TS val 835131767 ecr 0,sackOK,eol], length 0  
 # SYN 8 (no TCP timestamp option this time)  
 15:57:37.881881 IP (tos 0x0, ttl 53, id 21274, offset 0, flags [DF], proto TCP (6), length 48)  
   client_ip.49316 > server_ip.ssh: Flags [S], cksum 0x5c3d (correct), seq 2356956535, win 65535, options [mss 1460,sackOK,eol], length 0  
 # SYNACK (at last !!!)  
 15:57:37.881907 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto TCP (6), length 48)  
   server_ip.ssh > client_ip.49316: Flags [S.], cksum 0x7a12 (correct), seq 3228952474, ack 2356956536, win 14600, options [mss 1460,nop,nop,sackOK], length 0  
 # ACK  
 15:57:37.885362 IP (tos 0x0, ttl 53, id 62772, offset 0, flags [DF], proto TCP (6), length 40)  
   client_ip.49316 > server_ip.ssh: Flags [.], cksum 0xdfde (correct), seq 1, ack 1, win 65535, length 0  
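
For the record, a capture along these lines should produce the output above (the interface name is an example, adjust it to your environment):

 # tcpdump -nv -i eth0 'tcp port 22 and host client_ip'  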

Monday, July 15, 2013

Emulate bad or WAN network performance for a particular IP on a Gigabit LAN network

If you're developing web or mobile applications, you'll certainly be confronted with poor network conditions.

The problem then is: how can I test my application under bad network conditions? You could rent a foreign internet connection, or use tools that report performance from various remote countries, but neither makes for a good debugging environment.

The solution is to use tc and NetEM on your front development server (typically a web or reverse-proxy server), then add filters so that only one client station (the debugging station) is impacted.
Don't forget the filters, otherwise all your clients will be affected.

Below is an example of how to emulate a network with:
  • 1Mbps bandwidth
  • 400ms delay
  • 5% packet loss
  • 1% Corrupted packet
  • 1% Duplicate packet
The debugging client IP is 192.168.0.42 (i.e. the IP impacted by the bad network conditions).
The following commands need to be executed on the front development server; please set the appropriate NIC for your environment (eth0 is used below):
 # Clean up rules  
   
 tc qdisc del dev eth0 root  
   
 # root htb init 1:  
   
 tc qdisc add dev eth0 handle 1: root htb  
   
 # Create class 1:42 rate-limited to 1Mbps  
 # (beware: tc reads "Mbps" as megabytes/s; use "mbit" if you want megabits/s)  
   
 tc class add dev eth0 parent 1: classid 1:42 htb rate 1Mbps  
   
 # Set network degradations on class 1:42  
   
 tc qdisc add dev eth0 parent 1:42 handle 30: netem loss 5% delay 400ms duplicate 1% corrupt 1%  
   
 # Filter class 1:42 to 192.168.0.42 only (match destination IP)  
   
 tc filter add dev eth0 protocol ip prio 1 u32 match ip dst 192.168.0.42 flowid 1:42  
   
 # Filter class 1:42 to 192.168.0.42 only (match source IP)  
   
 tc filter add dev eth0 protocol ip prio 1 u32 match ip src 192.168.0.42 flowid 1:42  

To check that the rules are properly set, use the following commands:
 tc qdisc show dev eth0  
 tc class show dev eth0  
 tc filter show dev eth0  

Once you're done with the testing, clean up the rules with the command:
 tc qdisc del dev eth0 root   


There are many other options you can use (correlation, distribution, packet reordering, etc.); please check the documentation available at:

http://www.linuxfoundation.org/collaborate/workgroups/networking/netem

If this setup fits your requirements, I advise you to wrap the rules in a shell script so you can start/stop them with custom values; see the sketch below. Be aware that you can also build filters based on source/destination ports, etc.
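
Here is such a wrapper, as a minimal sketch reusing the assumptions above (eth0 and the same htb/netem values; the script name is my own):

 #!/bin/bash  
 # netem-toggle.sh -- hypothetical start/stop wrapper around the rules above  
 # Usage: ./netem-toggle.sh start <client_ip>   or   ./netem-toggle.sh stop  
 DEV=eth0  
   
 case "$1" in  
  start)  
    CLIENT_IP=${2:?usage: $0 start <client_ip>}  
    # Clean slate; ignore the error if no qdisc is set yet  
    tc qdisc del dev $DEV root 2>/dev/null  
    tc qdisc add dev $DEV handle 1: root htb  
    tc class add dev $DEV parent 1: classid 1:42 htb rate 1Mbps  
    tc qdisc add dev $DEV parent 1:42 handle 30: netem loss 5% delay 400ms duplicate 1% corrupt 1%  
    tc filter add dev $DEV protocol ip prio 1 u32 match ip dst $CLIENT_IP flowid 1:42  
    tc filter add dev $DEV protocol ip prio 1 u32 match ip src $CLIENT_IP flowid 1:42  
    ;;  
  stop)  
    tc qdisc del dev $DEV root  
    ;;  
  *)  
    echo "usage: $0 {start <client_ip>|stop}" >&2  
    exit 1  
    ;;  
 esac  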

If you have more complex requirements, you can try WANem, a live Linux distribution with a graphical interface on top of NetEM. Be aware that it requires route modifications on your client and server (or other routing tricks).

http://wanem.sourceforge.net/
http://sourceforge.net/projects/wanem/files/Documents/WANemv11-Setup-Guide.pdf

I haven't had the opportunity to try it; please let me know if you have any feedback.

Tuesday, March 26, 2013

Dell Openmanage/Omreport failed after updating to CentOS 6.4

After updating a test machine from CentOS 6.3 to 6.4, the Dell OpenManage tools stopped working entirely.
It seems that with the latest CentOS kernel (2.6.32-358.2.1.el6.x86_64), some IPMI drivers were moved from loadable kernel modules to built-in code.

The result is :

 # omreport chassis  
 Health   
 # srvadmin-services.sh start  
 Starting Systems Management Device Drivers:  
 Starting dell_rbu:                     [ OK ]  
 Starting ipmi driver:                   [FAILED]  
 Starting Systems Management Device Drivers:  
 Starting dell_rbu: Already started             [ OK ]  
 Starting ipmi driver:                   [FAILED]  
 Starting DSM SA Shared Services:              [ OK ]  

/var/log/messages reports:

 instsvcdrv: /etc/rc.d/init.d//dsm_sa_ipmi start command failed with status 1  

Solution : 

 # yum install OpenIPMI  

Note : There is no need to start or chkconfig the service.
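
If you want to confirm what is built in versus shipped as a module, you can grep the kernel config (standard CentOS location; the y/m values below are what I'd expect on this kernel, so treat them as illustrative, y meaning built-in and m meaning loadable module):

 # grep IPMI /boot/config-$(uname -r)  
 CONFIG_IPMI_HANDLER=y  
 CONFIG_IPMI_SI=y  
 CONFIG_IPMI_DEVICE_INTERFACE=m  
 ...  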

You can check that the IPMI components are seen with the following command :

 # service ipmi status  
 ipmi_msghandler module in kernel.  
 ipmi_si module in kernel.  
 ipmi_devintf module loaded.  
 /dev/ipmi0 exists.  

Then start the OpenManage services:
 # srvadmin-services.sh start