Troubleshooting: A journey into the unknown

Troubleshooting is a journey. It’s a long, unpredictable trek, one where you know the start and the end points but have zero knowledge about the actual path you need to take in order to get there, to find the root cause of the problem. In your backpack you have knowledge, past experience, and various troubleshooting techniques. For a systems engineer, the enjoyable task of finding the root cause of a problem often feels exactly like this journey into the unknown.

This particular journey relates to an issue we had on servers in our Distributed Load Balancing (DLB) service. The issue itself had nothing to do with load balancing, but the applicable knowledge gained was invaluable. The journey was worthwhile, as always.

The start

Here is the starting point of our journey. What you see is a sudden drop in incoming traffic to a single member of the DLB group. The other members instantly picked up the traffic in equal shares, thanks to the Equal-cost Multi-Path (ECMP) routing we have deployed.

Drop in Requests

The rate of incoming traffic dropped to zero within a single sampling interval (we pull and store metrics every 10 seconds) and traffic recovered after ~50 seconds. In our systems there are five possible reasons for these kinds of traffic drops (a quick way to check several of them from a shell is sketched after the list):

  1. Switches on the north and south sides stop selecting the server as the next hop for incoming traffic due to a configuration change
  2. The Bird Internet Routing Daemon stops running on the DLB server
  3. anycast-healthchecker withdraws the routes for all services because they fail their health checks
  4. The BFD protocol on the switch side detects that the server isn’t sending hello messages and stops routing traffic to it
  5. A network cable or network card glitch
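
Several of these can be checked quickly from a shell on the affected server. The commands below are a minimal sketch rather than our exact runbook: the protocol names come from the Bird log further down, while the anycast-healthchecker log path is an assumption.

systemctl status bird                                   # reason 2: is the Bird daemon running at all?
birdc show protocols                                    # reasons 2 and 4: are the BGP/BFD protocol instances up?
birdc show protocols all BGP1                           # state and last error for one of the peerings
tail -n 50 /var/log/anycast-healthchecker/anycast-healthchecker.log   # reason 3: any failing service checks?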

We examined the log of anycast-healthchecker and found that all services were successfully responding to their health checks. We then looked at the Bird log and found the following:

08:35:42.981081+01:00 lb-101 bird: **BGP1: Received: Other configuration change**
08:35:42.981362+01:00 lb-101 bird: BGP1: BGP session closed
08:35:42.981474+01:00 lb-101 bird: BGP1: State changed to stop
08:35:42.991510+01:00 lb-101 bird: BGP1: Down
08:35:42.991775+01:00 lb-101 bird: bfd1: Session to 10.248.16.254 removed
08:35:42.991883+01:00 lb-101 bird: BGP1: State changed to down
08:35:42.991985+01:00 lb-101 bird: BGP1: Starting
08:35:42.992090+01:00 lb-101 bird: BGP1: State changed to start
08:35:42.992191+01:00 lb-101 bird: bfd1: Session to 10.248.16.254 added
08:35:42.992299+01:00 lb-101 bird: BGP1: Started
08:35:42.992399+01:00 lb-101 bird: BGP1: Connect delayed by 5 seconds
08:35:42.992502+01:00 lb-101 bird: BGP2: **Received: Other configuration change**
.......

All DLB servers are dual-homed and establish BGP peering with the switches on the north and south sides. According to RFC 4486, the messages in bold indicate that the Bird daemon received a BGP NOTIFICATION message asking it to reset the BGP peering due to a configuration change on the switch side.

We looked at the Bird code and the switch logs, and we found that the switch asked to reset the BGP peering after three consecutive BFD hello messages went missing. These messages are exchanged over UDP at an interval of 400 milliseconds, with a tolerance of no more than three missed packets (after which the session is declared down).
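
Since single-hop BFD control packets travel over UDP port 3784, the gaps can also be observed directly on the wire. A minimal sketch, assuming eth0 and eth1 are the two uplink interfaces:

# Print inter-packet gaps for BFD control traffic; with a 400 ms interval,
# a gap above ~1.2 s means the switch will declare the session down.
tcpdump -i eth0 -nn -ttt udp port 3784
tcpdump -i eth1 -nn -ttt udp port 3784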

So the DLB server hadn’t sent BFD hello messages for a period of 1.2 seconds! The most interesting part of the above log is that the failure happened concurrently on both BGP peerings, which are established over two different network cards to two different physical switches.

This made us believe that something on the host caused the loss of three consecutive BFD messages; it’s very unlikely to hit hardware issues at the same time on two different network cards, cables, or switches.

Several occurrences of the issue

The exact same issue was happening on multiple servers at random times throughout the day. In all occurrences we saw the same lines in the Bird log. So, we knew the end of our journey; we just needed to find what made the system fail to send three consecutive UDP packets at a 400-millisecond interval. We store logs in Elasticsearch, so we created a Kibana dashboard to visualize those errors and started investigating each occurrence.
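
Counting the occurrences straight from Elasticsearch also works when you don’t want to click through Kibana. The query below is only a sketch; the index pattern and field names are assumptions about a typical syslog pipeline, not our actual mapping.

curl -s -H 'Content-Type: application/json' 'http://localhost:9200/syslog-*/_count' -d '
{
  "query": {
    "bool": {
      "must": [
        { "match_phrase": { "message": "Received: Other configuration change" } },
        { "range": { "@timestamp": { "gte": "now-24h" } } }
      ]
    }
  }
}'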

Our servers are directly connected to the Internet, so we first looked for possible short-duration attacks at the TCP layer. UDP traffic is not allowed, which ruled out an attack over UDP ports 80/443. We didn’t notice any sudden increase in incoming TCP, ICMP, or HTTP traffic before the occurrences of the issue.

We also looked at the haproxy log for possible SSL attacks, but we didn’t notice any unusual traffic pattern there either. So we knew that nothing external to the system could explain the problem.

The first interesting find

The next stage of our journey was haproxy itself. We use collectd for collecting system statistics and haproxystats for haproxy statistics. Both tools gather a lot of performance metrics about haproxy and the system. Furthermore, haproxy emits log messages which contain very useful information about what is going on in the server.

haproxy exposes CPU usage per process (we run 10 processes), and we noticed a spike to 100% utilization around the same time Bird received the messages to reset the BGP peering. In the following graph we can see that every haproxy process hit 100% CPU utilization for at least one data point.

haproxy CPU usage

The sudden increase in CPU usage wasn’t always followed by BGP peering resets. In some cases, BFD issues were reported by Bird before those CPU spikes. Nevertheless, we continued to investigate the CPU spikes as they were very suspicious.

The CPU utilization of a process is the sum of its user-level and system-level CPU usage. Thus, we needed to know whether haproxy was spending all this CPU power performing its own tasks (SSL computation, business logic processing, etc.) or asking the system to do something, like dispatching data to various TCP sockets or handling incoming/outgoing connections. The two graphs below suggest that haproxy was spending its CPU cycles at the system level.

haproxy cpu user level

haproxy cpu system level
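
The same user/system split can be cross-checked from the OS side with pidstat, independently of haproxy’s own counters. A sketch, assuming the PIDs of all 10 processes live in /run/hapee-lb.pid as in the tracing script shown later:

# %usr vs %system per haproxy process, sampled every second
pidstat -u -p "$(tr '\n' ',' < /run/hapee-lb.pid | sed 's/,$//')" 1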

This gave us a really good starting point for a more in-depth analysis of what was causing those CPU spikes. We reviewed the haproxy configuration several times and there was nothing suspicious there. The haproxy software hadn’t been upgraded recently, so we excluded a software regression as a possible cause of this behaviour.

We contacted HAProxy Technologies, Inc. for assistance. They asked us to collect more information about sessions and TCP connections, as there was a known bug that could cause a high number of TCP connections in the CLOSE-WAIT state – but, according to them, that specific bug couldn’t cause CPU spikes.
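
A quick way to check whether that CLOSE-WAIT bug was biting us is to count sockets per state with ss; roughly:

ss -s                                   # summary: socket totals per state
ss -tan state close-wait | wc -l        # TCP connections stuck in CLOSE-WAIT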

We also looked at the memory utilization of haproxy and there wasn’t anything suspicious there either. But in all occurrences we saw a sudden increase of free memory: the system freed ~600MB of memory around the same time as those CPU spikes.

Free memory

It wasn’t very clear to us whether those two observations (CPU spikes and the sudden increase of free memory) were the cause or a symptom of our issue. Moreover, the sudden increase of free memory could be related to a garbage collector being invoked by some other service. So, more digging was required to clear the fog from our path.

(s)Tracing the unknown factor

We run many daemons and cron jobs on our servers. In some occurrences of our problem we saw a puppet run happening at the same time, so we decided to look at what was executed on every puppet run.

We set up scripts that ran pidstat against puppet and a few other daemons. Since the issue was happening at random times across a lot of servers, we had to pick a few servers to run those scripts on and wait for the problem to appear.

After a few days of waiting, we had several traces to analyze. Puppet really loves CPU and can easily lock a CPU for 4-5 seconds, but it wasn’t causing our problem. Other daemons were hungry for memory and CPU resources, but not at a level that could explain the sudden increase of free memory.

The HAProxy support department suggested deploying a script which would run strace and dump sessions whenever haproxy CPU usage at the system level went beyond 30%. The script below was deployed on a single server and was manually invoked for all haproxy processes.

#!/bin/bash
#
# hapee_tracing.sh
KILL_FILE="/tmp/kill_hapee_tracing"
BASE_DIR="/var/log/pidstats/"
SOCKET_DIR="/run/lb_engine/"
PROCESS_NUMBER="$1"
PIDS=($(cat /run/hapee-lb.pid))
PID_INDEX=$(($PROCESS_NUMBER-1))
PID=${PIDS[${PID_INDEX}]}

mkdir -p "${BASE_DIR}"
while true; do
    if [ -f "${KILL_FILE}" ]; then
        exit 0
    fi
    timeout 60 pidstat -u -p "${PID}" 1 \
        | stdbuf -i0 -o0 -e0 egrep 'hapee-lb' \
        | stdbuf -i0 -o0 -e0 awk '{print $6}' \
        | while read line; do
            if [ -f "${KILL_FILE}" ]; then
                exit 0
            fi
            system_cpu=$(echo "${line}" | awk -F. '{print $1}')
            if [ "${system_cpu}" -gt 30 ]; then
                echo 'show sess all' | socat ${SOCKET_DIR}process-${PROCESS_NUMBER}.sock stdio > ${BASE_DIR}sessions_$(date +%F:%H:%M:%S)_${PROCESS_NUMBER}_${PID} &
                timeout 5 strace -ttvs200 -p "${PID}" -o ${BASE_DIR}strace_$(date +%F:%H:%M:%S)_${PROCESS_NUMBER}_${PID}
            fi
        done
    PIDS=($(cat /run/hapee-lb.pid))
    PID=${PIDS[${PID_INDEX}]}
done

We also deployed a script to dump the number of connections using the ss tool, and sar was adjusted to capture CPU, memory, and network statistics. All of those tools gathered information every second: since it only takes 1.2 seconds for a BFD session to be detected as down, we had to collect data at that small an interval.
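
The collection loops looked roughly like the sketch below; the exact sar options and output paths are illustrative, not our actual commands.

# CPU (per core), memory, paging, swapping and network statistics, every second
sar -u ALL -P ALL -r -S -B -W -n DEV -o /var/log/pidstats/sar.data 1 &

# socket counters per state, every second
while true; do
    date '+%F %T' >> /var/log/pidstats/ss_summary.log
    ss -s         >> /var/log/pidstats/ss_summary.log
    sleep 1
done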

While we were waiting for the problem to appear on the target machine, we decided to move the Bird daemon to a CPU core that wasn’t used by haproxy. The 10 haproxy processes are pinned to the last 10 CPUs of the system, so we pinned the Bird daemon to CPU 1 and gave it a nice level of -17. We did this to make sure it had enough resources to process BFD messages even while haproxy was spinning at 100% CPU utilization. We also lowered the CPU priority of the puppet agent so it would use fewer CPU resources.
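
The repinning can be done at runtime with taskset and renice; a minimal sketch (the puppet process pattern and the exact nice value for puppet are assumptions):

taskset -pc 1 $(pidof bird)                   # pin the Bird daemon to CPU 1
renice -n -17 -p $(pidof bird)                # give it a -17 nice level
renice -n 10 -p $(pgrep -f 'puppet agent')    # make the puppet agent less CPU-hungry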

Light at the end of the tunnel

The issue appeared on the target server and our tracing tools ended up collecting 150MB of data (very close to 10 million lines to read!). We analysed the pidstat, sar, ss, and strace output and made the following observations together with Willy Tarreau, the author of HAProxy and a Linux kernel developer (note that the high CPU utilization started at 12:41:21 and lasted until 12:41:30):

No change in incoming requests per second prior to the problem:

12:38:00 PM  active/s passive/s    iseg/s    oseg/s
12:41:18 PM   4403.00    836.00  48416.00  66313.00
12:41:19 PM   4115.00    819.00  48401.00  67910.00
12:41:20 PM   1417.00    786.00  43005.00  57608.00
12:41:21 PM   4225.00    824.00  35247.00  49883.00
12:41:22 PM   1198.00    814.00  21580.00  25604.00
12:41:23 PM   3446.00    768.00  24229.00  33893.00
12:41:24 PM   4269.00    773.00  30462.00  46604.00
12:41:25 PM   2259.00    821.00  24347.00  33772.00
12:41:26 PM    994.06    880.20  13207.92  15813.86
12:41:27 PM   4878.00    802.00  32787.00  50708.00
12:41:28 PM   2988.00    816.00  36008.00  53809.00
12:41:29 PM   3865.00    883.00  34822.00  53514.00

haproxy stopped for ~500 milliseconds in the middle of some operations, indicating that it was interrupted:

12:41:21.913124 read(932, "\25\3\3\0\32", 5) = 5
12:41:21.913140 read(932, "\0\0\0\0\0\0\0\1\244=\241\234Jw\316\370\330\246\276\220N\225\315\2333w", 26) = 26
12:41:21.913171 sendto(55, "<a href=\"/general.sv.html?label=gen173nr-1FCAEoggJCAlhYSDNiBW5vcmVmaAKIAQGYAS64
12:41:21.913199 sendto(55, ";form.append(input).append(check);}});}</script>\n<script src=\"https://r-e
12:41:22.413476 recvfrom(1143, "T[16],r.MB,r.MN,null),T[17],r.ME(T[18],r.MB,r.MN,null),T[12],r.ME(T[19
12:41:22.413512 sendto(55, "T[16], r.MB, r.MN, null), T[17], r.ME(T[18], r.MB, r.MN, null), T[12], r.ME(T[19], r
12:41:22.413539 connect(2665, {sa_family=AF_INET, sin_port=htons(80), sin_addr=inet_addr("10.198.156.38")}, 16)

A high rate of page-freeing / cache-freeing activity; the system was freeing ~600MB of RAM per second for a period of ~6 seconds:

12:38:00 PM    frmpg/s   bufpg/s    campg/s
12:41:16 PM   -5423.00      0.00    -754.00
12:41:17 PM   -1784.00      1.00    1504.00
12:41:18 PM   -1868.00      1.00     337.00
12:41:19 PM   -1110.00      2.00     416.00
12:41:20 PM   16308.00    -27.00  -10383.00
12:41:21 PM   77274.00    -56.00  -71772.00
12:41:22 PM  154106.00   -147.00 -121659.00
12:41:23 PM  121624.00   -253.00  -93271.00
12:41:24 PM  109223.00   -238.00  -84747.00
12:41:25 PM  140841.00   -384.00 -116015.00
12:41:26 PM  142842.57   -573.27 -121333.66
12:41:27 PM   83102.00   -263.00  -59726.00
12:41:28 PM  118361.00  -1185.00  -80489.00
12:41:29 PM  168908.00   -558.00 -103072.00

The system had 92% of its memory already allocated prior to the issue and started freeing it a second later:

12:38:00 PM kbmemfree kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
12:41:13 PM   7417424  91456184     92.50    238480  60751344   9223012      9.23  34595324  28501704     72000
12:41:14 PM   7409968  91463640     92.51    238488  60752448   9223012      9.23  34595720  28501756     72956
12:41:15 PM   7382684  91490924     92.53    238488  60762264   9236604      9.24  34620324  28501852     82740
12:41:16 PM   7360992  91512616     92.56    238488  60759248   9255500      9.26  34638704  28501984     79724
12:41:17 PM   7353856  91519752     92.56    238492  60765264   9255500      9.26  34640664  28502168     85748
12:41:18 PM   7346384  91527224     92.57    238496  60766612   9252824      9.26  34642028  28502248     72148
12:41:19 PM   7341944  91531664     92.57    238504  60768276   9254356      9.26  34644128  28502328     73528
12:41:20 PM   7407176  91466432     92.51    238396  60726744   9252828      9.26  34593160  28509368     79052
12:41:21 PM   7716272  91157336     92.20    238172  60439656   9258516      9.27  34253456  28564120     80248
12:41:22 PM   8332696  90540912     91.57    237584  59953020   9269996      9.28  33694136  28646036     81072
12:41:23 PM   8819192  90054416     91.08    236572  59579936   9286824      9.29  33269068  28698896     84388
12:41:24 PM   9256084  89617524     90.64    235620  59240948   9290336      9.30  32890520  28744820     94256
12:41:25 PM   9819448  89054160     90.07    234084  58776888  10491136     10.50  32403472  28796428    100308
12:41:26 PM  10396532  88477076     89.49    231768  58286700   9364872      9.37  31895556  28838756    103444
12:41:27 PM  10728940  88144668     89.15    230716  58047796   9387052      9.39  31647348  28850604    121412
12:41:28 PM  11202384  87671224     88.67    225976  57725840   9358996      9.37  31286796  28871824    127252
12:41:29 PM  11878016  86995592     87.99    223744  57313552   9360844      9.37  30955468  28800072    138596
12:41:30 PM  12599780  86273828     87.26    221836  56961620   9350056      9.36  30688424  28713648    153488
12:41:31 PM  12638924  86234684     87.22    221840  56983328   9345268      9.35  30706596  28714976    175972
12:41:32 PM  12606460  86267148     87.25    221840  57016040   9350112      9.36  30737556  28721456    208684
12:41:33 PM  12591284  86282324     87.27    221840  57034704   9345904      9.35  30751396  28724348    217184

CPU saturation at system level while the system was freeing memory:

12:38:00 PM  CPU     %user     %nice   %system   %iowait    %steal     %idle
12:41:20 PM    0     11.22      0.00      8.16      0.00      0.00     80.61
12:41:20 PM    1     14.29      0.00      6.12      0.00      0.00     79.59
12:41:20 PM    2     18.00      0.00     11.00      0.00      0.00     71.00
12:41:20 PM    3     16.00      0.00     12.00      0.00      0.00     72.00
12:41:20 PM    4     30.61      0.00     15.31      0.00      0.00     54.08
12:41:20 PM    5     11.34      0.00      3.09      0.00      0.00     85.57
12:41:20 PM    6     28.57      0.00      9.18      0.00      0.00     62.24
12:41:20 PM    7     16.16      0.00     12.12      0.00      0.00     71.72
12:41:20 PM    8     20.83      0.00      6.25      0.00      0.00     72.92
12:41:20 PM    9     19.39      0.00      3.06      0.00      0.00     77.55
12:41:20 PM   10     14.29      0.00     12.24      0.00      0.00     73.47
12:41:20 PM   11     16.16      0.00      4.04      0.00      0.00     79.80
12:41:21 PM  all     13.29      0.00     41.88      0.00      0.00     44.83
12:41:21 PM    0      6.06      0.00     17.17      0.00      0.00     76.77
12:41:21 PM    1     14.14      0.00     22.22      0.00      0.00     63.64
12:41:21 PM    2     14.29      0.00     45.92      0.00      0.00     39.80
12:41:21 PM    3     18.18      0.00     46.46      0.00      0.00     35.35
12:41:21 PM    4     10.00      0.00     51.00      0.00      0.00     39.00
12:41:21 PM    5     14.14      0.00     46.46      0.00      0.00     39.39
12:41:21 PM    6     20.00      0.00     41.00      0.00      0.00     39.00
12:41:21 PM    7     15.31      0.00     38.78      0.00      0.00     45.92
12:41:21 PM    8     14.14      0.00     45.45      0.00      0.00     40.40
12:41:21 PM    9     11.00      0.00     47.00      0.00      0.00     42.00
12:41:21 PM   10     10.00      0.00     52.00      0.00      0.00     38.00
12:41:21 PM   11     11.11      0.00     50.51      0.00      0.00     38.38
12:41:22 PM  all      9.58      0.00     84.18      0.00      0.00      6.24
12:41:22 PM    0      4.08      0.00     70.41      1.02      0.00     24.49
12:41:22 PM    1      2.02      0.00     62.63      0.00      0.00     35.35
12:41:22 PM    2     16.00      0.00     77.00      0.00      0.00      7.00
12:41:22 PM    3     14.00      0.00     86.00      0.00      0.00      0.00
12:41:22 PM    4      7.00      0.00     93.00      0.00      0.00      0.00
12:41:22 PM    5      3.00      0.00     97.00      0.00      0.00      0.00
12:41:22 PM    6     12.00      0.00     85.00      0.00      0.00      3.00
12:41:22 PM    7     15.00      0.00     83.00      0.00      0.00      2.00
12:41:22 PM    8     13.86      0.00     84.16      0.00      0.00      1.98
12:41:22 PM    9      9.09      0.00     90.91      0.00      0.00      0.00
12:41:22 PM   10     10.00      0.00     90.00      0.00      0.00      0.00
12:41:22 PM   11      9.00      0.00     91.00      0.00      0.00      0.00
12:41:23 PM  all     17.73      0.00     75.75      0.00      0.00      6.52
12:41:23 PM    0     24.00      0.00     67.00      0.00      0.00      9.00
12:41:23 PM    1      5.05      0.00     55.56      0.00      0.00     39.39
12:41:23 PM    2     14.14      0.00     80.81      0.00      0.00      5.05
12:41:23 PM    3     26.73      0.00     73.27      0.00      0.00      0.00
12:41:23 PM    4     14.00      0.00     86.00      0.00      0.00      0.00
12:41:23 PM    5     24.00      0.00     75.00      0.00      0.00      1.00
12:41:23 PM    6     16.00      0.00     76.00      0.00      0.00      8.00
12:41:23 PM    7     13.27      0.00     79.59      0.00      0.00      7.14
12:41:23 PM    8     18.00      0.00     75.00      0.00      0.00      7.00
12:41:23 PM    9     19.61      0.00     79.41      0.00      0.00      0.98
12:41:23 PM   10     16.00      0.00     83.00      0.00      0.00      1.00
12:41:23 PM   11     21.00      0.00     78.00      0.00      0.00      1.00
12:41:24 PM  all     16.99      0.00     70.14      0.08      0.00     12.78
12:41:24 PM    0     11.34      0.00     49.48      0.00      0.00     39.18
12:41:24 PM    1     13.13      0.00     45.45      0.00      0.00     41.41
12:41:24 PM    2     19.00      0.00     66.00      0.00      0.00     15.00
12:41:24 PM    3     20.41      0.00     71.43      0.00      0.00      8.16
12:41:24 PM    4     19.00      0.00     79.00      0.00      0.00      2.00
12:41:24 PM    5     17.17      0.00     79.80      0.00      0.00      3.03
12:41:24 PM    6     21.21      0.00     67.68      0.00      0.00     11.11
12:41:24 PM    7     20.20      0.00     67.68      0.00      0.00     12.12
12:41:24 PM    8     19.39      0.00     63.27      0.00      0.00     17.35
12:41:24 PM    9      7.22      0.00     90.72      0.00      0.00      2.06
12:41:24 PM   10     14.14      0.00     83.84      0.00      0.00      2.02
12:41:24 PM   11     20.00      0.00     79.00      0.00      0.00      1.00
12:41:26 PM    9     21.78      0.00     78.22      0.00      0.00      0.00

Some low-rate activity for page swapping:

12:39:40 PM  pswpin/s pswpout/s
12:39:43 PM      0.00      0.00
12:39:44 PM      0.00     78.22
12:39:45 PM      0.00     75.00
12:39:46 PM      0.00      0.00
(...)
12:41:20 PM      0.00      9.00
12:41:21 PM      0.00     43.00
12:41:22 PM      0.00     84.00
12:41:23 PM      0.00     70.00
12:41:24 PM      0.00     53.00
12:41:25 PM      0.00     74.00

haproxy was writing data, which is odd: it shouldn’t be able to, considering that when the service starts it closes all file descriptors for files that could cause I/O operations to the filesystem:

12:41:21 PM   UID       PID   kB_rd/s   kB_wr/s kB_ccwr/s  Command
12:41:22 PM   498     28849      0.00      0.00      0.00  hapee-lb
12:41:22 PM   498     28892      0.00   5964.00      0.00  hapee-lb
12:41:22 PM   498     28894      0.00      0.00      0.00  hapee-lb
12:41:22 PM   498     28895      0.00      0.00      0.00  hapee-lb
12:41:22 PM   498     28896      0.00      0.00      0.00  hapee-lb
12:41:22 PM   498     28897      0.00   6276.00      0.00  hapee-lb
12:41:22 PM   498     28899      0.00     20.00      0.00  hapee-lb
12:41:22 PM   498     28901      0.00      0.00      0.00  hapee-lb
12:41:22 PM   498     28902      0.00      0.00      0.00  hapee-lb
12:41:22 PM   498     28904      0.00      0.00      0.00  hapee-lb
12:41:22 PM   498     28905      0.00      0.00      0.00  hapee-lb

All the haproxy processes started to incur minor page faults; they had touched freed memory areas for the first time since those areas were last reclaimed:

12:41:20 PM   UID       PID  minflt/s  majflt/s     VSZ     RSS   %MEM  Command
12:41:21 PM     0      5206      0.00      0.00   46340    1824   0.00  hapee-lb-system
12:41:21 PM   498     28849      0.00      0.00  177540   85296   0.09  hapee-lb
12:41:21 PM   498     28892    102.00      0.00  213204  121164   0.12  hapee-lb
12:41:21 PM   498     28894    179.00      0.00  216592  124324   0.13  hapee-lb
12:41:21 PM   498     28895    116.00      0.00  213360  122676   0.12  hapee-lb
12:41:21 PM   498     28896    153.00      0.00  211840  122544   0.12  hapee-lb
12:41:21 PM   498     28897    106.00      0.00  210236  121816   0.12  hapee-lb
12:41:21 PM   498     28899     55.00      0.00  210196  118000   0.12  hapee-lb
12:41:21 PM   498     28901    140.00      0.00  212192  120288   0.12  hapee-lb
12:41:21 PM   498     28902    125.00      0.00  214616  123212   0.12  hapee-lb
12:41:21 PM   498     28904     81.00      0.00  215988  117196   0.12  hapee-lb
12:41:21 PM   498     28905    110.00      0.00  211984  112692   0.11  hapee-lb

The memory usage of the haproxy processes remained stable and didn’t change one second later, showing that haproxy was just touching memory that had been aggressively reclaimed by the system:

12:41:21 PM   UID       PID  minflt/s  majflt/s     VSZ     RSS   %MEM  Command
12:41:22 PM     0      5206      0.00      0.00   46340    1824   0.00  hapee-lb-system
12:41:22 PM   498     28849      0.00      0.00  177540   85296   0.09  hapee-lb
12:41:22 PM   498     28892    284.00      0.00  213356  121660   0.12  hapee-lb
12:41:22 PM   498     28894    231.00      0.00  217144  124900   0.13  hapee-lb
12:41:22 PM   498     28895    231.00      0.00  213936  122992   0.12  hapee-lb
12:41:22 PM   498     28896      8.00      0.00  211840  122544   0.12  hapee-lb
12:41:22 PM   498     28897    223.00      0.00  210380  122132   0.12  hapee-lb
12:41:22 PM   498     28899    311.00      0.00  212492  118752   0.12  hapee-lb
12:41:22 PM   498     28901    223.00      0.00  212460  120640   0.12  hapee-lb
12:41:22 PM   498     28902    214.00      0.00  214616  123516   0.12  hapee-lb
12:41:22 PM   498     28904    219.00      0.00  215988  117404   0.12  hapee-lb
12:41:22 PM   498     28905      2.00      0.00  211984  112692   0.11  hapee-lb

Willy Tarreau also inspected the session information that was dumped from haproxy memory and didn’t find anything unusual. He finished his investigation with the following:

  1. virtual machines using memory ballooning to steal memory from the processes and assign it to other VMs. But from what I remember you don't run on VMs (which tends to be confirmed by the fact that %steal is always 0)
  2. batched log rotation and uploading. I used to see a case where logs were uploaded via an HTTP POST using curl which would read the entire file in memory before starting to send, that would completely flush the cache and force the machine to swap, resulting in random pauses between syscalls like above, and even packet losses due to shortage of TCP buffers.

Given the huge amount of cache thrashing we're seeing (600 MB/s), I tend to think we could be witnessing something like this. The fact that haproxy magically pauses between syscalls like this can be explained by the fact that it touches unmapped memory areas and that these ones take time to be allocated or worse, swapped in. And given that we're doing this from userspace without any syscall but consecutive to a page fault instead, it's accounted as user CPU time.

I also imagined that one process could occasionally be issuing an fsync() (after a log rotation, for example), paralyzing everything by forcing huge amounts of dirty blocks to the disks; that didn’t seem to be the case, and there wasn’t ever any %iowait in the sar reports, implying that we weren’t facing a situation where a parasitic load was bogging us down in parallel.

Another point fueling the theory of memory shortage is sar’s output (again), showing that memory was almost exhausted (92% used, including the cache) and that it started getting better at the exact same second the incident happened.

To sum up: a memory shortage led to a sudden, high-rate freeing of memory which locked all CPUs for ~8 seconds. We now knew that the high CPU usage from haproxy was a symptom and not the cause.

Finding the memory eater(s)

What triggered our system to free memory at such a high rate (600MB/s), and why was this so painful for the system? Why did the kernel use so much memory (~92%) for caches while active memory was always below ~8GB? There were many questions to answer, which brought us back to tracing mode.

Willy suggested issuing echo 1 > /proc/sys/vm/drop_caches upon log rotation, which we did on all servers. We also issued echo 2 > /proc/sys/vm/drop_caches once in two DLB groups. Both of these actions calmed our issue down, but only for a short period of time.
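
Hooked into logrotate, that suggestion looks roughly like the snippet below; the log path and rotation schedule are illustrative, not our actual configuration.

cat > /etc/logrotate.d/hapee-lb <<'EOF'
/var/log/hapee-lb.log {
    hourly
    rotate 24
    postrotate
        /bin/sync
        echo 1 > /proc/sys/vm/drop_caches
    endscript
}
EOF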

From the many processes running on our servers, we picked the five with the highest resident memory (RSZ) and started monitoring them very closely with pidstat. We also started monitoring memory activity and noticed a high number of dentry objects in the cache:

Active / Total Objects (% used)        : 388963608 / 415847504 (93.5%)
Active / Total Slabs (% used)          : 19781213 / 19781213 (100.0%)
Active / Total Caches (% used)         : 69 / 101 (68.3%)
Active / Total Size (% used)           : 73098890.88K / 78163097.48K (93.5%)
Minimum / Average / Maximum Object : 0.01K / 0.19K / 15.88K


OBJS      ACTIVE     USE  OBJ SIZE  SLABS     OBJ/SLAB CACHE SIZE  NAME
414578178 387795876   0%  0.19K     19741818  21       78967272K   dentry
244998    244998    100%  0.10K     6282      39       25128K      buffer_head
160344    158020     98%  0.64K     6681      24       106896K     proc_inode_cache
158781    149401     94%  0.19K     7561      21       30244K      kmalloc-192
119744    94951      79%  0.06K     1871      64       7484K       kmalloc-64
59616     48444      81%  0.25K     1863      32       14904K      kmalloc-256

atop was also reporting ~100% of SLAB memory as reclaimable memory:

MEM | tot        94.3G |  free        9.2G | cache   5.4G | dirty  18.1M  | buff   15.1M | slab   75.7G |  slrec  75.5G | shmem   4.1G | shrss   0.0M  | shswp   0.0M |

The output of the tracing tools we had put in production didn’t provide many useful indicators about which process(es) could be causing such high memory consumption for caches. The haproxy log (which is rotated every hour) had ~3.5GB of data, and dropping page caches upon log rotation had excluded rsyslogd from the investigation as well.

We started to read documentation about memory management and realized that our system might not be tuned correctly, considering that our servers have 96GB of total memory, only ~8GB of active memory, and the following memory settings in place:

  • vm.vfs_cache_pressure set at 100
  • vm.dirty_background_ratio set at 3
  • vm.dirty_ratio set at 10

So, the system had a lot of free memory to use for caches – which it did – and it wasn’t aggressively reclaiming memory from those caches, even when dentry objects alone occupied 80GB. That left the system with around 800MB of free memory in some cases.

We changed vm.vfs_cache_pressure to 200 and freed the reclaimable slab objects (which include dentries and inodes) by issuing echo 2 > /proc/sys/vm/drop_caches. After two days we saw more free memory available (~7GB), and we then increased vm.vfs_cache_pressure to 1000. That made the system reclaim memory more aggressively – and the issue was almost entirely resolved.
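
For reference, the runtime changes boil down to a couple of commands; the sysctl.d path used to persist the setting is illustrative.

sysctl -w vm.vfs_cache_pressure=1000        # reclaim dentries/inodes much more aggressively
echo 2 > /proc/sys/vm/drop_caches           # free reclaimable slab objects (dentries and inodes) once
echo 'vm.vfs_cache_pressure = 1000' > /etc/sysctl.d/90-vfs-cache-pressure.conf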

We continued our investigation in the area of dentry caches and found this bug report for the curl tool. The bug report states that when curl makes an HTTPS request, it issues many access system calls for files that don’t exist, have random names, and are unique per invocation:

me at node1 in ~
strace -fc -e trace=access curl 'https://foobar.booking.com/' > /dev/null
Process 2390 attached
% time         seconds  usecs/call         calls        errors syscall
------ ----------- ----------- --------- --------- ----------------
100.00        0.003180               0          6647          6643 access
------ ----------- ----------- --------- --------- ----------------
100.00        0.003180                      6647          6643 total


me at node1 in ~
strace -f -e trace=access curl 'https://foobar.booking.com/' 2>&1 |head -10
(...)
access("/etc/pki/nssdb", W_OK)              = -1 EACCES (Permission denied)
access("/home/me/.pki/nssdb/.3219811409_dOeSnotExist_.db", F_OK) = -1 ENOENT (No such file or directory)
access("/home/me/.pki/nssdb/.3219811410_dOeSnotExist_.db", F_OK) = -1 ENOENT (No such file or directory)
access("/home/me/.pki/nssdb/.3219811411_dOeSnotExist_.db", F_OK) = -1 ENOENT (No such file or directory)
access("/home/me/.pki/nssdb/.3219811412_dOeSnotExist_.db", F_OK) = -1 ENOENT (No such file or directory)
access("/home/me/.pki/nssdb/.3219811413_dOeSnotExist_.db", F_OK) = -1 ENOENT (No such file or directory)

We knew that we use curl in the check_cmd for each service check in the anycast-healthchecker daemon, and that each check runs every 10 seconds for ~10 services. So, we fired up a one-liner to plot the number of dentry objects in the cache per second:

while (true); do
    echo "$graphite_name_space.$(hostname | sed -e 's/\./_/g').dentry $(sudo slabtop -o | egrep 'dentry' | awk '{print $1}') $(date '+%s')" | nc 127.0.0.1 3002
    sleep 0.9
done

In the following graph we can see that the number of dentry objects was increasing at a high and constant rate:

dentries

Bingo! We had found the tool that was polluting the dentry cache. Finally, we could see our destination; time to prepare the cake.

The fix was very easy – just setting the environment variable NSS_SDB_USE_CACHE to YES was enough:

me at node1 in ~
NSS_SDB_USE_CACHE=YES strace -fc -e trace=access curl 'https://foobar.booking.com/' > /dev/null
Process 14247 attached
% time         seconds  usecs/call         calls        errors syscall
------ ----------- ----------- --------- --------- ----------------
100.00        0.000009               0            32            30 access
------ ----------- ----------- --------- --------- ----------------
100.00        0.000009                        32            30 total

We adjusted the check_cmd for each service check in anycast-healthchecker that used the curl tool, and we also modified a cron job that was running curl many times against an HTTPS site. In the following graph we can clearly see that the pollution stopped, as the number of dentry objects in the cache was no longer increasing:

fix curl
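
The change itself boils down to prefixing every curl invocation with the variable; a sketch (the URL is the placeholder from the strace examples above, and the exact curl flags are assumptions, not our real check_cmd):

# in each anycast-healthchecker check_cmd and in the cron job:
NSS_SDB_USE_CACHE=YES curl -fsS -o /dev/null 'https://foobar.booking.com/'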

Conclusions

  • Your sampling interval for statistics may hide problems. In our case we collect metrics every 10 seconds, and we were (eventually) able to see the issue clearly. If you collect metrics every minute on systems that receive ~50K requests per second, you won’t be able to see problems that last less than a minute. In other words, you fly blind. Choose the metrics you collect and their intervals very wisely.
  • Abnormal system behaviour must be investigated and its root cause found. This secures the stability of your system.
  • Reshuffling TCP connections when a single member disappears from and reappears in an ECMP group didn’t impact our traffic as badly as we initially thought it would.

I would like to thank Marcin Deranek, Carlo Rengo, Willy Tarreau and Ralf Ertzinger for their support in this journey.

