
Possible memory leak in 4.3.6


Hello,

I have an environment with 2 search heads and 2 indexers. There are around 70 forwarders, which send roughly 50 MB of data a day.

lsof -i :port | wc -l # shows established connections
70
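
For reference, a variant that counts only established TCP connections and skips the lsof header line; the port number here is only a placeholder for the actual receiving port, and older lsof builds may not support the -s state filter:

lsof -nP -iTCP:9997 -sTCP:ESTABLISHED | tail -n +2 | wc -l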

On one search head there are 6 real-time searches, which can be seen in the ps output:

ps -Lef
(...) splunkd search --id=rt_1373011410.1218 --maxbuckets=0
(...) splunkd search --id=rt_1373011410.1218 --maxbuckets=0
(...) splunkd search --id=rt_1373011410.1219 --maxbuckets=0
(...) splunkd search --id=rt_1373011410.1219 --maxbuckets=0
(...) splunkd search --id=rt_1373011410.1218 --maxbuckets=0
(...) splunkd search --id=rt_1373011410.1219 --maxbuckets=0
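
Since ps -Lef prints one line per thread, those six lines may map to fewer distinct searches; a quick sketch (not Splunk-specific) to count the distinct real-time search IDs instead of their threads:

ps -ef | grep -v grep | grep -o 'rt_[0-9.]*' | sort -u | wc -l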

However, I see an increasing number of splunkd threads, currently at 39:

ps -Lef | grep -v grep | grep "splunkd -p 8089" | wc -l
39
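
To correlate this with the memory graph, I could log the thread count and total resident memory of splunkd over time with something like the following sketch (the interval and output path are arbitrary):

while true; do
    threads=$(ps -Lef | grep -v grep | grep -c "splunkd -p 8089")
    rss=$(ps -o rss= -C splunkd | awk '{sum+=$1} END {print sum " KB"}')
    echo "$(date '+%F %T') threads=$threads rss=$rss" >> /tmp/splunkd_mem.log
    sleep 300   # every 5 minutes; adjust as needed
done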

Furthermore, there are a couple of threads for mrsparkle (Splunk Web):

python -O /opt/splunk/lib/python2.7/site-packages/splunk/appserver/mrsparkle/root.py restart
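To see whether splunkd or the mrsparkle python process is the one actually growing, I can watch their resident memory, e.g. (a rough sketch using procps ps; RSS is in KB):

ps -eo pid,rss,etime,args | grep -E 'splunkd|mrsparkle' | grep -v grep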

The problem is that Splunk gradually uses up all of the memory. The Mem Used Percentage graph can be seen here:

[image: Mem Used Percentage graph]

(Edit: for your information, the indexers have 34 GB of memory each.)

You can see the manual restarts, as well as the forced ones when memory usage reaches 100% and splunkd is killed by the kernel's OOM killer.
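
The OOM kills can be confirmed from the kernel log; this is just the usual check, and the log path is distro-specific:

dmesg | grep -i 'killed process'
grep -i 'out of memory' /var/log/messages    # path varies by distribution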

All Splunk instances have been updated to 4.3.6 and have the Deployment Monitor app disabled.

Is there something else I can do to find out what is causing the memory leak?

