We are currently using Event 45 to calculate the average load (boot) time for Outlook, per the Microsoft KB and its sample data.
Here is the search we are running on Splunk 6:
index=win_desk EventCode=45 sourcetype="WinEventLog:Application" SourceName=Outlook
| rex field=_raw "Boot Time \(Milliseconds\)\: (?<BootTime_ms>\d+)" max_match=0
| streamstats sum(BootTime_ms) as Evt_sum_BootTime window=1
| eval Evt_BootTime_sec = Evt_sum_BootTime / 1000
| bucket _time span=1d
| stats avg(Evt_BootTime_sec) by _time
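For reference, this is the computation we expect the pipeline to perform: convert each event's extracted boot time from milliseconds to seconds, bucket the events by day, then average within each day. A minimal Python sketch of that logic, using hypothetical boot-time values chosen to reproduce the first two daily averages shown below:

```python
from collections import defaultdict
from datetime import datetime

# Synthetic (hypothetical) events: (timestamp, boot time in milliseconds),
# standing in for the "Boot Time (Milliseconds)" values that rex extracts.
events = [
    (datetime(2013, 10, 1, 8, 0), 24000),
    (datetime(2013, 10, 1, 9, 30), 25668),
    (datetime(2013, 10, 2, 8, 15), 7831),
]

# Mirror the SPL pipeline: ms -> s, bucket by day (span=1d), average per bucket.
buckets = defaultdict(list)
for ts, boot_ms in events:
    buckets[ts.date()].append(boot_ms / 1000)

daily_avg = {day: sum(secs) / len(secs) for day, secs in buckets.items()}
# e.g. 2013-10-01 -> (24.000 + 25.668) / 2 = 24.834 seconds
```

The per-day average should not change when the overall search window grows, which is why the 30-day results below look wrong.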
When I run this search over a five-day range (10/1/13 12:00:00.000 AM to 10/6/13 12:00:00.000 AM), I get the following results:
_time         avg(Evt_BootTime_sec)
2013-10-01    24.834010
2013-10-02    7.831655
2013-10-03    7.796068
2013-10-04    4.842439
2013-10-05    4.592000
So far it all looks good! The problem appears when we widen the time frame to 30 days: the results go crazy.
10/1/13 12:00:00.000 AM to 11/1/13 12:00:00.000 AM
_time         avg(Evt_BootTime_sec)
2013-10-01    772.931010
2013-10-02    755.928655
2013-10-03    755.893068
2013-10-04    752.939439
2013-10-05    752.689000
2013-10-06    756.884800
2013-10-07    719.525329
2013-10-08    687.182311
As you can see in the data, 10/1/2013 has jumped from about 24 seconds to almost 13 minutes just by changing the date range. The goal is to chart our progress month by month, showing a steady downward trend.
What am I doing wrong here?