Hello,
I am trying to set up WMI inputs on a universal forwarder, but I am only getting WMI:CPUTime data; WMI:WinEventLog:Security is not working. I tried following http://docs.splunk.com/Documentation/Splunk/6.2.4/Data/MonitorWMIdata, but that documentation assumes an all-Windows deployment, not Linux.
My setup:
Search head and main UI on Linux
2 distributed indexers also on Linux
Servers to monitor are on Windows
My wmi.conf file is on a Windows server that has universal forwarder installed. (All other logs being sent from this server are coming in)
[WMI:CPUTime]
interval = 10
disabled = 0
server = localhost
wql = SELECT PercentProcessorTime, PercentUserTime FROM Win32_PerfFormattedData_PerfOS_Processor WHERE Name = "_Total"
[WMI:WinEventLog:Security]
interval = 10
disabled = 0
server = localhost
event_log_file = Security
Do I need to set something else up for security to work? What can I check to verify the event_log_file is being created? Is there a way I can use the wql parameter with security instead, since that works for the CPUTime?
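One thing worth checking first: on a universal forwarder, Windows event logs are normally collected with a `WinEventLog` stanza in inputs.conf rather than through wmi.conf, and WMI reads of the Security log require the splunkd service account to have sufficient (typically local administrator) rights. A minimal inputs.conf sketch, assuming local collection on the forwarder:

```
[WinEventLog://Security]
disabled = 0
```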
Thank you
How to forward WMI:WinEventLog:Security data from a Windows universal forwarder to a Linux search head?
Do Splunk and Elastic MapReduce work together?
I have a few indexes with around 2.5 billion events each. Unfortunately, we don't have a lot of CPU to sort through this massive data and make it meaningful in a dashboard. We're currently in the process of setting up a summary index, but the requirements/fields can change at any time, which means we'd have to re-summarize that data.
So my question is, can we use Amazon EMR as a temporary boost in horsepower to Map and Reduce this data back into the summary index? How difficult would this be to do?
Throttle alerts based on field value
Is it possible to throttle alerts by field value?
For example: I want to alert when the value of field "action" is "delete" and throttle any subsequent results for 10 minutes unless the value of the field "username" changes.
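Splunk's saved-search throttling can key on field values. A hedged savedsearches.conf sketch, assuming a per-result alert and a stanza name of my choosing:

```
[delete_action_alert]
alert.suppress = 1
alert.suppress.period = 10m
alert.suppress.fields = username
```

Because suppression is tracked per `username` value, a result carrying a new username is not throttled, which approximates "throttle unless username changes".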
how to format date and time in searches
In the logs pulled into Splunk, the time is recorded as datetime="2015-08-13 01:43:38". So when I run a search and go to the Statistics tab, the date and time are displayed year first, then month, day, and time. How can I format the field so that it is in the following format:
MM-DD-YYYY 00:00 AM or PM (08-13-2015 01:43 AM)
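A minimal SPL sketch, assuming the extracted field is literally named `datetime` as in the sample:

```
... | eval datetime = strftime(strptime(datetime, "%Y-%m-%d %H:%M:%S"), "%m-%d-%Y %I:%M %p")
```

`%I` gives the 12-hour clock and `%p` appends the AM/PM marker.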
What kind of visualization or dashboard should I use to represent my data?
The final data I have from Splunk is a CSV file with about 180 rows (products) and columns that record the change in sales.
Product mean variance
Apples increase no change
Oranges decrease decrease
bananas no change no change
water increase increase
soda increase increase
I want to create a dashboard to get a sense of the above table. What is the best visual representation for data of this nature?
Any leads are appreciated!
Thanks
How to customize the UI of an app?
Hi,
I've created a sample application using Splunk Web.
I would like to customize the top-level navigation bar (AccountBar): "Administrator | Messages | ...".
My investigation led me to Master.html, but that is in the core of Splunk, and I'd like to do this at the App level.
Can you please clarify the proper way to do this? It would also be helpful to have an overall view of the App architecture:
- How to extend views.
- How to replace modules.
- How to extend a module with a custom logic.
After defining an automatic lookup in Splunk Web on the search head, why is the lookup not working at all?
Hi
I have separate machines for a Search Head and Indexer. In Splunk Web on the Search Head, I went through the different steps as shown in the Splunk tutorial to define automatic lookup based on a single lookup table uploaded as .csv file.
For example, let's assume I have city_code and city_name in the csv file.
In my events for different sourcetypes, I have the city_code field (named in different ways depending on the sourcetype). All I need is for Splunk to look for this field "city_code" and then output the field "city_name" in the matching events.
I only did the config on Search Head as my web interface is disabled on the Indexer.
It's not working at all. Are there manual steps I need to follow, like editing the transforms.conf file by hand?
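For comparison, this is roughly what Splunk Web writes on the search head when you define an automatic lookup; the lookup name, file name, and sourcetype here are placeholders. In transforms.conf:

```
[city_lookup]
filename = city_codes.csv
```

and in props.conf, one line per sourcetype, mapping that sourcetype's field name onto the lookup's column:

```
[your_sourcetype]
LOOKUP-city = city_lookup city_code AS your_city_code_field OUTPUT city_name
```

Since automatic lookups run at search time, defining them only on the search head should be enough; a common gotcha is permissions (the lookup table file and definition must be shared beyond Private for other users and apps to see them).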
-Olavo
How to find the difference between timestamps in a number format that isn't based on dates?
Hi all,
I'm trying to calculate the difference between two dates. My search looks as follows (forgive the messiness, I know it seems a bit redundant):
| eval it = strptime(Load_Time, "%Y/%m/%d")
| eval ot = strptime(Time, "%Y/%m/%d %I:%M:%S")
| eval it2 = strftime(it, "%Y/%m/%d")
| eval ot2 = strftime(ot, "%Y/%m/%d")
| eval it3 = strptime(it2, "%Y/%m/%d")
| eval ot3 = strptime(ot2, "%Y/%m/%d")
| eval TimeDiff= ot3-it3
| eval TimeDiff2=strftime(TimeDiff, "%j")
| dedup it2, MTMS
| dedup MTMS, Time
| sort -Load_Time
| stats list(LIC) as LIC count list(MTMS) AS MTMS , list(it2) AS LoadTime, list(ot2) As EventDate, list(TimeDiff2) AS Time_Difference ,sum(TimeDiff2) AS Sum, by Bundle
| eval Machine_Months=Sum/30.4
| sort -Bundle
The problem is that the result (the difference) is in epoch seconds, which I can't use for the calculations that follow (dividing the sum of all differences by 30.4). If I use strftime with %j to convert it, I can only go up to 365 days, which isn't big enough, since some of these differences go beyond that. And for dates that are the same, say 2015/07/09 and 2015/07/09, I get 365 instead of 0.
So my overall goal is to get a difference between these dates in a number format that isn't based on dates (so 0 to infinity NOT 0 to 365 or 0 to 31 etc...)
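Since strptime returns epoch seconds, the difference is already a plain number of seconds, so dividing by 86400 gives a day count with no 0-to-365 wraparound. A sketch reusing the field names from the search above:

```
| eval TimeDiff2 = round((ot3 - it3) / 86400)
```

This yields 0 for identical dates and grows without bound, so the later `sum(TimeDiff2)/30.4` works directly.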
Thank you!
XML tag extraction
I have a data source that reads in events in XML format. Could someone please help me build a props.conf that will extract all fields and show the events in tree view? Sample event below:
Fri Aug 07 13:42:37 EDT 2015 name="QUEUE_msg_received" event_id="ID:414d51204d514942513032202020202055bdd7d620016441" msg_dest="QA.EA.ELOG.BUSINESSEVENT1" msg_body="<?xml version="1.0" encoding="UTF-8"?><v1:BusinessEventRequest xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/" xmlns:v1="http://schemas.humana.com/Infrastructure/Utility/Logging/BusinessEventRequest/V1.1"><v1:BusinessEvent><v1:BusinessEventMetaData><v1:BusinessEventTypeCode>BUSINESS_EVENT</v1:BusinessEventTypeCode><v1:BusinessEventDateTime>2015-08-07T01:43:47Z</v1:BusinessEventDateTime></v1:BusinessEventMetaData><v1:SourceApplicationInformation><v1:EAPMId>66666</v1:EAPMId><v1:HostMachineName>MQIBQ01</v1:HostMachineName><v1:HostEnvironmentName>QA</v1:HostEnvironmentName><v1:AppEventCorrelationId/><v1:Component><v1:ComponentId/><v1:ComponentName/></v1:Component></v1:SourceApplicationInformation><v1:BusinessProcessInformation><v1:ProcessName/><v1:EventModelXSDPath/><EventInformation><mstns:BAMEvent xmlns:mstns="http://enrollmentservices.humana.com/Schema/BAMSchema/v1.0"><mstns:EventSource>FileIntake</mstns:EventSource><mstns:Activity>FileIntakeActivity</mstns:Activity><mstns:EventTransactionId>40efe7da-4ef2-46b6-bea6-911a74db898e</mstns:EventTransactionId><mstns:EventCorrelationID>354805729</mstns:EventCorrelationID><mstns:Milestone><mstns:MilestoneEvent>File upload requested</mstns:MilestoneEvent><mstns:MilestoneState>Begin</mstns:MilestoneState><mstns:DataElements><mstns:FileName/><mstns:FileSize>9008</mstns:FileSize><mstns:AdditionalInfo>File upload requested</mstns:AdditionalInfo></mstns:DataElements></mstns:Milestone></mstns:BAMEvent></EventInformation></v1:BusinessProcessInformation></v1:BusinessEvent></v1:BusinessEventRequest>"
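Since the sample event is key=value pairs with the XML embedded inside the quoted `msg_body` attribute, event-level XML extraction (props.conf `KV_MODE = xml`) would only help for pure-XML events. One search-time option is to point spath at the already-extracted field:

```
... | spath input=msg_body
```

This walks the XML in `msg_body` and emits one field per element path.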
Using a checkbox to add to the search
Hello,
I have a working dashboard that searches our web logs and returns results based on a Category chosen from a dropdown. The search results should not include any URLs that end in image extensions. However, there is a request for a single checkbox that allows image extensions in the results. Is there an option to have a single checkbox add/remove a variable?
Right now I've cheated by adding a second dropdown that defaults to removing the images and, when changed, excludes an imaginary file extension: basically "only exclude a file extension you'll never find".
Can anything pull that off with a single checkbox?
Timewrap: How to reverse results to show oldest to newest from left to right, not right to left?
Timewrap is reading oldest to newest from right to left. I want the opposite: oldest to newest from left to right.
The pic below shows that May 7th is the oldest but it is showing on the right, with Aug 13 (yesterday) on the left.
![alt text][1]
this gives an idea of my search:
index=core ... earliest=-100d@d latest=+d@d | timechart span=d sum(c140509343) as "Incoming downlink user traffic in KB (SGSN/SGW PLMN)" | eval wday = strftime(_time, "%a") | where wday = "Thu" | fields - wday | timewrap d series=exact
Is there a way I can somehow reverse this?
[1]: /storage/temp/51209-timewrap-lat-100-days.png
Search Head Clustering: Why is the Deployer not deploying my email settings in alert_actions.conf properly to search heads?
I've got an app called configuration. This app successfully pushes authentication, outputs, and web conf files to the 3 search heads. However, when alert_actions.conf is deployed by the deployer in the same configuration app, my email settings for alerting do not appear to take effect; the search heads continue to use the default (unconfigured) settings, and email fails to send.
The alert_actions.conf file works properly on the standalone search head we are replacing, so I know it's functional.
Does anyone know how to properly deploy this using the deployer?
How to backfill a summary index with a restricted time for each day?
I would like to backfill my summary index going back 2 months. The query, however, is time sensitive and requires the day span to be only between 7am-9pm. Currently, my only method is to manually change the earliest and latest times in both the search and the summary index settings to `earliest=-1d@d+7h latest=-1d@d+20h`, then `earliest=-2d@d+7h latest=-2d@d+20h`, etc. You can see just how tedious and time-consuming this becomes.
Is there any way to avoid inserting a relative day into the period, so I can run the overall search over 30 days with each day restricted to data between 7am-9pm? If there were an earliest=7h latest=20h kind of deal, that would be great, but I have not found any such thing yet.
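One way to avoid baking a relative day into the time range is to filter on the default `date_hour` field instead (note the `date_*` fields come from the event's raw timestamp, so check they line up with your expectations about time zones). A sketch for 7am-9pm over the last 30 days:

```
... earliest=-30d@d latest=@d date_hour>=7 date_hour<21 | ...
```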
Thanks in advance
I want to have inputs in their own separate row in a dashboard
This is the skeleton of my dashboard: a fieldset with a sample input, followed by some panels with charts.
So it looks like:
input
row with chart
row with chart
row with chart
What I want is something like below, where the 2nd input is in its own row. I do not want it in the same panel as a chart, because then the chart loses space to make way for the input.
input
row with chart
row with chart
input
row with chart
Below is an example of me putting the input inside the panel, but as I said above, I do not want this; I want a way to have the input in its own row. **Can anyone advise how this can be done?**
<row>label1 label2 label1
...
SPLICE Preloaded feeds - config error
Hi there,
I may have noticed an error / typo with one of the preloaded feeds in Splunk and in the app documentation and just wanted to check with the author.
The feed is configured as guest.EmergineThreats_rules
Should it be guest.EmergingThreats_rules, as per the hailataxii.com site?
Many thanks
How do I fix "Error in 'rex' command: Invalid argument: '(' The search job has failed due to an error. You may be able view the job in the Job Inspector."
Why does this rex query work fine in a simple search, but then fail when used in both a primary and a subsearch? I need to parse fields in both places. I built an initial query that worked fine alone, then created a subsearch and copied/pasted the identical rex into it. It now fails with the error "Error in 'rex' command: Invalid argument: '(' The search job has failed due to an error. You may be able view the job in the Job Inspector." This doesn't make sense to me since it worked alone, but now with two copies of them it fails.
What do you think is going on, and how do I fix it? The purpose is to find Devices with Tasks that failed at one time, but where a later Task succeeded. Thanks so much.
Here is the code, although for some reason the * asterisks after each dot (.) in the regexes don't seem to come through in the preview window:
source="File1.csv" index="inventory-legacy" | regex Notes="^Succ.*" | transaction Description | rex field=Description "^(?<TaskID>[^-]+).*" | rex field=Description "^[^-]+-(?<DeviceName>.*)" [ search source="File1.csv" index="inventory-legacy" | regex Notes="^Fail.*" | transaction Description | rex field=Description "^(?<TaskID>[^-]+).*" | rex field=Description "^[^-]+-(?<DeviceName>.*)" | dedup DeviceName, TaskID | fields DeviceName ] |sort -_time, +TaskID, +DeviceName | table _time, TaskID, DeviceName, Description, Notes
More background: Initially I tried a simple query using (Notes="Succ*" OR Notes="Fail*") [thank you RickGalloway for your input] which does indeed pull all records, both successes and failures, but it's not quite what I want. I created the subsearch to first identify Devices associated with a particular TaskID that attempted an action at one time and failed. Once we have that pool of devices, the primary search looks to see which of those devices subsequently ran with a new TaskID that did succeed. Using a subsearch should greatly reduce the events returned, and will provide the answer I need to the question: "Which TaskID (a set of tests run on a Device) subsequently succeeded after a previous TaskID (different tests) had failed?" Thanks!
maps visualisation + getting started + format accepted? + adding lookups
Some sample data for creating a maps visualisation in splunk
countries_lat_long_int_code.csv
code,name,country,latitude,longitude
61,Australia,AU,-25.274398,133.775136
86,China,CN,35.86166,104.195397
49,Germany,DE,51.165691,10.451526
33,France,FR,46.227638,2.213749
64,New Zealand,NZ,-40.900557,174.885971
685,Samoa,WS,-13.759029,-172.104629
41,Switzerland,CH,46.818188,8.227512
1,United States,US,37.09024,-95.712891
678,Vanuatu,VU,-15.376706,166.959158
If I add this to `Lookups » Lookup table files` in Splunk, I can generate a map visualisation.
Then if I put something like this in the search bar, it will generate a map visualisation:
`| inputlookup countries_lat_long_int_code.csv | fields + latitude longitude | eval field1=100`
the stats tab will look like this:
latitude longitude field1
-25.274398 133.775136 100
35.86166 104.195397 100
51.165691 10.451526 100
46.227638 2.213749 100
-40.900557 174.885971 100
-13.759029 -172.104629 100
46.818188 8.227512 100
37.09024 -95.712891 100
-15.376706 166.959158 100
What I would like to know is what parameters/format the data has to be in for a map visualisation.
For example, it looks like latitude and longitude must be the first 2 columns, possibly in that particular order.
Can anyone explain what other formats are accepted, or point me in the right direction? For example, I am just playing around doing something like this:
`| inputlookup countries_lat_long_int_code.csv | fields + latitude longitude | eval field1=100 | eval field2=200 | eval field3="country name"`
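For map panels, the geostats command aggregates over explicitly named coordinate fields, which sidesteps the column-order question entirely. A sketch:

```
| inputlookup countries_lat_long_int_code.csv | geostats latfield=latitude longfield=longitude count
```

I believe `latfield`/`longfield` default to `latitude`/`longitude`, so with this CSV they could likely even be omitted.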
lookups in splunk + can you lookup a value and use the corresponding value to its left
How do lookups in Splunk work?
I presume it works like this: `lookupA` is the value you are looking for and `ValueToReplaceLookup` is the value that is returned.
lookupA,ValueToReplaceLookup
A,America
B,Beijing
C,Columbia
But can it also work the other way: looking up a value where the returned value is to its left? E.g. `lookupA` is still the value you are looking for and `ValueToReplaceLookup` is still the value that is returned, but `ValueToReplaceLookup` is on the left as opposed to the right:
ValueToReplaceLookup,lookupA
America,A
Beijing,B
Columbia,C
Just wondering if I should format my data accordingly before uploading it to Splunk for doing lookups.
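Column order in the CSV shouldn't matter: the `lookup` command names the match field and the output field explicitly, so either layout works. A sketch with a placeholder lookup name:

```
... | lookup my_lookup lookupA OUTPUT ValueToReplaceLookup
```

The same applies to an automatic lookup definition, which likewise names input and output fields rather than positions.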
Can I set an app to automatically install an add-on?
Hi,
So I have an app, version 1, with sourcetype definitions, eventtypes, etc.
Later, in version 2, I moved those definitions to a separate add-on, so now version 2 won't work without the add-on.
Can I do something in app version 2 so that when users upgrade, it automatically installs that add-on?
Thanks
Why am I getting "Error connecting to servicesNS/-/system/authentication/users" when I select Users under Access Controls in Splunk Web?
I am getting this error:
Timed out while waiting for splunkd daemon to respond (Splunkd daemon is not responding: ('Error connecting to /servicesNS/-/system/authentication/users: The read operation timed out',)) Splunkd may be hung
in Splunk Web when I select Access Controls and then Users. I have restarted Splunk and it shows that it is running. Any suggestions would be appreciated.