Channel: Latest Questions on Splunk Answers
Viewing all 13053 articles

How do I set up a field extract and field transform to change sourcetype?

I am struggling with the relationship between the field extract and the field transform with regards to sourcetype. Given a basic line:

`Nov 1 host service[1001]`

I would like to take this and assign it the sourcetype "service". The index is john and the sourcetype is john_service.

props.conf:

```
[john]
TRANSFORM-sourcetype = john_service
```

transforms.conf:

```
[john_service]
REGEX = \s(\w+)\[
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::john_service
```

The initial input gets set to: index=john, sourcetype=john
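For anyone comparing notes, a minimal sketch of the index-time sourcetype rewrite being attempted, assuming the data initially arrives as sourcetype `john`. Note the props.conf setting name is `TRANSFORMS-` (plural, with an S), and these files must live on the parsing tier (indexer or heavy forwarder), not on a universal forwarder:

```
# props.conf -- keyed on the sourcetype the data arrives with
[john]
TRANSFORMS-set_sourcetype = john_service

# transforms.conf
[john_service]
REGEX = \s(\w+)\[
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::john_service
```

After restarting the parsing instance, newly indexed events matching the regex should land as `sourcetype=john_service`; events indexed before the change keep their original sourcetype.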

How to allow users other than the admin to search the default "os" index for Splunk App for Unix and Linux?

Hello. I created a dashboard using some of the data from the Splunk App for Unix. It's in the default index called "os". I noticed that only the admin user of Splunk can query data out of that index. Is there something I need to do to allow other users to do that? The issue I have is that when I shared the dashboards, nobody can see the data within the panels except the admin user.
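A hedged sketch of one common fix, assuming the problem is simply that the `os` index is not among the non-admin role's searchable indexes (the role name `user` below is an assumption):

```
# authorize.conf (or Settings > Access controls > Roles in Splunk Web)
[role_user]
srchIndexesAllowed = main;os
srchIndexesDefault = main;os
```

Adding `os` to `srchIndexesDefault` matters too: dashboard searches that don't specify `index=os` explicitly will only search the role's default indexes.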

Splunk Cloud: Why are event timestamps not being extracted from JSON data, using the event's index time instead?

Per http://docs.splunk.com/Documentation/Storm/Storm/User/Sourcesandsourcetypes, I've tried sending JSON events to Splunk Cloud using all of the following JSON sourcetypes, none of which seems to result in an accurate timestamp being extracted from my event (instead, Splunk Cloud is using the event's index time):

**json_predefined_timestamp** with field:

```
"timestamp": "2014-11-04T20:45:43.000"
```

**json_auto_timestamp** with fields:

```
"created": 1415133943
```

or

```
"time": 1415133943
```

All to no avail...
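As a hedged sketch only (the knobs exposed in Splunk Cloud may differ from a self-managed deployment), timestamp extraction from a JSON field is usually pinned explicitly in props.conf rather than left to a predefined sourcetype; the sourcetype name below is hypothetical:

```
# props.conf
[my_json]
INDEXED_EXTRACTIONS = json
TIMESTAMP_FIELDS = timestamp
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N
```

For the epoch-seconds variants, the equivalent sketch would be `TIMESTAMP_FIELDS = created` (or `time`) with `TIME_FORMAT = %s`.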

How to calculate the number of different eventtypes in a transaction?

Hi all,

This is my data for one transaction:

```
Nov 4 13:55:51 10.236.33.22 Nov 4 13:55:51 LPD-ZF5-001 notice tmm3[19702]: 01490505:5: decbdf41: RD: Connect to 10.148.2.142 port 2598 err ERR_OK
Nov 4 14:51:20 10.236.33.22 Nov 4 14:51:20 LPD-ZF5-001 notice tmm[19699]: 01490505:5: decbdf41: RD: Connect to 10.148.2.142 port 2598 err ERR_OK
Nov 4 14:51:33 10.236.33.22 Nov 4 14:51:33 LPD-ZF5-001 notice tmm2[19701]: 01490505:5: decbdf41: RD: Connect to 10.148.2.142 port 2598 err ERR_OK
Nov 4 15:29:17 10.236.33.22 Nov 4 15:29:17 LPD-ZF5-001 notice tmm3[19702]: 01490505:5: decbdf41: RD: Connect to 10.148.2.142 port 2598 err ERR_OK
Nov 4 15:29:26 10.236.33.22 Nov 4 15:29:26 LPD-ZF5-001 notice tmm[19699]: 01490505:5: decbdf41: RD: Connect to 10.148.2.142 port 2598 err ERR_OK
Nov 4 15:29:33 10.236.33.22 Nov 4 15:29:33 LPD-ZF5-001 notice tmm2[19701]: 01490505:5: decbdf41: RD: Connect to 10.148.2.142 port 2598 err ERR_OK
```

I defined an eventtype that matches each line, but when I try to calculate the occurrences of the eventtype, I always get 1. So how do I calculate the occurrences of this eventtype?

Regards,
Tony
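A hedged SPL sketch of one way to count eventtypes per transaction, assuming the shared `decbdf41` token has been extracted into a field called `session_id` (that field name and its extraction are assumptions):

```
... | transaction session_id
    | eval type_count = mvcount(eventtype)
    | table session_id eventcount type_count
```

Within a transaction, `eventtype` becomes a multivalue field holding the distinct eventtype values of the grouped events, so `mvcount(eventtype)` counts the different eventtypes, while the built-in `eventcount` field gives the total number of events rolled into the transaction.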

Is it possible to forward filtered data from one indexer to another? If yes, how?

We have a business need that requires that a filtered set of data from one indexer be shipped offsite to another indexer. Is that possible, and if so, how?
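A minimal sketch of one common approach, selective routing from a parsing instance (the group name, host, and regex are assumptions):

```
# outputs.conf
[tcpout:offsite]
server = offsite-indexer.example.com:9997

# props.conf
[sourcetype_to_ship]
TRANSFORMS-route_offsite = send_offsite

# transforms.conf
[send_offsite]
REGEX = pattern_that_selects_the_events
DEST_KEY = _TCP_ROUTING
FORMAT = offsite
```

Note this operates at parse time, so it applies to data flowing through an indexer or heavy forwarder as it arrives; data that is already indexed would instead need to be exported and re-ingested on the remote side.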

How to search for the Min, Max, Average of several fields of the same event?

I have events with several fields, where the fields have a common portion and a variable portion, e.g.:

```
aaaaa0500 = 234, aaaaa0501 = 432, aaaaa0502 = 302, ...
```

I want to find the Min, Max, and Average of the values of these fields, within each event, over time. I've found and tried several examples, but nothing achieved the desired results. The closest I came was:

```
... | stats max(aaaaa05*), min(aaaaa05*), avg(aaaaa05*) by _time
```

But this yields the stats for each field within each event, which is not what I am looking for. Using the example data above, if those values were in the same event, then I would want to see:

```
Min=234, Max=432, Avg=322.6
```

Any ideas would be appreciated.
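A hedged SPL sketch that first gathers the wildcard fields into one multivalue field per event, then computes the statistics (this relies on the `foreach` command, available from Splunk 6.x):

```
... | foreach aaaaa05* [ eval vals = mvappend(vals, '<<FIELD>>') ]
    | stats min(vals) AS Min max(vals) AS Max avg(vals) AS Avg by _time
```

Because `vals` is multivalue, `stats` treats each element as a separate data point, yielding one Min/Max/Avg across all the `aaaaa05*` values per time bucket rather than one statistic per field name.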

Why am I getting "ERROR IndexedExtractionsConfig - Tried to set INDEXED_EXTRACTIONS but it already had a value!" causing Splunk daemon to crash?

The Splunk daemon is crashing, and I see this in the log:

```
11-03-2014 17:33:17.056 -0800 ERROR IndexedExtractionsConfig - Tried to set INDEXED_EXTRACTIONS but it already had a value! (was: 8, wanted: 0)
```

What could be the problem?

Extracting fields from horrible JSON events

So I have some ugly things to deal with. We will eventually fix the logging, but until that time I am left holding the bag, dealing with and reporting on this stuff. I have example events like the following. What I need to do is extract each of the "json" elements. However, these events are not valid JSON due to the escape characters. Splunk's new field extractor took away the ability to identify multiple values and intelligently try to create a matching regex, so that option is gone. Seeing that a regex for each of these extractions is probably a bit easier to develop than landing someone on Mars, I come to the community for help. Ultimately, I would like to be able to search for all events like the following example, click "table" view, and have columns for each "json" element.

```
2014-10-29T19:20:36+00:00 DEBUG (7): ERP_SERVICE_CALL:POST:RESPONSE: "{\/"status\/":\/"success\/",\/"code\/":400,\/"data\/":{\/"batch_id\/":\/"M-1331\/",\/"order_total\/":4,\/"success_total\/":0,\/"orders\/":[{\/"order_id\/":\/"1272749\/",\/"status\/":\/"error\/",\/"message\/":\/"order_id: 1272749 \/\/nCode: INVALID_KEY_OR_REF\/\/nDetails: Invalid item reference key. Item value provided: ASB-000219 \/\/nforEach(EC_Libs-4.0.6.js:70),forEach(EC_Libs-4.0.6.js:70),restletwrapper(null$lib:4) \/\/n[no stack trace]\/",\/"customer_internal_id\/":\/"16873\/",\/"customer_id\/":1301051},{\/"order_id\/":\/"1272750\/",\/"status\/":\/"error\/",\/"message\/":\/"order_id: 1272750 \/\/nCode: INVALID_KEY_OR_REF\/\/nDetails: Invalid item reference key. Item value provided: ASB-000219 \/\/nforEach(EC_Libs-4.0.6.js:70),forEach(EC_Libs-4.0.6.js:70),restletwrapper(null$lib:4) \/\/n[no stack trace]\/",\/"customer_internal_id\/":\/"16873\/",\/"customer_id\/":1301051},{\/"order_id\/":\/"1272751\/",\/"status\/":\/"error\/",\/"message\/":\/"order_id: 1272751 \/\/nCode: INVALID_KEY_OR_REF\/\/nDetails: Invalid item reference key. Item value provided: ASB-000219 \/\/nforEach(EC_Libs-4.0.6.js:70),forEach(EC_Libs-4.0.6.js:70),restletwrapper(null$lib:4) \/\/n[no stack trace]\/",\/"customer_internal_id\/":\/"16873\/",\/"customer_id\/":1301051},{\/"order_id\/":\/"1272752\/",\/"status\/":\/"error\/",\/"message\/":\/"order_id: 1272752 \/\/nCode: INVALID_KEY_OR_REF\/\/nDetails: Invalid item reference key. Item value provided: ASB-000219 \/\/nforEach(EC_Libs-4.0.6.js:70),forEach(EC_Libs-4.0.6.js:70),restletwrapper(null$lib:4) \/\/n[no stack trace]\/",\/"customer_internal_id\/":\/"16873\/",\/"customer_id\/":1301051}]}}"
```
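A hedged sketch of one approach: strip the escaping at search time, then let `spath` parse the cleaned payload. The `rex` and `replace` patterns below are assumptions and would need tuning against the real events:

```
... | rex field=_raw "RESPONSE: \"(?<payload>.+)\"$"
    | eval payload = replace(payload, "\\\\/", "")
    | spath input=payload
    | table status code data.batch_id data.orders{}.order_id data.orders{}.status
```

If the cleanup shouldn't have to be repeated per search, a similar substitution could be applied at index time with a `SEDCMD` in props.conf, after which automatic JSON extraction may be able to take over.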

How to change time format in my search?

I am using the search `... | timechart sum(x) by y`, but _time is showing as 2014-4-3T00:00. I want the format of _time on the x-axis to be 2014-4-3 only. How do I do this?
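A hedged sketch: keep `_time` numeric for charting but change only how it renders with `fieldformat` (the `span` below is an assumption):

```
... | timechart span=1d sum(x) by y
    | fieldformat _time = strftime(_time, "%Y-%m-%d")
```

`fieldformat` changes the displayed value in the results without converting `_time` to a string; how the chart itself labels its x-axis is ultimately governed by the chart's own axis settings.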

Is it possible to configure staggered input polling on a universal forwarder?

As I understand it, a forwarder can be configured to poll or report only on a timer, say once every 60 seconds or once every hour. Can this polling be staggered? I am asking because we plan to have several thousand endpoints and want to avoid network collisions from all machines checking in at once.

For top 10 values, I need a dashboard/search for each value separately. Can this be done dynamically?

Hello,

I have a table with the top 10 values for an IP, sorted by occurrence:

```
Place  ip    count
1      ip1   100
2      ip2   90
3      ip3   80
4      ip4   70
5      ip5   60
6      ip6   50
7      ip7   40
8      ip8   30
9      ip9   20
10     ip10  10
```

But now I need a dashboard for each value separately: a search only for the first IP, another search only for the second IP, and so on. How can I do this dynamically? Do you know some function that behaves like this:

```
function(1) = ip1 (the max value)
function(2) = ip2 (the second max value)
function(3) = ip3 (the third max value)
```

I'll be very grateful for your answer.
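A hedged sketch of driving each panel from a subsearch that returns the Nth IP (index and field names are assumptions); changing the `rank` value selects the place:

```
index=myindex [ search index=myindex
                | top limit=10 ip
                | streamstats count AS rank
                | where rank = 2
                | fields ip ]
```

The subsearch resolves to `ip=<second-most-common value>`, so the outer search is constrained to that single IP. In a dashboard, the literal rank could be replaced with a token, giving one parameterized panel per place.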

Why are we receiving "Results Error: Error #2032" on all advanced XML dashboards after upgrade from Splunk 6.1 to 6.2?

We recently upgraded from Splunk 6.1 to 6.2 and noticed that our **Advanced XML** dashboards are failing to render with the error below:

`Results Error: Error #2032`

This is true for all dashboards, not just some. The odd thing is that we upgraded our DEV environment just fine; it only seems to be happening on our PROD server. The browser keeps trying to connect to a job URI that constantly returns a 404:

`Failed to load resource: the server responded with a status of 404 (Not Found)`

#### URI:

`en-US/splunkd/search/jobs/1415220452.660/results_preview?offset=0&segmentation=raw&show_offset=1&output_mode=csv&count=1000`

Any idea why we are getting 404s? Our Simple XML dashboards work just fine.

How can I find out which systems are generating the most output?

About a week ago, daily usage jumped significantly. I was nowhere near the license capacity; now I'm exceeding it, and I'm not sure what is generating it. What can I do to find out what is generating most of the output?
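A hedged sketch of the usual starting point: the license usage log in `_internal` breaks indexed volume down by host (`h`), source (`s`), sourcetype (`st`), and index (`idx`):

```
index=_internal source=*license_usage.log type=Usage
| stats sum(b) AS bytes by h
| eval GB = round(bytes / 1024 / 1024 / 1024, 2)
| sort - GB
```

Running this over the last 30 days, or swapping `stats` for `timechart ... by h`, should make the jump and the host responsible stand out; repeating it `by st` narrows it to a sourcetype.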

How to configure props.conf and transforms.conf to append elements onto syslog output?

I am looking to see if I can append the following data elements to a syslog output message via a heavy forwarder:

1) Date & time stamp
2) Hostname
3) Source
4) Sourcetype

I believe all these elements are already part of the unparsed data type being sent from our universal forwarder. Any help would be appreciated.
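A hedged sketch of the heavy-forwarder syslog routing itself (names below are assumptions); note that Splunk's syslog output already prepends a standard syslog header carrying timestamp and hostname, so of the four elements only source and sourcetype are the open question:

```
# outputs.conf
[syslog:downstream]
server = syslog-host.example.com:514
type = udp

# props.conf -- route this sourcetype's events to the syslog group
[my_sourcetype]
TRANSFORMS-syslog = route_to_syslog

# transforms.conf
[route_to_syslog]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = downstream
```

Injecting source and sourcetype into the message body would additionally require rewriting `_raw` at parse time on the heavy forwarder, since the syslog header itself carries only time and host.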

If passing a parameter from main dashboard to next view, how to show this parameter as read only in the label or input type field?

I am passing a parameter from the main dashboard to the next view, and I want to show that parameter to the user in a label or input field. If I use `input type="text"`, it becomes editable, but I want to make this field read-only. I have used `readonly="readonly"`, but it's not working. Please help.
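A hedged sketch, assuming Simple XML and a passed token named `param` (both assumptions): rather than fighting a text input's editability, the token can be rendered in an `<html>` panel, which is inherently read-only:

```
<dashboard>
  <row>
    <html>
      <p>Selected value: <b>$param$</b></p>
    </html>
  </row>
</dashboard>
```

If something that visually resembles an input widget is required, another common workaround is a dropdown whose only populated choice is the passed value.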

Why is the metadata type=hosts command for *nix search heads showing incorrect lastTime and recentTime?

I am using the `| metadata type=hosts` command to alert me when a forwarder goes down, and I now want to extend it to search heads. The command works great for *nix forwarders, but for *nix search heads it shows that 2 of 3 search heads haven't reported in 82 days. Both are up and forwarding their _internal logs to the indexers. Any ideas why this is reporting incorrectly?
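For reference, a hedged sketch of the kind of search in question; one thing worth checking is the index scope, since `metadata` only reflects the indexes it is run against, and search heads typically write only to `_internal` (the 600-second threshold is an assumption):

```
| metadata type=hosts index=_internal
| eval lastSeen = strftime(lastTime, "%F %T")
| where now() - lastTime > 600
| table host lastSeen
```

If the search heads forward their _internal logs to the indexers as described, their hosts should appear in `index=_internal` with a recent `lastTime`; a stale value there would point at the forwarding path rather than at `metadata` itself.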

Universal forwarder 5.0.3

Hi,

One of the log files stopped forwarding with this error; forwarding had been working for years. Usually when I see an issue like this, a restart fixes it, but today, no matter what, it is not working.

```
BatchReader - Removed from queue file
```

I tried:

1) Restarting the Splunk forwarder and the search head
2) Using the REST API (http://blogs.splunk.com/2011/01/02/did-i-miss-christmas-2/) -- I don't see the file in this FileOpen list
3) The forwarder was set with followTail = 0 (I changed this to 1 and tried, but it did not work) and crcSalt =

What else can I do to make this work? Can someone please help? Thank you.

Transaction to Correlate Events Occurring Close to One Another

Apologies if this has already been answered. I can't seem to find a way to get Splunk to correlate events into a single transaction. Here is an example of the use case:

```
2014-10-22 11:40:32,596 INFO in.ABC_123 -
```
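A hedged, generic sketch of grouping events that occur close together in time when no shared ID is available; the 5-second window is an assumption to tune:

```
... | transaction maxpause=5s
    | table _time duration eventcount
```

If the events do share an identifier (such as the `ABC_123` token in the example), extracting it into a field and using `transaction <field> maxpause=5s` is usually more reliable than time proximity alone.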

How to find the earliest date in a multivalue field?

I have a multivalue field which contains date strings. I would like to find the earliest one in the field and set a new variable to that value. `foreach` seems to choke on multivalue fields. Any ideas would be grand.
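A hedged sketch, assuming the multivalue field is named `dates` and its strings sort chronologically when sorted lexically (true for ISO-8601-style `YYYY-MM-DD...` values):

```
... | eval earliest_date = mvindex(mvsort(dates), 0)
```

If the date format doesn't sort lexically, each value would first need converting to epoch time (for example via `strptime`, or `mvmap` on newer Splunk versions) before taking the minimum.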

How to configure props.conf to recognize the exact timestamp format hh:mm.ss,sss in our data?

Events from a particular source have timestamps formatted as follows: hh:mm.ss,ssss, for example:

```
02:07.21,0241
```

This is a strange format, to be sure, and Splunk does a pretty good job at guessing, yielding 02:07:00.000. But our security guys aren't satisfied with this; they'd like Splunk's timestamp to match the event timestamp. Looking at the strptime() documentation (by the way, we're on Splunk version 4.2.5), I see examples that suggest using strptime() will limit me to year/month/day precision. I'm not seeing how to specify hour, minute, second, and decimal second. Any ideas? Thanks so much.

MichaelS
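A hedged props.conf sketch for pinning this format; the stanza path is a placeholder, and whether the `%N` subsecond extension is available on a 4.2.5 instance would need verifying against that version's docs:

```
# props.conf
[source::/path/to/this/source]
TIME_FORMAT = %H:%M.%S,%4N
MAX_TIMESTAMP_LOOKAHEAD = 13
```

`TIME_FORMAT` accepts the full hour/minute/second/subsecond set of conversion specifiers; the year/month/day examples in the documentation are just examples, not a limit of `strptime()`.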