Channel: Latest Questions on Splunk Answers

Splunk 6 Cisco IPS error


I upgraded Splunk to version 6 and data stopped flowing from our Cisco IPS. My sdee_get.log shows this error: Wed Oct 16 09:16:53 2013 - ERROR - Connecting to sensor - MY IP: URLError: <urlopen error [Errno 8] _ssl.c:521: EOF occurred in violation of protocol>

I dug in deeper and I think it's choking on the SSL negotiation in /splunk/lib/python2.7/ssl.py.
In ssl.py I changed ssl_version=PROTOCOL_SSLv23 to ssl_version=PROTOCOL_TLSv1, but that still did not work. I hope to get this back online ASAP.
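A quick way to narrow this down is to try each protocol version against the sensor directly and see which handshakes succeed. A minimal sketch in Python (the sensor host and port are placeholders for your own values):

```python
import socket
import ssl

def probe_protocols(host, port):
    """Attempt a TLS/SSL handshake with each candidate protocol version
    and report whether it succeeds (host and port are placeholders)."""
    results = {}
    for name in ("PROTOCOL_SSLv23", "PROTOCOL_TLSv1"):
        proto = getattr(ssl, name, None)
        if proto is None:  # constant removed in newer Python builds
            results[name] = "unavailable"
            continue
        try:
            ctx = ssl.SSLContext(proto)
            ctx.check_hostname = False
            ctx.verify_mode = ssl.CERT_NONE
            with socket.create_connection((host, port), timeout=5) as sock:
                with ctx.wrap_socket(sock) as tls:
                    results[name] = "ok (%s)" % tls.version()
        except Exception as exc:
            results[name] = "failed: %r" % exc
    return results
```

If only one version handshakes cleanly, that tells you what the sensor expects; adjusting the protocol in the script that talks to the sensor, rather than patching Splunk's bundled ssl.py, is usually the safer fix.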


Create scripted input for Splunk to pull list of installed Firefox Add-Ons from user profiles


Hi Guys,

I want to create a PowerShell scripted input that lists all of the Firefox add-ons installed in each user profile on a Windows machine. I currently have forwarders on all of the machines I would like to do this on. Does anyone know how I can get started?
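Recent Firefox builds record installed add-ons in an extensions.json file inside each profile directory (older builds used addons.json or extensions.sqlite, so check what your clients actually have). As a sketch of the parsing side, here is the idea in Python; the profiles path and JSON layout are assumptions to verify on your own machines, and the equivalent PowerShell would walk the same files with Get-ChildItem and ConvertFrom-Json:

```python
import json
from pathlib import Path

def list_addons(profiles_root):
    """Yield (profile, addon_name, version) for every Firefox profile
    under profiles_root, e.g.
    C:/Users/<user>/AppData/Roaming/Mozilla/Firefox/Profiles."""
    for manifest in Path(profiles_root).glob("*/extensions.json"):
        data = json.loads(manifest.read_text(encoding="utf-8"))
        for addon in data.get("addons", []):
            # The human-readable name lives under defaultLocale;
            # fall back to the add-on id if it is missing.
            name = (addon.get("defaultLocale") or {}).get("name") or addon.get("id")
            yield manifest.parent.name, name, addon.get("version")
```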

Menus broken in new version of Splunk App for Windows Infrastructure?


Just installed version 1.0.1 that was released today, and it looks like all the Dashboard menu items under the "Windows" section aren't working properly. I had to go in and modify the URLs for all of them in the "default" navigation panel in order to get them all working again. These were working fine under 1.0. Can anyone else confirm?

Dashboard PDF generation adds table row numbers


I have a number of panels in my dashboard. When generating a PDF, Splunk adds a new first column containing the row number to almost every panel that contains a table, even though row numbers are turned off for those panels.
It does NOT add the extra column to tables where row numbers are already turned on.

This seems like a bug, but has anyone seen it and found a solution? Is it a bug?

regex stops at first match


I've got a regex that seems to stop at the first occurrence per line. I am using the 'field extraction' function. My regexes are: ("(?P<task>\w{1,3}),\d{8},\d{6}.*?")+ and ("((\w+)?,){4}(?P<wmendpt>\w),.*?")+

Sample data: ["PE,20140512,234402,,X,0.00,0,0", "PE,20140512,234402,W4325,H,0.00,0,0"]

Actual results: the first regex captures only its first match, 'PE'; I see a count of one in field discovery. The second regex captures only its first match, 'X'; again a count of one in field discovery.

Expected: capture PE and show a count of 2. Capture X and show a count of 1. Capture H and show a count of 1.
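By default a Splunk field extraction keeps only the first match per event; in transforms.conf you would typically set MV_ADD = true so every match becomes a value of a multivalued field (worth verifying against the docs for your version). The difference is easy to see in plain Python, using reconstructed patterns (note the backslashes, which the forum may have stripped):

```python
import re

# The two sample events from the question, joined as one line.
line = ('"PE,20140512,234402,,X,0.00,0,0", '
        '"PE,20140512,234402,W4325,H,0.00,0,0"')

task_re = re.compile(r'"(?P<task>\w{1,3}),\d{8},\d{6}')
fifth_re = re.compile(r'"(?:[^,]*,){4}(?P<wmendpt>\w),')

# search() behaves like a default extraction: first match only.
first_task = task_re.search(line).group("task")

# finditer() walks every occurrence, like MV_ADD = true.
all_tasks = [m.group("task") for m in task_re.finditer(line)]
all_fifth = [m.group("wmendpt") for m in fifth_re.finditer(line)]
```

That reproduces the expected counts from the question: PE with a count of 2, and X and H with a count of 1 each.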

How to handle fieldname=name, fieldvalue=value


I currently have data that I want to extract fields from that looks like this

fieldname1=name1, fieldvalue1=value1, fieldname2=name2, fieldvalue2=value2

I want to extract the fields and make it look like this.

name1=value1 name2=value2

Is this possible with Splunk through modifications to the props.conf and transforms.conf?

Thanks.
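Splunk can usually do this with a props.conf REPORT pointing at a transforms.conf stanza whose REGEX captures both the name and the value and whose FORMAT is $1::$2, which creates the field name dynamically (check the transforms.conf documentation for your version). The pairing logic itself, sketched in Python:

```python
import re

raw = "fieldname1=name1, fieldvalue1=value1, fieldname2=name2, fieldvalue2=value2"

# Pair each fieldnameN with the fieldvalueN that shares its index N
# (the \1 backreference), then emit name=value pairs.
pair_re = re.compile(r"fieldname(\d+)=(\S+?),\s*fieldvalue\1=(\S+?)(?:,|$)")
pairs = {m.group(2): m.group(3) for m in pair_re.finditer(raw)}
rewritten = " ".join("%s=%s" % (k, v) for k, v in pairs.items())
```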

Free license limitations?


I was wondering what the free license limitations are. Are there configuration limitations in the Free edition?

Inner Search between values


Hi Folks, I have a problem with the search

source="source" | 
rex field= ...|
eval value=  (part of regex command)|
eval result= [ | inputcsv CSV_DATA.CSV |
eval x=if (minvalue <= value AND maxvalue >= value, returnstuff, "Nothing") | 
return $x] | 
stats count by result

minvalue and maxvalue are fields from the CSV. "value" comes from the outer search. "returnstuff" is a field from the CSV. Can somebody please tell me what I'm doing wrong, since I don't get any results? That would be very helpful.


Calculate the "month" after first appearance


Hi,

I'm doing an analysis of users whose first event was in January 2014. I want to know what they did in months 1, 2, and 3 after their first appearance. For these users January would be month "0", February "1", March "2", and so on...

So the goal is to add a field "month after first appearance" with a numeric value to every event.

Is it possible to calculate the month after the first appearance? I already did something like this for the "day after first appearance". It looked like this:

| bucket span=1d timestamp_of_first_appearance
| bucket span=1d timestamp
| eval day=(timestamp-timestamp_of_first_appearance)/86400

But I can't do this for a monthly perspective, because the length of a month varies.

BG

Heinz
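Variable month lengths can be sidestepped by comparing calendar months instead of dividing a duration: month index = (year difference) * 12 + (month difference). In SPL the same arithmetic can be built from strftime's "%Y" and "%m" values with tonumber and eval (untested sketch; adapt the field names). In Python:

```python
from datetime import datetime

def month_offset(first_seen, event_time):
    """Whole calendar months between first_seen and event_time:
    0 for the same month, 1 for the following month, and so on,
    regardless of how many days each month has."""
    return ((event_time.year - first_seen.year) * 12
            + event_time.month - first_seen.month)
```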

Chart grouped event over time with specific matching events


I am currently trying to show a graphical representation of the number of times a specific thing happens. Whenever an event in our system is processed and fails, we retry another 15 times, so if it completely fails there will be 16 entries in Splunk. This all happens within a couple of seconds. The log entry contains the GUID of the event and can be identified in Splunk.

What I have done so far:

1 - Created a custom field that identifies the GUID in the log entry; let's call it "eventid".
2 - Created a search that filters based on source and event type, groups by "eventid", and filters where there are 16 of those events. Finally it shows that in a timechart.

sourcetype="mysource" "IdentifyCorrectEvent" | stats values(_time) as _time, count by eventid  | search count = 16 | timechart count | timechart per_hour(count)

This works so far as showing a visual representation of the number of times this happens. For example, one failure (16 errors) in an hour shows a count of 16, two in an hour show a count of 32, and so on.

How do I get the chart to show the number of times there were 16 errors for a single event? This is my first effort with Splunk, so feel free to say it is all wrong and I should have done xyz.

Distributed search scheduled alerts on SH


We have an indexer indexing events with _time 5 hours ahead, and a distributed search from the SH whose earliest and latest are on _indextime within the last 10 minutes. Although events with _time + 5 hours and a matching index time exist, they don't show up in scheduled searches on the SH. Why?

Does the scheduler on the SH introduce some filter when scheduled searches run that prevents them from finding events whose timestamps are later than the local run time of the query? Kindly clarify.

How to search XML by specifying an attribute


I am ingesting XML with the following settings in props.conf: KV_MODE = xml, pulldown_type = 1, NO_BINARY_CHECK = 1, SHOULD_LINEMERGE = true. With this setup, how can I search by specifying an XML attribute?

Concretely, the search is against data like this: <aaa no="8"> <bbb>ABC</bbb> </aaa> <aaa no="9"> <bbb>DEF</bbb> </aaa> When this XML has been ingested, how can I write a search that retrieves the value of bbb under aaa no="9" (DEF in this case)?
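In SPL, XML attributes can usually be reached with the spath command, whose path syntax addresses attributes as aaa{@no} (check the spath documentation for your version). The selection itself, sanity-checked in plain Python:

```python
import xml.etree.ElementTree as ET

# The sample records from the question, wrapped in a root element
# so they parse as a single document.
doc = '<root><aaa no="8"><bbb>ABC</bbb></aaa><aaa no="9"><bbb>DEF</bbb></aaa></root>'
root = ET.fromstring(doc)

# Select the <bbb> value of the <aaa> element whose "no" attribute is 9.
value = root.find("aaa[@no='9']/bbb").text
```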

PerfmonMk: support in Splunk App for Windows Infrastructure


Hello,

Is there any information about a timeline for PerfmonMk: support within the Splunk App for Windows Infrastructure? We upgraded half our environment to Splunk 6 and the more efficient new inputs only to find that they fell off the performance dashboard within the new application.

UPDATE:

Essentially, what needs to happen for this to work correctly is that the Performance saved searches in palettesearches.conf need to be tweaked to include the PerfmonMk: counters. It's quite tricky though because the $Counter$ token has spaces instead of underscores. Perhaps if another token is created something like this could work:

search eventtype="windows_performance" $PerfmonHost$ Host="$PerfmonHostWildcard$" object="Processor" counter="$Counter$" OR $CounterWithUnderscores$=* instance="$Instance$" | rename $CounterWithUnderscores$ as Value | stats sparkline(avg(Value)) as Trend avg(Value) as Average, max(Value) as Peak, latest(Value) as Current, latest(_time) as "Last Updated" by Host | convert ctime("Last Updated") | sort - Current | eval Average=round(Average, 2) | eval Peak=round(Peak, 2) | eval Current=round(Current, 2)

I'm not sure of the best way to tackle this, and I don't want to hack the app up wildly only to have a proper fix come out soon. Any advice would be greatly appreciated. Thanks,

-Frank

Splunk App for Windows Infrastructure not Detecting All feeds


The forwarder and app have been installed and configured as shown in the instructions. The app's configuration settings cannot detect all the logs currently coming into Splunk; it detects some of the data inputs but not all. Is there a specific location I need to check to update this, or am I missing a configuration step? The forwarder is monitoring the relevant location where all the logs sit.

inputs.conf: concatenate host with host_regex value


Hi, I'm trying to work out whether I can prefix the value returned by host_regex with the actual server name, since some of the logs do not identify which server they came from.

I am currently using host_regex to extract the host name from the log path, but that only works for some logs; others have no way to identify the host.

So I'm looking to combine the host and host_regex values.

Thanks.


Use lookup to retrieve query value


Good evening.

I have a query that currently does what I need it to do, searching on a particular value, "foo". This is tied to a form view, so users can simply enter "foo" in a box and the fairly intricate search retrieves what they need. Great. The log events in Splunk reference the value "foo", but it turns out the users actually don't have access to the values for "foo". They only know things by a different value, "bar". There's a backend database somewhere that creates a unique value "bar" for every unique value "foo". Thankfully, we have a CSV extract from the database with two columns, "foo" and "bar" (~2100 rows).

I've been going through the lookup documentation in the Splunk KnowledgeBase as well as here on Splunk>answers, but I'm still at a loss. I don't think a subsearch as I've seen it described is what I want, or if it is, I'm not sure how to use it. I need the user to enter "bar", look up the corresponding value of "foo" in the CSV lookup, and have the search query actually reference the value of "foo" (the value of "bar" doesn't appear in any of our events).

I'm thinking what I need is something like:

[inputlookup lookup.csv | fields foo,bar | where bar=$bar$ | fields foo]

At least, conceptually, that's what I'm thinking, I guess ...

Questions on a Splunk Forwarder


Good day Splunkers,

I'd like to ask some questions about the universal forwarder.

  1. What is its minimum CPU usage, given a default configuration with a minimum of 3 log files being monitored and forwarded?
  2. What are its disk I/O requirements?

Thanks,

REGEX for authentication logs


I have authentication logs like below:

,AUTHN_METHOD_FOO,123!@#123!@#123!@#asdfgdvfd,123!@#123!@#123!@#asdfgdvfd,123!@#123!@#123!@#asdfgdvfd,123!@#123!@#123!@#asdfgdvfd,username,FIRST,LAST,

I want to use a regex to build a transform for fields called username, first, and last.

I had this, but it didn't work:

rex "(?i)AUTHN_METHOD_*+,*+,*+,*+,*+,(?P<username>[^,]+)"

Any help would be great!!!
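For what it's worth, the `*+` runs in the attempt above look like patterns whose backslashes the forum stripped. A regex that skips the four delimited fields after the method token and then captures the three names might look like the following, shown here in Python against the sample line (the field positions are an assumption from that one sample):

```python
import re

line = (",AUTHN_METHOD_FOO,123!@#123!@#123!@#asdfgdvfd,"
        "123!@#123!@#123!@#asdfgdvfd,123!@#123!@#123!@#asdfgdvfd,"
        "123!@#123!@#123!@#asdfgdvfd,username,FIRST,LAST,")

# Skip four comma-delimited junk fields after the method token, then
# capture the three name fields.
auth_re = re.compile(
    r"(?i)AUTHN_METHOD_\w+,(?:[^,]*,){4}"
    r"(?P<username>[^,]+),(?P<first>[^,]+),(?P<last>[^,]+)"
)
m = auth_re.search(line)
```

The same REGEX should drop into a transforms.conf stanza once it matches your real events.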

bundle replication taking too long


I see messages like the one below on a regular basis on my deployment server. Is there any way to determine which bundle is taking too long to replicate? Are there any other searches that would help me determine which bundle to investigate?

10-05-2011 08:32:26.359 -0400 WARN DistributedBundleReplicationManager - bundle replication to 5 peer(s) took too long (18482ms), bundle file size=29110KB, replication_id=1317817927

Scheduled search for events indexed with _time one day ahead


How do we schedule a search for events that are indexed with an event _time one day ahead, without using _indextime?

Even if we use _indextime in the search, the scheduler seems to disallow events with _time greater than now. How does this work? How do we see such events from a scheduled search?


