Channel: Latest Questions on Splunk Answers

Error: Unable to stop splunk helpers.

I have an indexer that froze, and the server was rebooted. When I try to start, stop, or even check the status of Splunk, I get the following error. Has anyone run into this problem and found an answer?

[root@eulpol06 local]# /opt/splunk/bin/splunk stop
splunkd 25332 was not running.
Stopping splunk helpers...
still shutting down helpers...
Timed out waiting for splunk helpers to stop.  [FAILED]
Error: Unable to stop splunk helpers.
[root@eulpol06 local]#

Thanks, Jaime


Browscap TA not working in 6.1

All,

We just updated our instance to Splunk 6.1 and our browscap add-on started returning an error:

Script for lookup table 'browscap_lookup' returned error code 1

I'm not the one who installed the add-on, but it looks like the add-on is only installed on the search head. Does this add-on also need to be included on the indexers? Is this something new with the way Splunk handles searches across search peers in 6.1?

Thanks!

Missing deployment client

Could anyone please help me write a query to find missing deployment clients? There are many forwarders contacting the deployment servers, and they send logs to different indexes. I assumed all forwarders would send their internal logs, so I used the _internal index in my query. But for some hosts the Splunk internal logs are missing even though they are sending other logs to other indexes. If I include all the indexes in my query using OR, it takes too long. Please help me with this.

We do sometimes receive internal logs like the ones below from hosts that otherwise are not sending internal logs:

08-06-2014 09:55:46.224 +0100 INFO WatchedFile - Will begin reading at offset=24999957 for file='/opt/splunkforwarder/var/log/splunk/metrics.log.1'.
08-06-2014 09:55:46.215 +0100 INFO WatchedFile - Will begin reading at offset=0 for file='/opt/splunkforwarder/var/log/splunk/metrics.log'.
08-08-2014 03:10:01.674 +0100 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='/var/log/syslog'.

Search Query : |metasearch index=_internal NOT("tag::sourcetype"=syslog_sourcetype OR "tag::sourcetype"=xfbsourcetype)| stats count by host | eval type="current" | table host, type | append [|inputlookup univfwdlist.csv | eval type="existing"] | stats values(type) as type by host | where mvcount(type) =1 | eval reason=if(type="current","New Host","Missing Host") | table host reason | search reason="Missing Host"

Is there any alternate query to find the missing deployment clients? If so, could you please explain it in detail?
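
If tstats is available in your Splunk version, one faster way to enumerate hosts without listing every index in the base search is to pull host names from index-time metadata and then compare against the lookup. A sketch built around the univfwdlist.csv lookup from the query above (field names may need adjusting):

| tstats count where index=* by host | eval type="current" | append [| inputlookup univfwdlist.csv | eval type="existing"] | stats values(type) as type by host | where mvcount(type)=1 AND type="existing" | table host

Because tstats reads indexed metadata rather than raw events, it usually returns much faster than metasearch or a raw search across many indexes.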

Thanks in advance

Mvfilter Error using NULL

Hi,

I'm having problems using mvfilter to filter out NULL strings. This is my search:

index=nmap* | eval state=mvfilter(match(dest_port_state, "open")) | eval state=mvfilter(state!=NULL) | table dest, dest_port, transport, state, app

I've looked at examples that others are using to achieve the same thing, and they appear to be the same as my search; however, Splunk returns the following error:

"Error in 'eval' command: The arguments to the 'mvfilter' function are invalid. "

When I enter a string in quotes such as state!="test", or a value such as state!=123, I get no error, so Splunk isn't recognising NULL.
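
For what it's worth, eval has no NULL literal; null values are normally tested with the isnull()/isnotnull() functions. A minimal sketch of the same search written that way (not necessarily the right overall approach, per the update below):

index=nmap* | eval state=mvfilter(match(dest_port_state, "open")) | where isnotnull(state) | table dest, dest_port, transport, state, app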

Any thoughts?

Thanks.

Update

So it seems my approach is wrong: removing the NULL eval shows the open port as port 7, but looking at the raw event, the open port is in fact 23 (telnet).

I have the following event:

Nmap scan report for 10.10.10.10
Host is up (0.0024s latency).
Scanned at 2014-07-10 17:08:07 BST for 42s
PORT   STATE  SERVICE
7/tcp  closed echo
9/tcp  closed discard
13/tcp closed daytime
21/tcp closed ftp
22/tcp closed ssh
23/tcp open   telnet

After stripping my incorrect eval statements I'm back to:

index=nmap* dest_port_state="open" | table dest, dest_port, transport, dest_port_state, app

I want to write a search that will output a table showing open ports by host. I'm having problems filtering this correctly though. Any help would be appreciated!
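
One possible shape for that table, assuming dest identifies the scanned host and dest_port/dest_port_state are extracted per port and line up correctly (a sketch, not a verified answer; if the extractions misalign within a multi-port event, as the port 7 vs. 23 result suggests, the extractions need fixing first):

index=nmap* dest_port_state="open" | stats values(dest_port) as open_ports by dest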

Thanks Again.

Windows props.conf source stanza - Can't use the embedded Python interpreter

Hi,

I have developed an App, "Nmon performance Monitor for Unix and Linux Systems" (http://apps.splunk.com/app/1753/).

I am currently working on a new release which will allow Windows hosts to perform the conversion step required by the App; this is done by a third-party Python script.

Currently, for *nix hosts, the props.conf source stanza is:

# Source stanza for the nmon2csv.py script
# This source stanza will be called by the archive processor to convert NMON raw data into csv files
# Splunk can manage. See inputs.conf for the associated monitor.
# The standard nmon App and PA-nmon App will use the embedded Splunk interpreter

[source::.../*.nmon]

invalid_cause = archive
unarchive_cmd = $SPLUNK_HOME/bin/splunk cmd python $SPLUNK_HOME/etc/apps/nmon/bin/nmon2csv.py
sourcetype = nmon_processing
NO_BINARY_CHECK = true

I have tried without success to transpose this to Windows, like:

[source::...\.*.nmon]

invalid_cause = archive
unarchive_cmd = $SPLUNK_HOME\bin\splunk.exe cmd python $SPLUNK_HOME\Splunk\etc\apps\nmon\bin\nmon2csv.py
sourcetype = nmon_processing
NO_BINARY_CHECK = true

It does not work, giving errors in splunkd.log like the one below (the French message translates roughly to "The communication channel is about to be closed"):

 ERROR ArchiveContext - archive writer failure: errno=Le canal de communication est sur le point d’être fermé.

I have tried using the full path to Splunk, and I have tried to protect the "Program Files" directory name, with no luck (using ", (), and '):

[source::...\.*.nmon]

invalid_cause = archive
unarchive_cmd = C:\Program Files\Splunk\bin\splunk.exe cmd python C:\Program Files\Splunk\etc\apps\nmon\bin\nmon2csv.py
sourcetype = nmon_processing
NO_BINARY_CHECK = true

I could be tempted to think the problem comes from the directory name and its whitespace, but if I install a Python 2.x package on Windows as a dependency (so I can later call any Python script from Windows) and simply use:

[source::...\.*.nmon]

invalid_cause = archive
unarchive_cmd = C:\Program Files\Splunk\etc\apps\nmon\bin\nmon2csv.py
sourcetype = nmon_processing
NO_BINARY_CHECK = true

Everything works perfectly and my script converts my data as expected.

My goal is to use the embedded Python interpreter so that users don't have to install anything extra to use the App.

As it is, I am currently unable to use the embedded Python interpreter on Windows, whereas I have no such issue on any *nix OS.

Does anyone have any ideas?

Thanks!

dynamic index assignment based on event or log prefix

We have sample events like the ones below; these event logs come from a syslog-ng forwarder. Can you please tell us how to assign the index based on the event prefix value?

Example events:

nbcutve bal-8080 1.2.3.4 - - [11/Aug/2014:15:10:04 +0000] "GET http://test.com HTTP/1.1"
nbcubravo bal-8079 1.2.3.5 - - [11/Aug/2014:15:10:04 +0000] "GET http://test.com HTTP/1.1"
nbcubravo bal-6339 1.2.3.6 - - [11/Aug/2014:15:10:04 +0000] "GET http://test.com HTTP/1.1"
nbcubravo bal-6339 1.2.3.7 - - [11/Aug/2014:15:10:04 +0000] "GET http://test.com HTTP/1.1"
nbcutve bal-8079 1.2.3.4 - - [11/Aug/2014:15:10:04 +0000] "GET http://test.com HTTP/1.1"

In the above example, nbcutve and nbcubravo are different brands. How can we dynamically route each brand's events to the appropriate index (nbcutve / nbcubravo) in transforms.conf and inputs.conf? Can you please share the config details?
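
For reference, per-event index routing is normally done at parse time with a props.conf / transforms.conf pair on the indexer (or a heavy forwarder), keyed on _MetaData:Index. A sketch, assuming the events arrive under a sourcetype called syslog_brand (a placeholder name; adjust the stanza and regexes to your data):

props.conf:

[syslog_brand]
TRANSFORMS-route_brand = route_nbcutve, route_nbcubravo

transforms.conf:

[route_nbcutve]
REGEX = ^nbcutve\s
DEST_KEY = _MetaData:Index
FORMAT = nbcutve

[route_nbcubravo]
REGEX = ^nbcubravo\s
DEST_KEY = _MetaData:Index
FORMAT = nbcubravo

The target indexes still have to exist in indexes.conf; inputs.conf only sets the default index for events that match neither transform.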

joining two indexes by non-unique fields

Index1 with fields (name, "team id", surName)
Index2 with fields (userId, correlationId, operation)

Question 1: I want to join two indexes which hold completely different sets of information. There is a joining field, but the field names are not the same; only the values are.

Both name and userId hold the same set of values, e.g. name=John, userId=John. How do I combine these two indexes on the name and userId fields to get results from both indexes?

Question 2: We have created two form fields in a Splunk dashboard, one for "userId" and one for "team id". "team id" is available only in Index1, and userId is available only in Index2. As I mentioned in question 1, the only joining condition is name and userId. Please suggest the best search query to combine the indexes filtered by "team id" and userId.

Replying to Somesoni2 and Ayn:

Thanks a lot for your quick responses. Please find sample logs from Index1 and Index2 below. There are many more fields in addition to the ones I mentioned, but I am not in a position to reveal them as they are sensitive.

Index1: 2014-08-10 21:34:12,558 INFO TeamReportImpl - {name=John, "team id"=Team 1, surname=Wright}

Index2: 2014-08-10 22:24:11,668 INFO OperationReportImpl - {userId=John operation=Create, correlationId=021C0E78-65D2-AF4F38A93D7E}

The requirement is: we have a dashboard with three fields:

1. Date range
2. Officer name
3. Team drop-down

I have to create several panels to display total counts, e.g.:

4. Total count of Create operations
5. Total count of correlation ids by team (even though the team is not provided in Index2)

Thanks again for the prompt response. -Velu
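
For reference, a common pattern for this kind of cross-index correlation (a sketch using placeholder index names index1/index2 and the field names above) is to search both indexes at once, coalesce the two key fields into one, and aggregate by it instead of using join:

(index=index1) OR (index=index2) | rename "team id" as team_id | eval joinkey=coalesce(name, userId) | stats values(team_id) as team values(surName) as surname values(operation) as operations dc(correlationId) as correlation_count by joinkey

The dashboard's userId token could then be applied in the base search, and the team filter as a | search team=... after the stats.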

What is the queue named "aeq" and how to increase its max_size_kb?

We have an older RedHat 5.6 box running the Splunk Universal Forwarder 5.0.2, processing a few directories with many *.gz files. The system seems to be keeping up well enough, but we've noticed that metrics.log has started showing a lot of "blocked=true" entries, mainly from the "aeq" queue. Here's a sample:

[root@linux1621 splunk]# grep "name=aeq" metrics.log | tail
08-08-2014 20:01:03.681 +0000 INFO  Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=61, largest_size=63, smallest_size=0
08-08-2014 20:01:34.683 +0000 INFO  Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=61, largest_size=61, smallest_size=0
08-08-2014 20:02:05.562 +0000 INFO  Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=61, largest_size=61, smallest_size=5
08-08-2014 20:02:36.564 +0000 INFO  Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=61, largest_size=61, smallest_size=22
08-08-2014 20:03:07.565 +0000 INFO  Metrics - group=queue, name=aeq, max_size_kb=500, current_size_kb=482, current_size=15, largest_size=61, smallest_size=0
08-08-2014 20:03:38.564 +0000 INFO  Metrics - group=queue, name=aeq, max_size_kb=500, current_size_kb=482, current_size=15, largest_size=15, smallest_size=15
08-08-2014 20:04:09.402 +0000 INFO  Metrics - group=queue, name=aeq, max_size_kb=500, current_size_kb=0, current_size=0, largest_size=61, smallest_size=0
08-08-2014 20:04:40.403 +0000 INFO  Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=61, largest_size=61, smallest_size=0
08-08-2014 20:05:11.403 +0000 INFO  Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=61, largest_size=61, smallest_size=1
08-08-2014 20:05:42.404 +0000 INFO  Metrics - group=queue, name=aeq, blocked=true, max_size_kb=500, current_size_kb=499, current_size=61, largest_size=61, smallest_size=7

These seem to correlate with errors seen in the splunkd.log file:

08-08-2014 20:05:42.836 +0000 INFO  BatchReader - Continuing...
08-08-2014 20:05:43.044 +0000 INFO  BatchReader - Could not send data to output queue (parsingQueue), retrying...
08-08-2014 20:05:43.708 +0000 INFO  BatchReader - Continuing...
08-08-2014 20:05:44.057 +0000 INFO  BatchReader - Could not send data to output queue (parsingQueue), retrying...
08-08-2014 20:05:44.394 +0000 INFO  BatchReader - Continuing...
08-08-2014 20:05:45.363 +0000 INFO  BatchReader - Could not send data to output queue (parsingQueue), retrying...
08-08-2014 20:05:46.339 +0000 INFO  BatchReader - Continuing...
08-08-2014 20:05:47.939 +0000 INFO  BatchReader - Could not send data to output queue (parsingQueue), retrying...
08-08-2014 20:05:48.251 +0000 INFO  BatchReader - Continuing...
08-08-2014 20:05:48.459 +0000 INFO  BatchReader - Could not send data to output queue (parsingQueue), retrying...

I tried increasing the maxKBps in limits.conf (doubled it from 1024 to 2048), but the errors returned right after restart.

The CPU and RAM on this system are doing quite well - system load is below 1.00 most of the time, and RAM is mostly buffers and not swapping.

What is "aeq" and where are its parameters adjusted? Can we increase its max_size_kb (presumably to 1024)?

Or is this a red herring and we need to look elsewhere?
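
For what it's worth, queue sizes can generally be overridden per queue in server.conf with a [queue=<name>] stanza, so if aeq is the queue feeding the archive processor that unpacks the .gz files (which the BatchReader/parsingQueue messages suggest), a sketch of the change on the forwarder would be:

# server.conf on the forwarder (restart required); 1MB doubles the 500KB limit seen in metrics.log
[queue=aeq]
maxSize = 1MB

Whether a bigger queue actually helps, or just moves the backpressure to the parsing/output queues, is a separate question.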


Distinct count of all unique email addresses which do not end with certain domain names

I want to count all unique email addresses in a multi-value "to" field which do not end with certain domain names.

stats dc(to) by mid

should count the number of unique "to" recipients per email message (mid). Correct me if it only counts one per event instead of one per value in the multi-value field.
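
One way to restrict the count to addresses outside given domains (a sketch with placeholder domain names) is to filter the multivalue field first and then take the distinct count:

... | eval external_to=mvfilter(NOT match(to, "@(example\.com|example\.org)$")) | stats dc(external_to) as external_recipients by mid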

SplunkAppForCiscoUCS Version 1.0 errors

When I use the app I get the following errors:

[subsearch]: Eventtype 'ucs-perf' does not exist or is disabled.

Eventtype 'ucs-inv' does not exist or is disabled.

[subsearch]: The specified search will not match any events

How to get stats command to calculate with precision for numbers in scientific notation?

I have found that the stats command's output doesn't use scientific notation. This means that if I need to calculate some statistics on a set of very small numbers then stats just reports 0.00000.

You can see the effect with this search:

| stats count | eval small="1.23456e-306" | stats min(small)

It will always return 0.00000.

Is there a way to have the stats command in the above search keep the precision and return 1.23456e-306?

(I know in this example the stats command is pointless because there's only one value, but the real case is that I have many events containing very small numbers and I want to calculate various statistics of the data set without all the small numbers getting changed to 0.)
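
One workaround, short of changing how results are rendered, is to do the arithmetic in log space so the numbers stay well inside the displayable range. A sketch, assuming the field (here a hypothetical numeric field called value) holds the small numbers in your real events; note that only order-preserving statistics such as min and max carry over directly, and converting back with pow(10, ...) would of course hit the same display problem:

... | eval log10_val=log(value) | stats min(log10_val) as min_log10 max(log10_val) as max_log10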

Streamstats by-clause

I am trying to use streamstats to calculate the amount of downtime for several of our services based on sequences of log messages that state that a given service is unavailable.

I first tried doing this with transaction, using the field "service" to identify events belonging to the same transaction and the startswith and endswith keywords to indicate the event sequence for a given transaction. Each transaction would start with the first log message for a service with a specific error code, and end with the first log message for the same service with a successful error code. This did not work out for me, since it did not correctly detect the transaction edges: the transaction would span a much longer interval than required. Also, a single event would spawn multiple transactions, which is not correct, since any start of a fault will be followed by the end of that fault, giving only a single transaction. I then turned to streamstats.

Using streamstats I look at consecutive events for a given service to see when the error code changes from successful to failed and back again. Since I want to do this per service, I add the "by service" clause at the end of the command, and then perform different calculations to mark the edges of the downtime and find the duration. The problem is that when I do the search for a specific service I get a different result from when I rely on the by-clause to separate the events. I suspect this may be because streamstats separates the events after seeing them in the stream, so two events that are adjacent when I search for a specific service will not be adjacent in the stream when streamstats takes care of the splitting, since it sees the entire event stream. Let me give you two examples of the commands I try to use.

sourcetype=TM | streamstats current=f window=1 last(errorgroup) as last_error last(_time) as last_time by service | eval end_transaction=if(last_error!="datakilde" AND errorgroup="datakilde", 1, 0) | eval start_transaction=if(last_error ="datakilde" AND errorgroup!="datakilde", 1, 0) | eval start_time=if(start_transaction=1, last_time, 0) | eval end_time=if(end_transaction=1, last_time, 0)

The above command identifies the transition from one error group to another and marks the appropriate events. I also get the time of the adjacent event to correctly calculate the downtime, since the log is indexed in reverse order. My problem is that this search gives different results for a given service than this one, where I specify the service in the initial search:

sourcetype=TM service=TF | streamstats current=f window=1 last(errorgroup) as last_error last(_time) as last_time by service | eval end_transaction=if(last_error!="datakilde" AND errorgroup="datakilde", 1, 0) | eval start_transaction=if(last_error ="datakilde" AND errorgroup!="datakilde", 1, 0) | eval start_time=if(start_transaction=1, last_time, 0) | eval end_time=if(end_transaction=1, last_time, 0)

As I mentioned, I suspect that the problem is with how the events reach the streamstats function. If this is correct, is there any way to separate the event stream per service before it reaches the streamstats function?

Form Input - Add submit button for each panel

Hi,

I would like to add a separate submit button for each panel of a form. At the moment there is only one button at the top that reruns all searches. Is this possible?

BR

Heinz

Table highlight in Splunk 5.0.2

Hi,

Is it possible to do table highlighting using JavaScript in Splunk version 5.0.2?

Please let me know what possibilities are available for table highlighting without using Sideview Utils.

How to remove a field from WMI search query results in Splunk?

I have configured the below query in wmi.conf:

wql = select Caption,State from Win32_Service where Name like '%BlackBerry%'

Splunk is pulling the status of all Blackberry services correctly:

8/11/14 11:48:57.079 AM
20140811114857.079470 Caption=BlackBerry Synchronization Service State=Running wmi_type=BlackBerryService

The only problem is that when I table the output with | table host, Caption, State, the result comes up this way:

MCLCOVBB61VWIN BlackBerry Running

The service name is incomplete. What I found was that if I expand the fields in the original query, I see another field named Caption which has only 'Blackberry' as the value.

How do I get rid of the second Caption field?


Rex command: Help with regex to extract fields containing credit card numbers

Hello,

I have a problem with a Splunk search. What I need to do is search on fields containing CC numbers. I have tried the example from the Splunk tutorial:

| rex field=ccnumber mode=sed "s/(\d{4}-){3}/XXXX-XXXX-XXXX-/g"

And I modified it as:

| rex field=kreditnakatica mode=sed "s/(\d{4}){3}/XXXXXXXXXXXX/g"

so as to accommodate my field name and the CC format with no hyphens, but it does not work. Overall, I seem to have a problem understanding what kind of regex Splunk will accept; for example, it does not accept regexes such as \d{16}.
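
In case it helps the discussion, sed-mode rex should accept \d with quantifiers and backreferences, so a variant that masks the first 12 digits of a 16-digit number without hyphens could look like this (a sketch against the kreditnakatica field above, untested against your data):

| rex field=kreditnakatica mode=sed "s/\d{12}(\d{4})/XXXXXXXXXXXX\1/g"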

Thank you and cheers!

documentation of token filter like "|s" (Simple XML)

In the Simple XML Reference (drilldown element "set"), a "|s" token filter is mentioned, which should put quotes around a token value.

Example: <set token="Token Name">sourcetype=$click.value|s$</set>

How does this token filter work exactly? Does it also escape double quotes in the token value?

Is there any documentation for token filters?

Are there more token filters (in Simple XML)?
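
As a point of comparison, the same filter in a drilldown context (assuming click.value carries a sourcetype name picked from a table row) would be expected to wrap the raw value in double quotes when the token is expanded:

<drilldown>
  <set token="selected_sourcetype">$click.value|s$</set>
  <!-- if click.value is access combined, a search using $selected_sourcetype$ should see "access combined" -->
</drilldown>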

Splunk user roles

Dear All,

Can anyone guide me in understanding the functionality of Splunk users? When we define users in Splunk, we can assign 5 roles:

1) Admin
2) user
3) can_delete
4) power
5) Splunk-System-Role

Can anyone tell me what the functionality of each of these roles is?

Thanks

Gajnan Hiroji

abnormal Field extraction after UCS upgrade?

Hello, has anyone seen trouble with field extraction in UCS 2.2(2c)? What do you recommend? (Should I re-create all the regexes?)

Long story: my "UCS app" lab worked great 15 days ago, and now the dashboards don't show anything. The error might be related to an upgrade of UCS to 2.2(2c) performed by other lab admins.

The field extraction used to be correct and is now wrong for every log type (fault, inventory, perf, ...). For example, 15 days ago the logs (extracted via Python) returned by the search eventtype="ucs-perf" looked like this:

sys/chassis-1/slot-1/host/port-1/tx-stats|0|0|58982460|0|0|0|0|0|0|0|2014-06-26T16:07:36.394|0|0|--|71749|0|0|no|91631|0|0|0|0|196621|0|0|7498|0|31151|33882118|0|0|0|194531|0

Now the same log looks like this:

dn|load|memAvailableAvg|memCachedMax|memCachedMin|loadMin|loadMax|update|timeCollected|memAvailableMax|intervals|thresholded|memCached|memCachedAvg|memAvailable|loadAvg|memAvailableMin|suspect
sys/switch-B/sysstats|0.250000|13183|1515|1514|0.000000|0.340000|262155|2014-07-11T13:31:06.463|13185|58982460|--|1515|1514|13184|0.107273|13184|no
sys/switch-A/sysstats|0.020000|13287|1499|1499|0.000000|0.150000|262155|2014-07-11T13:30:57.511|13288|58982460|--|1499|1498|13288|0.082727|13287|no

Consequence: the field extraction isn't correct, so the dashboards don't show anything.

Regards,
