Latest Questions on Splunk Answers

Distributed search scheduled alerts on SH


We have an indexer indexing events whose _time is 5 hours ahead, and a distributed search from the search head (SH) that bounds index time to the last 10 minutes. Although events with _time + 5 hours and a matching index time exist, they don't show up in scheduled searches on the SH. Why?

Does the scheduler on the SH introduce some filter at run time that prevents scheduled searches from returning events with timestamps later than the local run time of the query? Kindly clarify.
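For reference, a minimal sketch of a search bounded by index time rather than event time (hedged: the index and field names are placeholders, and the widened _time window is an assumption about what is needed here):

index=main _index_earliest=-10m _index_latest=now earliest=0 latest=+6h
| table _time _indextime host source

Widening the _time window (latest=+6h here) is the part that admits events stamped up to six hours in the future; with the default latest=now, events whose _time lies ahead of the search's run time are filtered out, which may be exactly the scheduler behavior being observed.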


Splunk 6.1.1 RPM is not relocatable, unlike previous versions


On our servers we need to use the --prefix option when installing Splunk from RPM packages. This has been working with 6.0.x, but it does not work with 6.1.1:

$ rpm --prefix=/home/foo -U splunk-6.1.1-207789-linux-2.6-x86_64.rpm
warning: splunk-6.1.1-207789-linux-2.6-x86_64.rpm: Header V3 DSA/SHA1 Signature, key ID 653fb112: NOKEY
error: package splunk is not relocatable

I would prefer to continue using the RPMs, so is there a way this capability can be added back in the latest version?
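In case a workaround helps in the meantime, a possible manual approach (an assumption on my part, not an official procedure; it bypasses the RPM database, so future RPM upgrades won't track the files) is to extract the payload and relocate it by hand:

rpm2cpio splunk-6.1.1-207789-linux-2.6-x86_64.rpm | cpio -idmv
mv opt/splunk /home/foo/splunk

The payload unpacks under ./opt/splunk, the package's default prefix.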

Adding a drop-down list in the form


Hi, I am wondering how to add a drop-down list in a dashboard/form.

The situation is the following: I have a bunch of queries that I would like to run against a specific customer. Let's say that I have {Customer1, Customer2, Customer3, ... Customer n} and the queries:

- Query 1: ... | search customerName="Customer1"
- Query 2: ... | search customerName="Customer1"
- ...
- Query m: ... | search customerName="Customer1"

I would like these queries to run against the CustomerX selected from the drop-down list {Customer1, Customer2, Customer3,...Customer n}

Is it possible to get this done using Splunk? Any help will be more than welcome.

I'm using Splunk 6.0

Many thanks.
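A minimal SimpleXML sketch of the idea (hedged: the choice values mirror the question, the <searchString> panel syntax assumes Splunk 6.0-era simple XML, and each ... stands for one of the existing queries):

<form>
  <label>Customer queries</label>
  <fieldset>
    <input type="dropdown" token="customer">
      <label>Customer</label>
      <choice value="Customer1">Customer1</choice>
      <choice value="Customer2">Customer2</choice>
      <choice value="Customer3">Customer3</choice>
    </input>
  </fieldset>
  <row>
    <table>
      <title>Query 1</title>
      <searchString>... | search customerName="$customer$"</searchString>
    </table>
  </row>
</form>

Each panel references the selected value through the $customer$ token, so one drop-down drives all m queries.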

Getting transaction without using transaction command


Hi

I am using Hunk, and I am looking for a way to compute transactions (grouping events by userid between a start-transaction event and a stop-transaction event).

For example, I have events something like this:

2014/05/01 00:00:01 userid=u01 action=start
2014/05/01 00:00:02 userid=u02 action=start
2014/05/01 00:00:03 userid=u01 action=stop
2014/05/01 00:00:04 userid=u03 action=start
2014/05/01 00:00:05 userid=u03 action=stop
2014/05/01 00:00:06 userid=u01 action=start
2014/05/01 00:00:07 userid=u01 action=stop
2014/05/01 00:00:08 userid=u02 action=stop

The search result with the transaction command is:

index=main sourcetype=transtest
| transaction userid startswith=action=start endswith=action=stop
| table _time userid duration

           _time            userid duration
--------------------------- ------ --------
2014-05-01 00:00:06.000 JST u01           1
2014-05-01 00:00:04.000 JST u03           1
2014-05-01 00:00:02.000 JST u02           6
2014-05-01 00:00:01.000 JST u01           2

My attempt without the transaction command is like this:

index=main sourcetype=transtest
| stats min(_time) as _time max(_time) as max by userid
| eval duration = max - _time
| table _time userid duration

           _time            userid duration
--------------------------- ------ --------
2014-05-01 00:00:01.000 JST u01           6
2014-05-01 00:00:02.000 JST u02           6
2014-05-01 00:00:04.000 JST u03           1

I want to get the same result the transaction command gives, but I cannot use the transaction command because of Hunk's limitations.

Is there any way to get transaction information without using transaction command?

Any comment would be appreciated.
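One approach that might work (a sketch; it assumes streamstats and eval-inside-aggregation are available in your Hunk version, so verify against its command support) is to number each start/stop pair per userid with streamstats, then aggregate with stats:

index=main sourcetype=transtest action=start OR action=stop
| sort 0 _time
| streamstats count(eval(action=="start")) as txn by userid
| stats min(_time) as _time max(_time) as stop_time by userid txn
| eval duration = stop_time - _time
| table _time userid duration

Each start event increments txn for its userid, so the following stop event lands in the same group; on the sample data above this reproduces the four rows the transaction command returns.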

Install on Ubuntu 14.04 fails


When installing Splunk on Ubuntu 14.04 it fails with the following message:

# dpkg -i splunk-6.1.1-207789-linux-2.6-amd64.deb 
(Reading database ... 80138 files and directories currently installed.)
Preparing to unpack splunk-6.1.1-207789-linux-2.6-amd64.deb ...
Unpacking splunk (6.1.1) over (6.1.1) ...
Setting up splunk (6.1.1) ...
/var/lib/dpkg/info/splunk.postinst: line 85: 2853SPLUNK_HOME/etc/splunk-launch.conf: No such file or directory
touch: cannot touch ‘2853SPLUNK_HOME/ftr’: No such file or directory
/var/lib/dpkg/info/splunk.postinst: line 103: 2853SPLUNK_HOME/ftr: No such file or directory
/var/lib/dpkg/info/splunk.postinst: line 128: 2853SPLUNK_HOME/ftr: No such file or directory
chown: cannot access ‘2853SPLUNK_HOME’: No such file or directory
complete

I guess the number 2853 is a PID and varies between tries.
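That guess is consistent with POSIX shell expansion, where $$ is the current process ID; an unescaped $$SPLUNK_HOME in the postinst script would expand exactly this way (a hedged reading of the output, not a confirmed diagnosis):

# In sh, "$$" expands to the shell's PID, so "$$SPLUNK_HOME" becomes
# "<pid>SPLUNK_HOME" rather than the value of the SPLUNK_HOME variable.
echo "$$SPLUNK_HOME"    # prints e.g. 2853SPLUNK_HOME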

Any idea?

Cheers Steffen

Making Search bar available in Saved Searches


I have added some searches to some Navigation dropdowns in an app I created. When you select a saved search in the dropdown menu, it opens the saved search. Is there a way I can make it so the saved search loads AND has the search bar available, without making users click Edit -> Open in Search?

Here is the current nav menu:

<nav color="#65A637">
  <collection label="Dashboards">
    <view name="search" default="true"/>
    <view name="inventory_integration"/>
    <view name="api-gateway-traffic"/>
    <view name="order_integration"/>
    <view name="exc"/>
    <view name="queue_errors"/>
  </collection>
  <collection label="Misc. Views">
    <view source="unclassified"/>
  </collection>
  <collection label="Saved Searches">
    <collection label="Errors">
      <saved source="unclassified" match="error"/>
    </collection>
    <collection label="API">
      <saved source="unclassified" match="api"/>
    </collection>
    <collection label="TIS">
      <saved source="unclassified" match="tis"/>
    </collection>
    <collection label="Listing Lookup">
      <saved source="unclassified" match="listing"/>
    </collection>
    <collection label="Support">
      <saved source="unclassified" match="CS"/>
    </collection>
    <collection label="Todds Frequent">
      <saved source="unclassified" match="Todd"/>
    </collection>
  </collection>
</nav>
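One hedged idea (assuming your version's nav XML accepts plain <a href> links and that the search view honors the s parameter for preloading a saved search; both would need verifying): link straight to the search view, which lands users on an editable search bar:

<collection label="Errors">
  <a href="search?s=My%20Error%20Search">My Error Search (editable)</a>
</collection>

The saved-search name in the href is a placeholder.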

Splunk Executive Reports Tool needed - Trends and Violations - Can sparklines (trends) be superimposed with an SLA (flat line) in one picture to depict trends and violations for a quick executive summary?


Hi All,

Can someone recommend a good executive-reporting Splunk add-on they have tried? Splunk seems aimed at a technical audience, but Directors, VPs, etc. need a better tool to produce executive reports in a short, concise fashion. A good example is above. Sparklines are a good way to illustrate trends such as degradations in response times, but if sparklines could be superimposed with a flat SLA line that also indicates SLA violations, that would be a superb way to communicate with executives instead of pages of elaborate Splunk reporting graphs.
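A minimal SPL sketch of the flat-overlay part (hedged: response_time and the threshold of 200 are placeholders for whatever metric and SLA apply):

... | timechart span=1h avg(response_time) as avg_response
| eval SLA = 200

Rendered as a line chart, the constant SLA field draws the flat threshold line on top of the trend, so any crossing is a visible violation.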

Please advise. Thanks in advance.

Geo Location Lookup Script (powered by MAXMIND) -- broken with 6.1?


Love this app! Worked fine with 6.0.2 -- but broke when I applied 6.1 (build 206881).

"Script for lookup table 'geoip' returned error code 1. Results may be incorrect. "

Any ideas? If the developers are around -- pretty please fix?


Timechart with large split-by gives "Your search generated too much data for the current visualization configuration." Is it truncating stats or chart or both?


When running a search against a weblog and attempting "| timechart span=1h limit=0 count by queryname" over 24 hours, I get "Your search generated too much data for the current visualization configuration."

Is it just truncating the graph, or the statistics table as well?
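For what it's worth, a hedged pointer (the option name is taken from the chart configuration reference and should be verified against your version before relying on it): the message appears to concern only the rendered chart, and the per-panel cap can reportedly be raised in simple XML:

<!-- Assumed option name; check the Chart Configuration Reference. -->
<option name="charting.chart.resultTruncationLimit">10000</option>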

Is it possible to index XML?


Is it possible to index XML-formatted data into Splunk and search it?

In our case, we need to index the XML and correlate it with other logs using Splunk. Can you please suggest the best approach?

Sample Data:

<listpersonattributes recordcount="717">
  <personattribute id="3">
    <name>firstName</name>
    <desc>firstName</desc>
    <attributetype>STRING</attributetype>
    <isimmutable>true</isimmutable>
    <createddatetime>2008-07-03 02:41:19.0</createddatetime>
  </personattribute>
  <personattribute id="4">
    <name>lastName</name>
    <desc>Last Name</desc>
    <attributetype>STRING</attributetype>
    <isimmutable>false</isimmutable>
    <createddatetime>2008-10-14 02:35:24.0</createddatetime>
  </personattribute>
  <personattribute id="6">
    <name>middleName</name>
    <desc>Middle Name</desc>
    <attributetype>STRING</attributetype>
    <isimmutable>true</isimmutable>
    <createddatetime>2007-11-30 01:12:55.0</createddatetime>
  </personattribute>
</listpersonattributes>
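A common pattern (a sketch; the sourcetype name person_xml is a placeholder, and event breaking would still need tuning for this feed) is to index the XML as-is and let search-time extraction pull the fields:

# props.conf - hypothetical sourcetype for this feed
[person_xml]
KV_MODE = xml

With KV_MODE = xml, fields like name and attributetype become searchable automatically; the spath command is another search-time option for digging into specific paths.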

create lagtime panel with average time between two string value datetime fields


I have two datetime fields that I would like to use to calculate average lag time, as each incoming message contains these fields. I would like to display some sort of panel showing this in seconds.

pubDate:"2014-04-30 11:27:49"   scrapeDate:"2014-04-30 11:27:53"

any help appreciated.
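A minimal SPL sketch (assuming both fields are already extracted as strings in the format shown):

... | eval pub = strptime(pubDate, "%Y-%m-%d %H:%M:%S")
| eval scrape = strptime(scrapeDate, "%Y-%m-%d %H:%M:%S")
| eval lag = scrape - pub
| stats avg(lag) as avg_lag_seconds

strptime converts each string to Unix time, so the difference is in seconds; the final stats value can feed a single-value panel directly.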

Transform log file or field at index time using script/python instead of at search time?


I have a base64 field in my IIS log file. There are 3 very important properties within the base64 string that I want to extract at index time. It looks like everything available within splunk will be translated at search time and not added to the index.

What I don't want to have to do is manage a scheduled process (Windows) on each server to run a transform script on the log, make sure it ran, process it intelligently to avoid re-processing already-translated rows, have Splunk monitor the translated log instead, etc. Avoiding that kind of plumbing was largely the point of adopting Splunk.

I would even be ok if splunk orchestrated running the transform script if it couldn't directly do the decode at index time. E.g., splunk runs this script before indexing.

I am currently using a custom search command to do the decoding in Python, but doing nothing more than calling the following incurs a 13-15x performance hit. I want to be able to filter on these 3 decoded properties, which makes this approach unacceptable.

import splunk.Intersplunk

# getOrganizedResults() returns (results, dummyresults, settings)
results, dummyresults, settings = splunk.Intersplunk.getOrganizedResults()
for r in results:
    pass  # do nothing
splunk.Intersplunk.outputResults(results)

Any help or suggestions are appreciated

Filter several strings in transforms.conf


I'm trying to filter out log events that contain one of the strings below. This is not working, and I'm not sure why:

In transforms.conf:

[setnull]
REGEX = (W0032|L0041|ACM0033|ACDB0000|\[DEBUG\]) 
DEST_KEY = queue
FORMAT = nullQueue

In props.conf:

[default]
TIME_FORMAT = %Y-%m-%d %H:%M:%S,%9N
TRANSFORMS-null = setnull
CHARSET = AUTO
NO_BINARY_CHECK = 1
pulldown_type = 1

[foo-prod]
TIME_FORMAT = %b %d %H:%M:%S
NO_BINARY_CHECK = 1
pulldown_type = 1

What am I forgetting or doing wrong?

Thanks!

edit: log entries look something like this. Pretty standard stuff:

2014-05-13 22:56:20,988 [INFO] ACDB0000: ACDB_LOG - IncomingRequest. guid=AN-ON method=register idx=0 <soap:envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/"><soap:body><ns2:register

The end result is that stuff isn't getting filtered.
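A hedged guess at a next step (assuming these events are parsed under the foo-prod sourcetype, and remembering that nullQueue routing happens at parse time, so the settings must live on the indexer or heavy forwarder): attach the transform to the sourcetype stanza explicitly rather than relying on [default]:

# props.conf - assumption: the events carry this sourcetype
[foo-prod]
TRANSFORMS-null = setnull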

Make chart value ranges and show the number of hits in that range


I have a database with two values (time and fees). It shows the fee that someone pays and the time in seconds each transaction takes to validate. I represent it with a simple bar chart like:

source="dbmon-dump://Bitcoin/Transactions" | eval Fee=fee/1000 | chart avg(Fee) by time

I would like to represent time ranges instead, as I have many distinct times for each transaction and it's difficult to represent in a bar chart; e.g. the field time grouped in buckets of 100 (0-100, 101-200, 201-300, and so on).

If there is a way, I would also like to represent the number of transactions (number of rows) used in each time range in the same chart, in line mode: for example, in the 0-100 range, an average fee of 25 (shown as the first column, 25 high on the y-axis) and, say, a point at 200 (on a second y-axis) representing the number of rows used to obtain that column.
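A minimal sketch of the bucketing part (hedged: bin is the standard SPL command for numeric ranges; showing the count as a line on a second y-axis is a chart-formatting step, e.g. a chart overlay, on top of this):

source="dbmon-dump://Bitcoin/Transactions"
| eval Fee = fee/1000
| bin time span=100
| chart avg(Fee) as avg_fee, count as transactions by time

bin span=100 collapses time into the 0-100, 101-200, ... ranges, and the extra count column carries the number of rows behind each bar.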

Can anybody help with this? It should be very simple, but I'm just starting to work with Splunk.

Thank you very much !!

Securing Splunkweb (Free version)


Hi.

It sounds completely inane to me to not have any authentication on the free splunkweb interface.

I use Splunk professionally, so naturally I run Splunk Free on my personal servers, but they are just not secure!

How would one go about securing their splunkweb in the free version?
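One common pattern (a sketch, not official guidance: a reverse proxy with basic auth in front of Splunk Web, which listens on port 8000 by default; the hostname and htpasswd path are placeholders):

# Hypothetical nginx front end adding basic auth to Splunk Free's web UI.
server {
    listen 80;
    server_name splunk.example.com;

    location / {
        auth_basic           "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;  # create with htpasswd
        proxy_pass           http://127.0.0.1:8000;
        proxy_set_header     Host $host;
    }
}

Binding Splunk Web to 127.0.0.1 so it is reachable only through the proxy would complete the setup.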


Failing to install jmx app. Keep getting following 3 errors. Appreciate any help possible:


While installing the JMX app (Apps > Find More Apps > jmx app > Install):

1) An error occurred while installing the app: 500 - [HTTP 500] [HTTP 409] App "jmx_ta" already exists; use the "-update true" argument to install anyway

2) The messages pulldown shows this error: Unable to initialize modular input "jmx" defined inside the app "jmx_ta":

Introspecting scheme=jmx: script running failed (exited with code 1). 5/13/2014 9:56:04 AM

3) Back on 0301 - it appears as if the jmx app installed, but when I go to Apps > jmx > example JMX modular input, I get: 404 Not Found - Return to Splunk home page - Splunk cannot find "data/inputs/jmx".
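For error 1, the HTTP 409 message itself points at the update flag; a hedged CLI equivalent (the package path is a placeholder) would be:

$ $SPLUNK_HOME/bin/splunk install app /tmp/jmx_ta.tgz -update 1

That reinstalls over the existing jmx_ta app instead of failing on the conflict.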

Splunk for SalesForce


I am researching the use of Splunk to pull system event logs out of SalesForce. I am new to Splunk and would like to know if anybody has used this or has it successfully working in their organization. Also, what API or connector is needed, and what types of logs is Splunk able to pull out of SalesForce?

Can splunk retrieve system logs from SalesForce?

Thank you, Christine Vladic cvladic@standard.com

Issues Monitoring Fast Rotating Logs - UNIX


Hi All,

I am running into a few errors on my host that is monitoring some logs in RHEL. One of the logs in question could write, fill up, close and rewrite again, all within a second.

A few errors in my splunkd on the host:

05-12-2014 13:25:29.087 -0700 ERROR WatchedFile - Error reading file 'LOG LOCATION': Stale NFS file handle

05-12-2014 13:25:29.087 -0700 ERROR TailingProcessor - error from read call from 'LOG LOCATION'.

05-12-2014 13:26:24.187 -0700 INFO WatchedFile - File too small to check seekcrc, probably truncated. Will re-read entire file='LOG LOCATION'

I am running crcSalt = <source> and am still experiencing the problem. I've looked throughout Answers, but I'm not sure exactly what is causing this problem: whether it's the speed at which the file is written to, an issue where Splunk thinks it has already read the file, or something else.

Anyone have any ideas?

Thanks in advance!

Convert NT Epoch Time with props.conf


I'm using DB Connect to access our SQL SCCM database, which stores timestamps as NT epoch. I want to use props.conf to have the data indexed with the time field converted for human readability. From the search line I can easily leverage strftime to get the date I need. However, due to how NT epoch works, that same approach doesn't work in props.conf.

Here is the stanza for my props.conf:

[host::UBERSCCMSERVER]
TIME_PREFIX = (?i)^(?:[^ ]* ){8}\w+=(?P<FIELDNAME>.+)
MAX_TIMESTAMP_LOOKAHEAD = 0
TIME_FORMAT =/10000000-11644473600,"%m-%d-%Y %H:%M:%S"

Anyone know how to modify my TIME_FORMAT line to work properly with NT Epoch?
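If index-time parsing proves a dead end (TIME_FORMAT only accepts strptime-style patterns, not arithmetic like the /10000000-11644473600 expression above), a search-time sketch using the same constants would be (nt_timestamp is a placeholder for the extracted raw field):

... | eval _time = nt_timestamp/10000000 - 11644473600

NT epoch counts 100-nanosecond intervals since 1601-01-01, so dividing by 10,000,000 yields seconds and subtracting 11644473600 shifts to the Unix epoch. Alternatively, the conversion could be pushed into the DB Connect SQL query itself so the value arrives already converted.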

How does Splunk handle a newly copied version of a monitored file?


For example, if I manually copy a new "message.log" over one in a path Splunk is monitoring, and the new file is the old one with some growth at the end, how can I make sure Splunk ignores the already-indexed data and only reads the appended part?
