
splunk with S3 add-on - monitor an S3 directory


Hi, I have installed Splunk with the S3 add-on. I can add data for an S3 bucket, but I can't add data for a bucket/directory: I get an error saying no objects were found under the directory, even though the directory does contain subdirectories, with files within those subdirectories. How can I work around this? Thanks.
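For what it's worth, if the add-on follows the s3:// inputs.conf stanza form (an assumption on my part; check the add-on's README, since the bucket, prefix, and sourcetype below are illustrative), pointing an input directly at the deeper prefix might sidestep the directory-listing problem:

# inputs.conf - hypothetical bucket/prefix; key settings per the add-on's docs
[s3://mybucket/path/to/subdirectory/]
key_id = <AWS access key id>
secret_key = <AWS secret access key>
sourcetype = s3_data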


Unauthenticated dashboard


Hello,

I don't know about you, but every so often I get a request for a dashboard that does not require a user to authenticate or time out. Prior to 6.x, the timeout issue could be solved using JavaScript. Still, execs/management do not like to log in ("It should be like magic; doesn't the system know who I am?" Joking, just a little). Has anyone found a solution to this yet?

thawing out multiple buckets at once?


Is it possible to thaw out more than one bucket at once, or do you have to run a rebuild for each one individually?

I have to thaw out months and months' worth of data - something like hundreds of buckets. I'd hate to rebuild them one at a time.
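In case it helps frame the question: a minimal sketch of scripting the per-bucket rebuild in a loop, assuming the buckets have already been copied into thaweddb and that the Splunk home and index name below are placeholders:

import os
import subprocess

SPLUNK_HOME = "/opt/splunk"  # placeholder
# Hypothetical index name; thawed buckets live under <index>/thaweddb
thawed = os.path.join(SPLUNK_HOME, "var/lib/splunk/myindex/thaweddb")

for bucket in sorted(os.listdir(thawed)):
    path = os.path.join(thawed, bucket)
    # Thawed bucket directories are conventionally named db_<newest>_<oldest>_<id>
    if os.path.isdir(path) and bucket.startswith("db_"):
        # "splunk rebuild <bucket_dir>" rebuilds one bucket's index files
        subprocess.check_call([os.path.join(SPLUNK_HOME, "bin", "splunk"),
                               "rebuild", path])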

Rule-based source typing


I'm trying to set the sourcetype on some events I get based on their contents, and then I want to send each of those differentiated sourcetypes to their own indexes. I've tried a bunch of different ways, and none of my approaches seem to work quite like the docs say they should.

So, for starters, source typing. I feel like what I'm trying to do is simple: if the string FlightEvent occurs anywhere in the event, it should be a FlightEvent. Flight and Event are actually separate XML opening tags, but I can't seem to get less-than and greater-than symbols to display in markdown; I don't know if that has any impact in props.conf or transforms.conf.

In props.conf

[FlightEvent]
TRANSFORMS-flighteventtrans = flighteventformat

In transforms.conf

[flighteventformat]
REGEX = FlightEvent
LOOKAHEAD = 16
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype:FlightEvent

No good.

I tried setting up rule based source typing.

In props.conf

[rule::flighteventrule]
sourcetype=FlightEvent
MORE_THAN_1 = FlightEvent

No good. I also can't get sourcetypes to go to the correct indexes (or to any index other than main, actually), but I guess I'll try to deal with that once I get source typing figured out.
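For reference, here is a sketch of the documented index-time override this appears to be aiming at. Two details differ from the attempt above: the props.conf stanza has to match the sourcetype (or source) the events arrive with, not the sourcetype being assigned, and FORMAT uses a double colon. The incoming stanza name and target index here are hypothetical:

In props.conf (keyed on the incoming sourcetype, not the target):

[incoming_sourcetype]
TRANSFORMS-flightevent = set_flightevent_sourcetype, route_flightevent_index

In transforms.conf:

[set_flightevent_sourcetype]
REGEX = FlightEvent
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::FlightEvent

[route_flightevent_index]
REGEX = FlightEvent
DEST_KEY = _MetaData:Index
FORMAT = flight_index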

Saved search time modifier in simple XML dashboard not working


I am writing a simple XML dashboard (so I can do scheduled PDF reporting) in Splunk 5.0.5.

I want to do a side-by-side graph of a saved search:

<row>
    <chart>
      <title>Internet Inbound Destination IP (Yesterday)</title>
      <searchName>H-Top-Internet-dst-ip-permitted</searchName>
      <earliestTime>-1d</earliestTime>
      <latestTime>@d</latestTime>
      <option name="charting.chart">bar</option>
    </chart>
    <chart>
      <title>Internet Inbound Destination IP (Last 60 Minutes)</title>
      <searchName>H-Top-Internet-dst-ip-permitted</searchName>
      <earliestTime>-60m</earliestTime>
      <latestTime>@m</latestTime>
      <option name="charting.chart">bar</option>
    </chart>
  </row>

But the result is a row with two identical graphs, both for "Yesterday".

My saved search is currently like this:

[H-Top-Internet-dst-ip-permitted]
#dispatch.earliest_time = -2d@d
#dispatch.latest_time = @d
search = index=techsecu_summary source="Top-Internet-dst-ip-permitted" | top asa_dstip
action.email.inline = 1
alert.digest_mode = True
alert.suppress = 0
alert.track = 0
auto_summarize = 1
auto_summarize.dispatch.earliest_time = -7d@d

All the lines below "search =" were added to accelerate the search. I previously had the two "dispatch." lines in there, but they have been commented out for some time.

A colleague did point this post out to me, but that may very well have applied to Splunk 4 or earlier. I checked the simple XML reference for 5.0.5; it does show the <earliestTime> and <latestTime> options for panels.

So, have I hit a bug? Or am I misunderstanding the documentation?
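One workaround that might be worth testing (a sketch, not a confirmed fix): reference the saved search inline via the savedsearch command, so the panel's own time range governs the dispatch instead of the saved search's settings:

<chart>
  <title>Internet Inbound Destination IP (Last 60 Minutes)</title>
  <searchString>| savedsearch H-Top-Internet-dst-ip-permitted</searchString>
  <earliestTime>-60m</earliestTime>
  <latestTime>@m</latestTime>
  <option name="charting.chart">bar</option>
</chart>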

Splunk solution for Software Defined Networking


Does anyone have a use case for visualizing SDN traffic? With an overlay network (for example VXLAN, or MPLS over GRE), we cannot see the details of the traffic flowing through the underlay network, so I want to visualize that traffic using Splunk. With the OpenFlow protocol, has anyone tried sending the flow table to Splunk and visualizing per-flow traffic? Please let me know about these SDN use cases if you have any.

Regards,

Mismatched search results between sdk-python and Splunk Web


Hi, I'm just learning to use Splunk and the Python SDK. I run this search from the SDK:

search = 'search index=main sourcetype=syslog | search *ERROR* | stats count by process'
params = {"earliest_time": "-30d", "latest_time": "now", "exec_mode": "blocking", "auto_cancel": 600}

And I get this result:

<?xml version='1.0' encoding='UTF-8'?>
<results preview='0'>
  <meta>
    <fieldOrder>
      <field>process</field>
      <field>count</field>
    </fieldOrder>
  </meta>
  <result offset='0'>
    <field k='process'><value><text>dbus</text></value></field>
    <field k='count'><value><text>4</text></value></field>
  </result>
  <result offset='1'>
    <field k='process'><value><text>kernel</text></value></field>
    <field k='count'><value><text>10</text></value></field>
  </result>
</results>

If I run the same search from Splunk Web, I get the following result:

<?xml version='1.0' encoding='UTF-8'?>
<results preview='0'>
  <meta>
    <fieldOrder>
      <field>process</field>
      <field>count</field>
    </fieldOrder>
  </meta>
  <result offset='0'>
    <field k='process'><value><text>dbus</text></value></field>
    <field k='count'><value><text>4</text></value></field>
  </result>
  <result offset='1'>
    <field k='process'><value><text>kernel</text></value></field>
    <field k='count'><value><text>17</text></value></field>
  </result>
</results>

So in the first result the count for process kernel is 10, and in the second it is 17. Why? Could it be due to the exec_mode of the search in the Python SDK? Thanks.
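One thing worth ruling out before blaming exec_mode (an assumption, not a diagnosis): the two runs resolve the relative range "-30d" to "now" at different moments, so the windows differ slightly. Snapping the bounds makes repeated runs comparable. With splunklib, that might look like this (connection details are placeholders):

import splunklib.client as client
import splunklib.results as results

# Hypothetical connection details
service = client.connect(host="localhost", port=8089,
                         username="admin", password="changeme")

# Snapped to whole days, so every run covers an identical window
kwargs = {"earliest_time": "-30d@d", "latest_time": "@d", "exec_mode": "blocking"}
job = service.jobs.create(
    'search index=main sourcetype=syslog | search *ERROR* | stats count by process',
    **kwargs)

for row in results.ResultsReader(job.results()):
    print(row)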

Captcha broken for edits


I entered a question and had no problem with the captcha. I went back to edit it, but the edit will not save because the captcha always fails (I have tried about 30 times).


Python SDK Visualization


Hi to all,

How do I produce a visualization with the Splunk Python SDK? For example, pie charts, line graphs, etc.

Thanks in advance!
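The SDK returns data rather than rendered charts, so a common pattern is to pull the results into Python and chart them with a plotting library. A minimal sketch, assuming splunklib and matplotlib are installed and the connection details and query are placeholders:

import splunklib.client as client
import splunklib.results as results
import matplotlib.pyplot as plt

service = client.connect(host="localhost", port=8089,
                         username="admin", password="changeme")

# One-shot search returning a small aggregate (illustrative query)
reader = results.ResultsReader(service.jobs.oneshot(
    "search index=_internal | stats count by sourcetype"))

labels, counts = [], []
for row in reader:
    if isinstance(row, dict):  # skip diagnostic Message objects
        labels.append(row["sourcetype"])
        counts.append(int(row["count"]))

plt.pie(counts, labels=labels)  # or plt.plot(...) for a line graph
plt.show()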

DB Connect, OS X 10.9, and KeyError: elements


Team,

I had a heckuva time getting DB Connect running on Apple OS X 10.9. I got this error:

KeyError: elements

After thrashing around for a while, including installing what I thought was the latest version of Java (both runtime and JDK), I finally discovered this version of Java:

http://support.apple.com/kb/DL1572?viewlocale=en_US&locale=en_US

I downloaded it, installed it, and the problem was solved. Hope this helps others!

Thanks, -S.

regex file names from path and/or url


I need to extract filenames so I can use transaction across many logs of different types.

Some logs have full URLs - http://www.test1.com/43/test.txt

Some logs have only paths - /43/test.txt

Some logs are standard-looking logs, and some are actually XML data dumps that were indexed as "standard logs" - <url>http://www.test1.com/43/test.txt</url>

Sometimes the whole path may be enclosed in parentheses or quotes, too - "/43/test.txt"

The basic principle is that I need to extract filenames (filename.ext).

I don't have access to the file system and can only use "Extract Fields" in the web interface.

Any thoughts?
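A regex along these lines might be worth testing (a sketch, not guaranteed against every format): it captures a dotted token that is not followed by a further slash, which skips hostnames like www.test1.com but catches the trailing filename in full URLs, bare paths, quoted paths, and XML-wrapped URLs alike:

... | rex field=_raw "(?<filename>[^\/\s\"'<>()]+\.[A-Za-z0-9]+)(?![^\s\"'<>()]*\/)"

The same pattern, without the surrounding SPL quoting, should also be usable directly in the "Extract Fields" interface.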

Path of props.conf for apps on the indexer


In our environment we have universal forwarders, indexers, and a search head, and approximately 20-22 Splunk apps for different kinds of configuration.

All apps are configured on the universal forwarders under /opt/splunkforwarder/etc/apps/. No apps are configured on the indexers. For each app, props.conf is configured separately.

Some threads say that timestamping and line breaking should be configured on the indexers only, not on a universal forwarder.

Where should I place props.conf on the indexer to handle timestamping and line breaking for each Splunk app?
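A sketch of what an indexer-side stanza for this typically looks like, assuming a per-app layout under $SPLUNK_HOME/etc/apps/ on the indexer (the app name, sourcetype, and timestamp format below are placeholders):

# $SPLUNK_HOME/etc/apps/my_app/local/props.conf  (on the indexer)
[my_sourcetype]
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S
MAX_TIMESTAMP_LOOKAHEAD = 25
LINE_BREAKER = ([\r\n]+)
SHOULD_LINEMERGE = false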

accelerated search with a specific weekday


I have an accelerated search which is set for a 3-month time range. The acceleration works: I can get a whole day's logs from the past in about 10 seconds on average, where it would take forever otherwise. I need to be able to see the data for every occurrence of a given day of the week. But since you can't specify a time range before an accelerated search query, you can't use "date_wday=Thursday". And doing this:

| savedsearch "my_saved_search_name" | date_wday=Thursday

won't help, since it forces the acceleration to fetch the records for the whole week so as to filter them afterward. This again results in an extremely lengthy search. My experiments show that the time the acceleration takes grows sharply with the time range you are looking at. Here is a little table to give you an idea of what I mean:

Days   Search time
1      4
2      13
3      31
4      65
5      104
6      207
7      216
8      246

So, as I need to look at all the Thursdays in the last 6 weeks, I end up with a search that takes more than an hour to complete.

Any suggestion on how to get this working would be very much appreciated.
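As an aside on the filter itself (a sketch; it still reads the whole window, which is exactly the cost at issue): the accelerated search's stats output carries _time but not the index-time date_* fields, so deriving the weekday from _time is the more reliable post-filter:

| savedsearch "my_saved_search_name"
| where strftime(_time, "%A") == "Thursday"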

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Here is my answer regarding martin_mueller's and lguinn's requests for the exact searches:

Actually, I disagree. The principle should apply to any search; it should not depend on my specific search. I have a search that is accelerated. I want to get the accelerated data only for a specific weekday, say Thursday (meaning all the Thursdays), in the past 6 weeks. And as I said earlier, the way I understand accelerated searches, you can't do this without reading the whole 6 weeks' worth of data. Unfortunately, this nullifies the value of accelerated reports.

But just to make you happier:

Accelerated search (3 months) - Name: acc_metric_ps4_create_account_all_history

Query:

index=apache uri="*/user/accounts.json" method=POST
| bin _time span=1m
| rex field=_raw "(?<response_time>\d+) \d+ \"ajp_resource\""
| stats count(eval(status=201)) as "Succ", count(eval(NOT status=201)) as "Fail", count as Total, avg(eval(response_time/1000000)) as Latency by _time

Six weeks' expected data for the next day of the week:

| savedsearch acc_metric_ps4_create_account_all_history
    [search earliest=-1s | head 1
    | eval date_wday=strftime(relative_time(now(), "+1d@d"), "%A")
    | fields date_wday | format]
| eval lat=round(Latency,2)
| eval tot=round(Total)
| eval succ=round(100-(Fail/Total*100),1)
| eval _time=strptime(strftime(relative_time(now(), "+1d@d"), "%m/%d/%Y").strftime(_time,":%H:%M:%S"), "%m/%d/%Y:%H:%M:%S")
| bucket _time span=1h
| stats max(lat) as LATENCY_MAX_100, perc99(lat) as LATENCY_MAX_99, perc98(lat) as LATENCY_MAX_98, perc97(lat) as LATENCY_MAX_97, perc95(lat) as LATENCY_MAX_95, perc90(lat) as LATENCY_MAX_90, perc80(lat) as LATENCY_MAX_80, perc70(lat) as LATENCY_MAX_70, perc30(lat) as LATENCY_MIN_30, perc20(lat) as LATENCY_MIN_20, perc10(lat) as LATENCY_MIN_10, perc5(lat) as LATENCY_MIN_5, perc3(lat) as LATENCY_MIN_3, perc2(lat) as LATENCY_MIN_2, perc1(lat) as LATENCY_MIN_1, min(lat) as LATENCY_MIN_0, stdevp(lat) as LATENCY_STD_DEV,
    max(tot) as TOTAL_MAX_100, perc99(tot) as TOTAL_MAX_99, perc98(tot) as TOTAL_MAX_98, perc97(tot) as TOTAL_MAX_97, perc95(tot) as TOTAL_MAX_95, perc90(tot) as TOTAL_MAX_90, perc80(tot) as TOTAL_MAX_80, perc70(tot) as TOTAL_MAX_70, perc30(tot) as TOTAL_MIN_30, perc20(tot) as TOTAL_MIN_20, perc10(tot) as TOTAL_MIN_10, perc5(tot) as TOTAL_MIN_5, perc3(tot) as TOTAL_MIN_3, perc2(tot) as TOTAL_MIN_2, perc1(tot) as TOTAL_MIN_1, min(tot) as TOTAL_MIN_0, stdevp(tot) as TOTAL_STD_DEV,
    max(succ) as SUCCESS_MAX_100, perc99(succ) as SUCCESS_MAX_99, perc98(succ) as SUCCESS_MAX_98, perc97(succ) as SUCCESS_MAX_97, perc95(succ) as SUCCESS_MAX_95, perc90(succ) as SUCCESS_MAX_90, perc80(succ) as SUCCESS_MAX_80, perc70(succ) as SUCCESS_MAX_70, perc30(succ) as SUCCESS_MIN_30, perc20(succ) as SUCCESS_MIN_20, perc10(succ) as SUCCESS_MIN_10, perc5(succ) as SUCCESS_MIN_5, perc3(succ) as SUCCESS_MIN_3, perc2(succ) as SUCCESS_MIN_2, perc1(succ) as SUCCESS_MIN_1, min(succ) as SUCCESS_MIN_0, stdevp(succ) as SUCCESS_STD_DEV
    by _time
| collect marker="bw_metric_ps4_create_account_all_expected"

This "expected" search takes almost 2 hours to complete. However I have devised a new technique which doesn't use the accelerated reports, and yet gets me the same results in 20 to 30 minutes. But I still would like to know if there is something I am missing here. Thank you very much for your interest and suggestions.

stats first behaving differently in a dashboard than in a search - is this a bug?


Since upgrading from 5 to 6, one of my dashboards started behaving "strangely", and I have distilled it down to this.

If I have a dashboard that uses "stats" with "first" and "last":

<dashboard>
  <label>TimeTest</label>
  <description>TimeTest</description>
  <row>
    <table>
      <searchString> index=_internal
        |stats first(_time) as f, last(_time)  as l by sourcetype
        |eval d = f - l
        |fieldformat f = strftime(f, "%c")
        |fieldformat l = strftime(l, "%c")
        |table sourcetype f l d
      </searchString>
      <earliestTime>-2mon</earliestTime>
      <latestTime>now</latestTime>
    </table>
  </row>
</dashboard>

This produces some strange results for me: when I run it, I get cases where "first" is further in the past than "last", giving negative values for 'd'.

If I then click "Open in Search", the same results are shown (as expected); but if I then click on the magnifying glass to run the search again... I get sensible values across the board, with positive values for 'd'.

The above case does take a while to run, but it is an example of what I am experiencing, in a form that others can hopefully reproduce.
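An observation that may help narrow it down (an assumption about the mechanism, not a confirmed diagnosis): first() and last() depend on the order in which events reach stats, which can differ between a dashboard dispatch and an ad hoc run, whereas max() and min() on _time are order-independent:

index=_internal
| stats max(_time) as f, min(_time) as l by sourcetype
| eval d = f - l
| fieldformat f = strftime(f, "%c")
| fieldformat l = strftime(l, "%c")
| table sourcetype f l d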

All help much appreciated.

SRX Indexing


I am able to see srx_logs in a new index "SRX", but I want them to go to the "main" index. I cannot see SRX logs in the Search app after changing the [udp://514] stanza in $SPLUNK_HOME/etc/system/local/inputs.conf to index=main.

BTW: I can see other source types in the "main" index.
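For comparison, a minimal sketch of the stanza as it would normally appear in $SPLUNK_HOME/etc/system/local/inputs.conf (the sourcetype line is a placeholder; a restart is needed after the change, and if another inputs.conf layer or a transform also sets the index for this feed, that could explain the discrepancy):

[udp://514]
index = main
sourcetype = syslog
connection_host = ip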


Overlapping events in summary index


How does Splunk handle overlapping events in a summary index?

Does it simply search the latest one?

Question: indexing a CSV with a field that contains a comma


I have an issue indexing a field which contains a comma. Below is my CSV input:

"28650096","2013-12-02 20:30:30","blocked","porn, sexual content","a@a.com","1.1.2.3" "28650093","2013-12-02 20:30:30","allow","search site","b@b.com","2.2.2.4" "28650092","2013-12-02 20:30:30","blocked","gambling","c@c.com","3.3.3.2"

my props.conf:

[temp-audit]
FIELD_DELIMITER = ,
INDEXED_EXTRACTIONS = csv
KV_MODE = none
NO_BINARY_CHECK = 1
REPORT-audit = temp-audit-csv
SHOULD_LINEMERGE = false
pulldown_type = 1

my transforms.conf:

[temp-audit-csv]
DELIMS = ", "
FIELDS = id,timeStamp,Type,Reason,email,SourceIP

When adding data using "A file or directory of files", it can see three events without a problem. But after the data is added, when I do "search *" it only returns 2 events; it seems the first one didn't make it into the index.
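A sketch of an alternative worth testing (an assumption about the cause, not a confirmed fix): INDEXED_EXTRACTIONS = csv already honors quoted commas, so the delimiter-based REPORT may be redundant, and naming the columns and the timestamp field explicitly may stop the first line from being consumed as a header:

[temp-audit]
INDEXED_EXTRACTIONS = csv
FIELD_NAMES = id,timeStamp,Type,Reason,email,SourceIP
TIMESTAMP_FIELDS = timeStamp
KV_MODE = none
NO_BINARY_CHECK = 1
SHOULD_LINEMERGE = false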

Please help, thanks.

Question about time modifiers


Hi!

I would like to ask about time modifiers.

I have the following search, which includes a subsearch:

index=hoge [
    search index=hoge _index_earliest=-1d@d _index_latest=@d
    | stats earliest(start) as earliest latest(stop) as latest by field
    | eval earliest=substr(earliest,5,2) . "/" . substr(earliest,7,2) . "/" . substr(earliest,1,4) . ":" . substr(earliest,9,2) . ":" . substr(earliest,11,2) . ":" . substr(earliest,13,2)
    | search conditionA
    | eval latest=substr(latest,5,2) . "/" . substr(latest,7,2) . "/" . substr(latest,1,4) . ":" . substr(latest,9,2) . ":" . substr(latest,11,2) . ":" . substr(latest,13,2)
    | fields field earliest latest
    | format "(" "(" "" ")" "OR" ")" ]

My purpose is to find the events meeting conditionA that were indexed the previous day, and to pass the earliest and latest time of each field to the main search.

However, when the main search should return 5000 events, it scans many more events than that.

For example:

field    earliest          latest
fieldA   1/25/2014 00:00   1/25/2014 01:00   (3 records exist)
fieldB   1/25/2014 02:00   1/25/2014 02:00   (5 records exist)
fieldC   1/26/2014 00:00   1/26/2014 01:00

  • My latest event in this record is 1/25/2014 01:50:00.

If the subsearch returns (field="fieldA" earliest="1/25/2014:00:00" latest="1/25/2014:01:00") OR (field="fieldB" earliest="1/25/2014:02:00" latest="1/25/2014:02:00"), I expect the main search to scan only 8 records. But it seems to scan more events than that.

Are the time modifiers not working correctly when you concatenate them with ORs?

I have added a screenshot where the count of scanned events keeps increasing even though the matching events have finished.

Thanks, Yu

How to combine information from 2 different sources?


Hi!

I have a small problem here. I have two different sourcetypes, named 'server' and 'metrics'. The server sourcetype has the fields customername, servername, and server_id. The metrics sourcetype has the fields _time, server_id, meter, and value. There are several different meters and many values per meter.

I'm trying to combine these two. I have a populating dropdown input for selecting a server (as $server$), but I'm unable to find information from the metrics sourcetype using the name of the server (server_id is the key value).

I have tried almost everything, but nothing seems to work. The output should be a table or list of time, meters, and values by meter. Can you please help me with this one?
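A sketch of one common pattern, assuming both sourcetypes are searchable from the same place (index qualifiers omitted for brevity): resolve the selected servername to its server_id in a subsearch, then filter the metrics with it:

sourcetype=metrics [ search sourcetype=server servername="$server$" | fields server_id ]
| table _time meter value
| sort _time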

HP Service Manager app


Hi friends

I am developing a small app to dashboard HP Service Manager incident/change/catalog data, using DB Connect to reach the database. Has anyone done something like this before? If something already exists, I can expand on it.

Thanks in advance
