Channel: Latest Questions on Splunk Answers
Viewing all 13053 articles

Unable to use case with stats


Hi Everyone, I am not able to use the eval command with stats. I am using the following search in a form; I want to find the sum of a field depending on the action selected from a dropdown. I am using the eval command to derive the field, but I am not able to pass it to the stats command.

Any help is much appreciated..

sourcetype=brm_batch_data ACTION=$ACTION_SELECTED$ | eval action_val = case (ACTION == "INVOICING", "BILL_DUE", ACTION == "BILLING", "AMOUNT", ACTION="PAYMENT", "AMOUNT") | stats sum(action_val)
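Two things stand out in that search: the third case() branch compares with `=` instead of `==`, and case() returns the field *name* as a string, so `stats sum(action_val)` ends up summing strings like "AMOUNT" rather than the values of that field. A minimal Python sketch of what the search seems intended to do (all event and field names here are hypothetical, mirroring the case() branches):

```python
# Hypothetical events; BILL_DUE / AMOUNT mirror the field names in the case().
events = [
    {"ACTION": "INVOICING", "BILL_DUE": 10.0, "AMOUNT": 99.0},
    {"ACTION": "BILLING",   "BILL_DUE": 0.0,  "AMOUNT": 5.0},
    {"ACTION": "PAYMENT",   "BILL_DUE": 0.0,  "AMOUNT": 7.5},
]

# Map each action to the field whose values should be summed
# (note every comparison here is an equality check, the == of SPL's case()).
FIELD_FOR_ACTION = {
    "INVOICING": "BILL_DUE",
    "BILLING":   "AMOUNT",
    "PAYMENT":   "AMOUNT",
}

def sum_for_action(events, action):
    """Sum the value of the action-specific field over matching events."""
    field = FIELD_FOR_ACTION[action]
    return sum(e[field] for e in events if e["ACTION"] == action)

print(sum_for_action(events, "INVOICING"))  # 10.0
```

In SPL the fix would presumably be to eval the *value* of the chosen field into a common field (not its name) before handing it to stats; the sketch above only models that intent.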


Check if value is null for a period of time


Hi Guys,

I need help setting up an email alert in Splunk that will trigger if a value is null for a specific amount of time. The value in question is derived from multiple values added together with the eval command, and is piped into the timechart command with a span of 1 minute.

I basically want it to inform me that the value has been null for x minutes.

Thanks!
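The trigger condition itself ("the value has been null for the last x minutes") is easy to model. A sketch, assuming one sample per minute with None standing in for a missing value (all names hypothetical):

```python
def null_for(values, n):
    """True if the trailing n per-minute samples are all missing (None)."""
    return len(values) >= n and all(v is None for v in values[-n:])

readings = [3.1, 2.9, None, None, None]  # hypothetical timechart output, 1/min
print(null_for(readings, 3))  # True -> the alert should fire
```

In Splunk terms, one common approach is a scheduled search over the last x minutes that counts non-null results and alerts when the count is zero; the exact SPL depends on how the value is derived.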

ports used between splunk instances


I have two splunk instances on either side of a firewall. I use the deployment server and a shared license. Each instance is an indexer and I use distributed search if that matters.

I'm seeing inbound firewall traffic from 8089 to a random high port on the inside of the firewall.

What ports do I need to permit between these two servers? Right now I'm just permitting any tcp between the two but over the last 12 hours, they've made connections on 18 different ports.

How are you guys watching traffic if you have a firewall between the two? (In this case it's a firewall between a couple of VLANs.)

Thanks!

Nick

Snip of ports I'm seeing:

50280 51473 52118 52843 56289 63700 64082 51649 51836 52449 53372 53716 57232 57330 57634 58366 59676 62753

How to include a python script in views.


Hi all,

I have developed a simple Python script to get the current external IP of a machine. The script works fine standalone. I created a commands.conf file and made a corresponding entry for it, then restarted Splunk, but it is not listed under Manager > Apps > View objects. Also, how do I include this script in my custom view?
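For reference, a custom search command normally needs a commands.conf stanza like the following, where the stanza name is the command you type in the search bar and the script lives in the app's bin/ directory (the command and file names below are hypothetical):

```
# $SPLUNK_HOME/etc/apps/<your_app>/local/commands.conf
[externalip]
filename = external_ip.py
generating = true
```

If the stanza sits in the wrong app context, or the app isn't visible to your role, the command won't appear in Manager; checking which commands.conf Splunk actually loaded (e.g. with btool) is a common first step.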

DB Connect not inputting MS SQL from data source


I have the latest install of DB Connect and have been able to onboard two MS SQL data sources, but doing the same for a third MS SQL source has had no success. I have checked the following DBX files: inputs.conf, database.conf, and the state XML file in the persistentstorage folder.

All appear consistent, but the third source has not come in. The pull interval is every minute. Anyone have ideas on where else to check?

How to change a search into a macro filter?


This is a follow-up to Background exclusion. The question I have now is no longer on topic with the original post, hence the following.

The title is fairly descriptive: I have a search that returns logs for regularly occurring events. In order to better look at the logs that indicate problems, I want to throw out these results in future searches. From my last question, it seems best to use a macro to exclude the results that the original search (below) currently gives.

search terms | eval TimeInHour=_time%3600
    | rex mode=sed "s/ \d{4}-\d{1,2}-\d{1,2} \d{1,2}:\d{1,2}:\d{1,2}//g"
    | stats first(_raw) by punct,TimeInHour,_raw,_time
    | stats count by _raw,TimeInHour,punct
    | addinfo
    | eval hours = round((info_max_time - info_min_time)/3600,0)
    | where count > hours-1

How do I turn this in to one big exclusion/negation statement that I can then make into a macro without any subsearch garbage?
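The filtering idea in that search (treat a message as background noise if its normalized form shows up at least roughly once per hour across the whole window) can be sketched outside SPL; all data here is hypothetical:

```python
from collections import Counter

def recurring_patterns(normalized_events, hours):
    """Return patterns seen more than (hours - 1) times, i.e. the
    roughly once-per-hour 'background noise' the search identifies."""
    counts = Counter(normalized_events)
    return {pattern for pattern, n in counts.items() if n > hours - 1}

msgs = ["heartbeat ok"] * 24 + ["disk failure"]  # 24-hour window, hypothetical
print(recurring_patterns(msgs, 24))  # {'heartbeat ok'}
```

A macro version of the exclusion would then boil down to negating against that static list of patterns, which is why generating the list once (for example into a lookup) is often simpler than re-running the detection as a subsearch.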

db connect timestamp issues


Hi,

I want to do a tail on an Oracle table, using db connect. The table contains a column that appears to be in epoch time, so it should be good for the rising column value. However, whenever I get the data, it is always appearing as midnight of that day, which is incorrect. I'm not sure what I'm doing wrong to get Splunk to recognize the data properly.

Here's some output from the db connect tool:

CREATIONTIME    HOST
1277956800.000  cpu2
1277956800.000  ftgbosbb02vwin
1277956800.000  cmsvrtpcva1win
1277956800.000  ftgbos018vwin
1277956800.000  amro348
1277956800.000  cpu4

I have CREATIONTIME set as the Rising Column, and my props.conf has the following:

[jmagicAlerts]
MAX_TIMESTAMP_LOOKAHEAD = 300
NO_BINARY_CHECK = 1
SHOULD_LINEMERGE = false
pulldown_type = 1
TIME_PREFIX = CREATIONTIME=
TIME_FORMAT = %s.%3N

If I execute "SELECT CREATIONTIME from..." via sqlplus, I get the following, which shows specific timeframes:

CREATIONTIME

07-01-2010-00:07:41
07-01-2010-00:07:45
07-01-2010-00:07:48
07-01-2010-00:07:48
07-01-2010-00:07:48
07-01-2010-00:07:48
07-01-2010-00:07:53
07-01-2010-00:07:54

Does anyone have suggestions?
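One quick sanity check: 1277956800 really is midnight US Eastern (UTC-4 in July), so the epoch values coming out of DB Connect are already truncated to the day. A check in Python:

```python
from datetime import datetime, timezone

ts = 1277956800.000  # the repeated CREATIONTIME value from the output above
print(datetime.fromtimestamp(ts, tz=timezone.utc))
# 2010-07-01 04:00:00+00:00, i.e. midnight US Eastern on 2010-07-01
```

Since sqlplus shows non-midnight times (00:07:41 and so on) for the same column, the truncation likely happens in how DB Connect converts the Oracle column to epoch seconds, not in the TIME_FORMAT = %s.%3N parsing on the Splunk side.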

Using tokens in splunklib?


Hi,

I'm writing a custom command that is supposed to perform some actions on Splunk through its REST interface, so I wanted to use the SDK. However, I'm having problems authenticating with the session token. Here's the setup:

In commands.conf:

[mycommand]
filename = mycommand.py
generating = true
maxinputs = 1
stderr_dest = message
passauth = true

The code (auth part):

import splunklib.client as client
import splunk.Intersplunk as si

# Read the results and settings splunkd hands to the command;
# with passauth = true, settings will include the session key.
settings = dict()
records = si.readResults(settings=settings, has_header=True)

sKey = settings['sessionKey']

# Authenticate to splunkd with the session token instead of credentials.
service = client.connect(token=sKey)
a = service.apps["search"]
st = a.state()

And I don't get anything back. If I change the client.connect call to use hardcoded credentials it works without any problems. In Splunk I'm logged in as admin.

Any idea why I can't pass tokens like this to the Service class?


Heavy Forwarder and Splunkstorm


Can I use a heavy forwarder together with SplunkStorm?

  • Is it supported in general? From a license point of view etc.
  • Are the credential applications compatible?
  • Any other potential issues?

What do you want to see in a Splunk Mobile app?


Here's a very early version of a Splunk App I wrote that generates HTML friendly to small screens and fat fingers.
Use it on your phone or iPad, for example.

http://splunk-base.splunk.com/apps/28665/splunk-mobile

Give it a try. It's a very early version.  The charting only supports timechart right now.

Please post suggestions here. What features do you want?

In a Distributed Search environment, how do I restrict what indexes (or sources) the Search head sees on the Search Peer?


I have a main centralized splunk index server with logs for 50+ hosts. I have a secondary Splunk instance for a smaller application where it logs its own data. I would like to set the smaller instance up as a search head to the centralized server so it can see a small subset of data on the central server which is isolated to one index.

How do I restrict what the search head sees on the search peer, or can it see everything?

Note: I'm not talking about restricting the search (which is the topic of another question), but about access controls that ensure they can't see the other data at all.
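If the search head is trusted as a peer, it can generally search everything, so the restriction usually has to come from role configuration on the search head. One commonly used knob is the index restriction in authorize.conf; a sketch, with the role and index names hypothetical and the caveat that distributed-search authorization details vary by version:

```
# authorize.conf for the role used by the smaller instance's users
[role_small_app]
importRoles = user
srchIndexesAllowed = small_app_index
srchIndexesDefault = small_app_index
```

Users holding only this role would then be limited to the one index in their searches.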

app not deploying to client


1. serverclass.conf in ~splunk/etc/system/local, using the clientName attribute.

Placeholder app in ~splunk/etc/deployment-apps/placeholder:

[global]
whitelist.0 = *
stateOnClient = enabled

[serverClass:base-xyz]
filterType = whitelist
whitelist.0 = xyz-common-apps

[serverClass:base-xyz:app:placeholder]

2. deploymentclient.conf on the deployment client, in /etc/system/local:

[deployment-client]
disabled = 0
clientName = xyz-common-apps

[target-broker:deploymentServer]
targetUri = x.y.z.d:8089

3. list deploy-clients on the DS shows the deployment client connected with the right client name.

But still no app has been deployed to the client, although the app is available.

4. Restarted both sides.

Any input is appreciated.

Thanks

performance considerations for eventtypes?


I've been looking at the "Search Job Inspector" recently and noticing that command.search.typer often shows up at the top of the list. It's not uncommon for it to use nearly 50% (sometimes more) of the total command.search time. My searches are not performing unacceptably yet, but I anticipate the number of eventtypes growing as we add more and more sources (as will the search load), so I can't imagine this will magically improve. I would like to look at this now, before it becomes a bigger problem.

Based on general optimization principles, I'm starting with the following assumptions:

  1. The more eventtypes defined, the more effort is required to match events with eventtypes, and therefore a longer execution of typer is to be expected. So reducing the total number of eventtypes should improve performance.
  2. Poorly defined eventtypes will be more expensive than well-defined ones. (For example, I'm assuming that an eventtype defined by the search "user!=joe bytes>=1000" would be less efficient than an eventtype defined as "sourcetype=ftp UPLOAD OK".)

If I'm missing something or have any of this wrong so far, please say so.
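As a toy model of assumption #1 only (this is not how typer is actually implemented; Splunk may well prune candidates using indexed terms), the naive cost is one predicate evaluation per eventtype per event, so the work grows with both the eventtype count and event volume:

```python
# Hypothetical eventtype definitions expressed as predicates over an event dict.
eventtypes = {
    "ftp_upload_ok": lambda e: e.get("sourcetype") == "ftp" and "UPLOAD OK" in e["raw"],
    "big_non_joe":   lambda e: e.get("user") != "joe" and e.get("bytes", 0) >= 1000,
}

def type_event(event):
    # Naively, every predicate runs for every event: O(len(eventtypes)) each.
    return sorted(name for name, match in eventtypes.items() if match(event))

e = {"sourcetype": "ftp", "raw": "UPLOAD OK file.txt", "user": "bob", "bytes": 2048}
print(type_event(e))  # ['big_non_joe', 'ftp_upload_ok']
```

Under this model, both fewer eventtypes and cheaper individual predicates reduce the per-event cost, which matches the two assumptions above.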

#1: Reduce number of eventtypes:

Based on the Eventtypes' numbers limits question, the answer suggested that the total number of eventtypes should ideally be limited to a few hundred. However, I'm not sure that's very realistic. (The answer wasn't clear, but I'm thinking a "few hundred" means somewhere between 200-400?)

I looked at my system and I currently have over 340 eventtypes defined that are shared across all apps. Of those, 111 come from the Windows app. I have the eventtypes in the "unix" app set to application-only sharing, or that would add another 133 eventtypes globally. (I did this because the "unix" eventtypes generally seem too loosely defined and rather unhelpful; to be honest, the quality seems pretty poor. For example, as of Splunk 4.1.3, the Unix app contains 17 eventtypes (e.g. "df", "cpu", ...) that don't even have a "search" defined in the config file; they show up as "None" in the UI. The eventtype tags are also pretty inconsistent. So I chose to ignore them rather than try to deal with them.)

I have an app with nearly 100 app-level eventtypes. It's fairly self-contained, and it would be nice to "block out" the eventtypes of the other apps to improve performance within that app, but that's not possible as far as I know.

Again, it seems inevitable that the number of eventtypes will only grow as Splunk usage increases. So other than doing some cleanup, it doesn't seem possible to reduce this dramatically.

#2 Optimize eventtype definitions:

This is where I would really like to focus my efforts. The problem is, I haven't come across any recommendations/suggestions/guidelines as to how to write more-efficient eventtypes, and I would really appreciate some input from the people who know this stuff.

Without a good place to start, I've done what I always do: Ask lots of questions!

If these can be answered directly, that would be great, but even starting with some general principles would be a great help. Even a never-do-this list would be helpful.

Here are some specific eventtype performance questions:

What's the impact of...

  • Using the core indexed fields (source/sourcetype/host)? It seems eventtypes based on sourcetype can be included/excluded faster than eventtypes based on simple search terms, is that true?
  • Using index=? (Old docs said you shouldn't do this, but newer docs say any search expression is fine. If I have a bunch of firewall events that only occur in index=firewall will they be faster if I add that to the eventtype definition?)
  • Using splunk_server=?
  • Using field=value in an eventtype? Or is it better to use a literal string (like "EventCode=538") than the field lookup (EventCode=538)? (Does using an eventtype with fields prevent the field-extraction engine from automatically disabling extractors when Splunk detects that the fields being output are not needed by the search? I know some non-interactive searches try to disable extractors for efficiency when possible; can eventtypes get in the way of this?)
  • Using lookup fields? (Example: where an automatic extraction is based on a sourcetype, and that sourcetype is included in the eventtype definition)
  • Using source/host/sourcetype tags as part of an eventtype's criteria?
  • Using indexed fields vs extracted fields? (indexed fields like "punct")
  • Using quoted strings. (Can indexed terms alone be matched faster than a quoted expression? Is there any concept of segmentation here, or does typer re-evaluate the raw events anyways?)
  • Using wildcards (e.g. term*)
  • Nested eventtypes. Say you have a "base" eventtype that is used in the definition of several other eventtypes (essentially creating a simple way to extend the "base" eventtype to cover a more specific scenario). If the base eventtype doesn't match, can typer more quickly eliminate the derived eventtypes too? Or does it cause more work? Or is it more like macro expansion, where the eventtype gets unrolled before it's evaluated, so it doesn't make much difference in performance in any case?

I'm guessing there are lots of corner cases here. An eventtype definition can span tons of layers, which is what makes eventtypes so powerful, and I'm sure that also means they can be quite expensive at times. So any hints would be appreciated, and some kind of "profiler tool" would be amazing (I'll even consider naming my first born after you).

Thanks in advance!

Proper REX command


What would the proper REX command be to extract the following:

a space, a colon, and a space, followed by a numeric string

so it ends up being ' : 949495'
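A pattern along the lines of `\s:\s\d+` should do it. Sketched here in Python's re module (close enough to the PCRE flavor Splunk uses; the sample line is hypothetical):

```python
import re

line = "transaction id : 949495 completed"  # hypothetical sample event
m = re.search(r"\s:\s(\d+)", line)
print(m.group(0))  # ' : 949495'
print(m.group(1))  # '949495'
```

The SPL equivalent would presumably be something like `| rex "\s:\s(?<num>\d+)"`, using a named capture group since rex requires one.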

How to escape double quotes in SideView Utils Textfield Module


I have a dashboard that uses two SideView Utils TextField modules. The set up is as follows:

view
    textfield
        checkboxes
        checkboxes
    textfield
    button
    search
    table
view

The problem I'm having is in the search at the bottom, where I use eval to set default values if the textfields are empty. If a user submits something containing quotes, I always get an error:

"unbalanced quotes"

The TextField modules only set the 'name' param, with the others left as default. I did try setting the 'template' param as:

$value$

and "$value$"

But the results are the same.

I also tried using the ValueSetting module and sent one of the TextField values through it, but the results seem the same.

So my question is basically: is there a way to automatically escape any characters that might cause the downstream search to throw an error? My interpretation of the help documents is that the $*.value$ token should send a fully escaped value that can be used within a search. In another dashboard I ended up using a custom behavior to help with this.
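For reference, the escaping the search layer needs is just backslash-doubling plus quote-escaping. A sketch of that transformation (this is not SideView's actual implementation, only the behavior a fully escaped token would need):

```python
def escape_for_quoted_search(value):
    """Escape backslashes and double quotes so value can sit inside "..."."""
    return value.replace("\\", "\\\\").replace('"', '\\"')

user_input = 'say "hello"'
print('search text="%s"' % escape_for_quoted_search(user_input))
# search text="say \"hello\""
```

Escaping backslashes first matters: doing it second would also double the backslashes just added in front of the quotes.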

Thanks, -jp


process control chart e.g. upper/lower control limit.


I have been asked to help a co-worker create a process control chart to understand an application's response time.

The following three events are generated for each test.

INFO=Signon_Screen RESPONSE_TIME=2.1000
INFO=Signon_Dept_Screen RESPONSE_TIME=0.6000
INFO=Citrix_Login_Comp RESPONSE_TIME=7.6000

The link below is a step in the right direction but I am having trouble getting this to work.

http://splunk-base.splunk.com/answers/73300/which-search-is-faster-reusing-a-calculation-in-an-if-clause-or-using-the-defined-variable-from-the-original-eval
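For the chart itself, the usual control limits are the mean plus or minus three standard deviations per measurement point. A sketch of the calculation (the sample values are hypothetical):

```python
from statistics import mean, stdev

# Hypothetical RESPONSE_TIME samples for one INFO value
samples = [2.1, 2.3, 1.9, 2.0, 2.2, 2.1, 2.4, 1.8]

avg = mean(samples)
sd = stdev(samples)        # sample standard deviation
ucl = avg + 3 * sd         # upper control limit
lcl = avg - 3 * sd         # lower control limit
outliers = [x for x in samples if x > ucl or x < lcl]
print(round(avg, 3), round(ucl, 3), round(lcl, 3), outliers)
```

In SPL this would map to something like `stats avg(RESPONSE_TIME) as avg stdev(RESPONSE_TIME) as sd by INFO | eval ucl=avg+3*sd, lcl=avg-3*sd`, though the exact search depends on the data.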

SavedSearch module doesn't use results from scheduled search


I'm trying to use the Sideview SavedSearch module to load results from a previous scheduled search in order to quickly populate dropdowns on my page.

<module name="SavedSearch" autoRun="True">
    <param name="name">populate_dropdowns</param>
    <module name="Pulldown">
etc.

My saved search "populate_dropdowns" is running every minute (for testing purposes!).

When I load my dashboard, though, it always executes the search again rather than loading the previously saved results. Looking in the jobs window, I see the following:

(screenshot of the Jobs window omitted)

I can see the previously scheduled populate_dropdowns job, and I can see the populate_dropdowns search being run again when I load the dashboard (but this time I see its full definition rather than just its name). Clicking Inspect shows that both instances return exactly the same results.

Any ideas?

Is there a way to control layoutPanel in Tabs/Switcher ?


I was wondering, is there a way to control the layoutPanel for Tabs/Switcher? I couldn't seem to make the layout stay contained within the Switcher itself.

Build a Splunk app


Hi,

I'm new to Splunk and I'm trying to build a dashboard app. I have installed Splunk and the Java SDK, and managed to perform a search and get the results to my console. What is the next step? How do I construct a chart out of the data and display it?

Thanks, Tony

An error occurred while rendering the page template. See web_service.log for more details


Hi there,

When I did splunk start, everything went well; there were no errors. But when I try to go to the URL in my browser, I see this message:

"An error occurred while rendering the page template. See web_service.log for more details"

Peeking into web_service.log, I see the error below. Please help.

2013-08-12 14:29:36,059 ERROR   [520880cffa22fc850] __init__:281 - Mako failed to render:

Traceback (most recent call last):
  File "/ice_scratch/scratch/nf/yltan/nobackup_nf/splunk_alioth/splunk/lib/python2.7/site-packages/splunk/appserver/mrsparkle/controllers/__init__.py", line 277, in render_template
    return templateInstance.render(**template_args)
  File "/tools/share/python/2.7.1/linux64/lib/python2.7/site-packages/Mako-0.5.0-py2.7.egg/mako/template.py", line 302, in render
    return runtime._render(self, self.callable_, args, data)
  File "/tools/share/python/2.7.1/linux64/lib/python2.7/site-packages/Mako-0.5.0-py2.7.egg/mako/runtime.py", line 660, in _render
    **_kwargs_for_callable(callable_, data))
  File "/tools/share/python/2.7.1/linux64/lib/python2.7/site-packages/Mako-0.5.0-py2.7.egg/mako/runtime.py", line 691, in _render_context
    (inherit, lclcontext) = _populate_self_namespace(context, tmpl)
  File "/tools/share/python/2.7.1/linux64/lib/python2.7/site-packages/Mako-0.5.0-py2.7.egg/mako/runtime.py", line 637, in _populate_self_namespace
    ret = template.module._mako_inherit(template, context)
  File "/ice_scratch/scratch/nf/yltan/nobackup_nf/splunk_alioth/splunk/share/splunk/search_mrsparkle/templates/account/login.html", line 10, in _mako_inherit
    <%namespace name="lib" file="//lib.html" import="*" />
  File "/ice_scratch/scratch/nf/yltan/nobackup_nf/splunk_alioth/splunk/share/splunk/search_mrsparkle/templates/account/login.html", line 10, in _mako_generate_namespaces
    <%namespace name="lib" file="//lib.html" import="*" />
  File "/tools/share/python/2.7.1/linux64/lib/python2.7/site-packages/Mako-0.5.0-py2.7.egg/mako/runtime.py", line 403, in __init__
    calling_uri)
  File "/tools/share/python/2.7.1/linux64/lib/python2.7/site-packages/Mako-0.5.0-py2.7.egg/mako/runtime.py", line 624, in _lookup_template
    uri = lookup.adjust_uri(uri, relativeto)
  File "/ice_scratch/scratch/nf/yltan/nobackup_nf/splunk_alioth/splunk/lib/python2.7/site-packages/splunk/appserver/mrsparkle/controllers/__init__.py", line 158, in adjust_uri
    result = super(TemplateLookup, self).adjust_uri(uri, relativeto)
  File "/tools/share/python/2.7.1/linux64/lib/python2.7/site-packages/Mako-0.5.0-py2.7.egg/mako/lookup.py", line 227, in adjust_uri
    if key in self._uri_cache:
  File "/ice_scratch/scratch/nf/yltan/nobackup_nf/splunk_alioth/splunk/lib/python2.7/site-packages/splunk/appserver/mrsparkle/controllers/__init__.py", line 39, in __getitem__
    return self._i18n_dict[key]
KeyError: 0