Channel: Latest Questions on Splunk Answers
Viewing all 13053 articles

Is Splunk Supported on RHEL 6.4?


We are currently running Splunk 5.0.3 and will probably upgrade to 6 sometime in the future. I just need to know whether Splunk will work on RHEL 6.4.


Can someone provide a report to get application usage statistics?


I am looking for an example report that shows application usage statistics and Top 10 applications. Can anyone help? I don't see this baked in.

How to pass tokens in the URL using the new web framework


I have built an app using Django and JS in the Splunk Web Framework. The home screen shows the running status of servers. On click, I want to pass that server name to a different view.

I should then be able to take that token from the URL and substitute it into the search of the new view.

I know how to do this with the URL redirector in advanced dashboards, but I couldn't find any documentation on how to pass values to other views and use them in the new Splunk Web Framework.

It would be great if someone could help me with this.

Splunk for Citrix XenApp missing some data


I have a XenApp 6.5 farm. PowerShell 2.0 with remote execution is set on all servers. We are running Splunk 6.0 and I have the latest forwarder on all the servers, but I am not getting any data in the following areas:

Server Performance Zone Data

Also, when I go into maintenance and do a rebuild zone farm lookup, I get the following error: Could not write to file 'xa_zone_farm.csv': Failed to move file to final destination.

Any help with this would be very much appreciated.

Splunk 5.0.5: tokens on forms are not getting resolved


In Splunk 5.0.5, I am trying to create a form that passes in a set of inputs (user and times) and builds a set of charts/tables. I pulled in the example from the inverted-flow description on this page: http://docs.splunk.com/Documentation/Splunk/5.0.5/Viz/Exampleform

The time range appears to give me different results, but the username selection appears to remain as "$username$" in the final search, resulting in no results. What am I doing wrong?

<form>
  <label>Form search example - inverted flow, panel-defined post-process</label>

  <!-- Define a search that returns a single result set. -->
  <!-- The subsequent panels choose specific results to display -->

  <fieldset>
    <input type="text" token="username">
      <label>username</label>
      <default>jtcassid</default>
      <!-- <seed>*</seed> -->
    </input>
    <input type="text" token="site">
      <label>Grid name</label>
      <default></default>
      <seed>lsf*</seed>
    </input>
    <input type="time" />
  </fieldset>

  <row>
    <chart>
      <title>Breakdown of Jobs</title>
      <searchString>sourcetype=lsfstreamlog index=lsf* user=$username$ |eval Command=mvindex(split(lsfcommand," "),0) | transaction lsfjobid | chart sum(duration) AS "TimeInCommand"  by  Command</searchString>
      <option name="charting.chart">pie</option>
    </chart>

    <table>
      <title>Top files touched by the user</title>
       <searchPostProcess>top lsfcwd</searchPostProcess>
    </table>
  </row>

  <row>
    <table>
      <title>Users vs changetype</title>
      <searchPostProcess>ctable user changetype maxcols=4</searchPostProcess>
      <option name="count">20</option>
    </table>

    <chart>
      <title>Average lines added by the user</title>
      <searchPostProcess>timechart avg(added)</searchPostProcess>
      <option name="charting.chart">line</option>
      <option name="charting.legend.placement">none</option>
    </chart>
  </row>

</form>

Add button to view to call script


Hello all. I am working on a view to display accounts that are locked out in our AD environment; it also shows the caller, i.e. the computer that caused the lockout. Basically, I want to add a button on each row that, when clicked, calls a script to trigger a remote log-off. Creating the script I can handle, but I need help adding the buttons to my view. Here is a pastebin of my view, and also a link to a screenshot of the current results:

pastebin.com/hQ2T54AE

i.imgur.com/d5YPsRs.png

Basically, I want a button beside each result that, when pressed, calls a script local to my Splunk server and puts the value from "Lockout Source" into a variable that I can use in the script. The script will likely be Python or VB, since my Splunk server runs on RHEL and will be calling remote Windows actions. I have seen similar things done with Sideview Utils, but I haven't been able to figure it out.

Thanks!

Forward to Splunk indexer, then forwarded from Splunk server to another server


If I were to forward syslog messages to a Splunk server and then forward them from there to another server, would my syslog messages be changed in any way (due to the indexing)? If so, is there any way to do the forwarding I described while also preserving the original syslog messages?
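For what it's worth: by default a Splunk-to-Splunk forwarding hop sends "cooked" data, so a downstream non-Splunk receiver would not see the original syslog text. If the goal is pass-through of the raw events, outputs.conf supports sending uncooked data over TCP. A sketch, with the hostname and port as placeholders:

```
# outputs.conf on the intermediate Splunk server
[tcpout:raw_syslog]
server = downstream.example.com:514
# Send raw event text instead of the cooked Splunk-to-Splunk protocol
sendCookedData = false
```

Note this is a sketch under the assumption that the downstream server speaks plain TCP; whether the forwarded text is byte-for-byte identical to the original syslog line still depends on what parsing happened upstream.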

Optional Field Extraction


Hi,

I have log files of Java stack traces that I am trying to parse, extracting the names of the exceptions that caused them into different fields. The log files are formatted in a way that gives the initial exception early in the event, which can be up to 200 lines long. Much farther down the event you can SOMETIMES find a line that reads: Caused by: another exception name: Reason.

The regex I am using to find the initial exception is as follows:

(?i)[a-z]+(\.[a-z]+)*\.(?=[a-z]+Exception:*\s*)(?P<FIELDNAME1>[^\n]+)

I want to add another piece of regex to pull in this other line and this is my attempt with all the regex together:

(?i)[a-z]+(\.[a-z]+)*\.(?=[a-z]+Exception:*\s*)(?P<FIELDNAME1>[^\n]+)((.+\n)+(?i)Caused by: [a-z]+(\.[a-z]+)*\.(?=[a-z]+Exception:*\s*)(?P<FIELDNAME2>[^\n]+))?

This regex returns no matches. I want the second regex match and field extraction to be optional, because they are not always present, so I tried adding the ? after the entire match for FIELDNAME2. Here are other configurations I have tried:

(?i)[a-z]+(\.[a-z]+)*\.(?=[a-z]+Exception:*\s*)(?P<FIELDNAME1>[^\n]+)((.+\n)+(?i)Caused by: [a-z]+(\.[a-z]+)*\.(?=[a-z]+Exception:*\s*)?(?P<FIELDNAME2>[^\n]+)

This almost worked. It does the extraction correctly when it finds the 'Caused by: ...' line. However, it does not work correctly for events where 'Caused by:' is not found: it instead takes the last character of FIELDNAME1, chops it off the FIELDNAME1 match, and puts it in FIELDNAME2.

(?i)[a-z]+(\.[a-z]+)*\.(?=[a-z]+Exception:*\s*)(?P<FIELDNAME1>[^\n]+)(.+\n)+(?i)Caused by: [a-z]+(\.[a-z]+)*\.(?=[a-z]+Exception:*\s*)?(?P<FIELDNAME2>[^\n]+)?

This attempt shows no results. I tried making both the regex search and the field optional, and no matches are found at all.

Any help is appreciated.
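One way to avoid the character-stealing problem described above is to wrap the entire "Caused by" tail in a single optional non-capturing group, so FIELDNAME2 is simply absent when no such line exists. A rough sketch in Python (assuming, like the original lookahead, that the class names of interest end in "Exception"; the sample stack traces are illustrative):

```python
import re

# FIELDNAME1: the first exception line.  FIELDNAME2: the optional "Caused by"
# exception.  The whole second half is one optional group, so when no
# "Caused by:" line exists the group fails cleanly instead of backtracking
# into FIELDNAME1's match.
PATTERN = re.compile(
    r"(?:[a-z_][\w$]*\.)+(?P<FIELDNAME1>\w+Exception:[^\n]*)"
    r"(?:.*?^Caused by: (?:[a-z_][\w$]*\.)+(?P<FIELDNAME2>\w+Exception:[^\n]*))?",
    re.IGNORECASE | re.MULTILINE | re.DOTALL,  # DOTALL lets .*? skip frame lines
)

with_cause = (
    "java.lang.NullPointerException: oops\n"
    "    at com.example.Foo.bar(Foo.java:42)\n"
    "Caused by: java.io.IOException: disk full\n"
    "    at com.example.Baz.qux(Baz.java:7)\n"
)
without_cause = (
    "java.lang.NullPointerException: oops\n"
    "    at com.example.Foo.bar(Foo.java:42)\n"
)

m = PATTERN.search(with_cause)
print(m.group("FIELDNAME1"))  # NullPointerException: oops
print(m.group("FIELDNAME2"))  # IOException: disk full

m = PATTERN.search(without_cause)
print(m.group("FIELDNAME2"))  # None
```

The key difference from the attempts above: the lazy `.*?` only has to find the literal `Caused by:` anchor, so there is no greedy `(.+\n)+` to backtrack through, and making the *whole* tail optional means nothing is borrowed from FIELDNAME1.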


Consuming XML Database


I have an XML database that contains up to fifteen different record formats. Many have a common set of fields but each also has its own unique set of fields. It's similar to combining the contents of different database tables into one XML file.

Is it possible to get Splunk to consume and index XML files of this type so that I can refer to each different record format as a separate sourcetype?

How to configure access_combined_wcookie directly in props.conf and transforms.conf using regex?


Hi guys,

I have files in the access_combined_wcookie format. The last field, called "other", has information that is important for the business and for us (IT). How can I extract the information in this field while keeping the access_combined_wcookie format (known to Splunk by default), using regex directly in props.conf and transforms.conf?

Here is one line of the log file:

186.241.214.128 - - [28/Nov/2013:02:09:24 +0000] "GET /127.0.0.1/_files/local/defaultTheme/img/image.png HTTP/1.1" 200 3288 "-" "Mozilla/5.0 (Windows NT 6.1; rv:25.0) Gecko/20100101 Firefox/25.0" "-" "other_1234|xxxxxx:xxxxxx|yyyyyy:yyyyyy"
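One possible approach, assuming "other" is always the last double-quoted field on the line: keep the built-in access_combined_wcookie sourcetype for timestamping and line breaking, and layer a search-time REPORT on top of it. A sketch (the stanza name `extract_other` and field name `other` are illustrative, not Splunk defaults):

```
# props.conf
[access_combined_wcookie]
REPORT-other_field = extract_other

# transforms.conf
[extract_other]
# Capture the contents of the last double-quoted field on the line
REGEX = "(?<other>[^"]*)"\s*$
```

Since the sample value is pipe-delimited (`other_1234|xxxxxx:xxxxxx|...`), a follow-up `| makemv delim="|" other` at search time could split it further if needed.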

count list host count by sourcetype, sourcetype by index


Hi, This seems like it would be simple, but I can't figure it out for the life of me. I really like the stats list layout for dashboard panels where you can have a list of results as a subset of parent results. The most useful use case for this, IMO, is to create a list of all splunk indexes, and the sourcetypes associated with each index (as a list). This is pretty easy:

index=* earliest=-30m@m | dedup index sourcetype | stats list(sourcetype) by index

Beautiful layout, relatively quick search, and it's almost perfect. But I want to add a count of hosts per sourcetype to the list so that the count of hosts is on the same line item as sourcetype. I thought this would work:

index=* earliest=-30m@m | dedup index sourcetype host | stats count(host) as HostCount by sourcetype |stats list(HostCount) by index sourcetype

but alas... it doesn't. I'm pretty sure it's Splunk's fault, because clearly my logic is flawless. :) However, could someone please help? Thanks.
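The likely issue with the second search above is that the first `stats ... by sourcetype` drops the index field, so the second `stats` has nothing left to group on. A sketch that keeps both keys through the first aggregation (assuming a distinct-host count per sourcetype is what's wanted):

```
index=* earliest=-30m@m
| stats dc(host) as HostCount by index sourcetype
| stats list(sourcetype) as Sourcetype list(HostCount) as HostCount by index
```

`dc(host)` also removes the need for the explicit `dedup`, and the two `list()` columns stay row-aligned because they are built from the same grouped rows.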

The files props.conf and transforms.conf don't work


Hi guys,

I made the following configuration in props.conf in Splunk, located in:

C:\Program Files\Splunk\etc\system\local

[sctmainframe]
NO_BINARY_CHECK = 1
SHOULD_LINEMERGE = false
pulldown_type = 1
REPORT-myname = mainframe-extract

And in the transforms.conf file:

[mainframe-extract]
EXTRACT = (?<INSTCLI>\d{3})(?<BANCOCLI>\d{3})(?<AGENCLI>\d{4})

The sourcetype "sctmainframe" appears as a new sourcetype in Splunk Web administration, but the extraction doesn't work correctly.

What am I doing wrong?
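A likely culprit: `EXTRACT-<name>` is a props.conf setting; a transforms.conf stanza referenced by `REPORT-` takes its pattern in a `REGEX` setting instead. A sketch of the corrected pair, keeping the same stanza names:

```
# props.conf
[sctmainframe]
NO_BINARY_CHECK = 1
SHOULD_LINEMERGE = false
pulldown_type = 1
REPORT-myname = mainframe-extract

# transforms.conf
[mainframe-extract]
# Named capture groups become the extracted field names
REGEX = (?<INSTCLI>\d{3})(?<BANCOCLI>\d{3})(?<AGENCLI>\d{4})
```

Search-time extractions like this generally take effect after a restart or a refresh of the configuration, so that is worth checking too.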

Extract date from a varying source name


Hi Guys,

My log files have events with just the time stamp, not the date, but luckily the source name has the date in it, and Splunk automatically identifies the date from the source name and displays it with the events accordingly.

My logs: 10:32:21,453 INFO [2212] abcdxyz 10:32:21,112 INFO [2212] abcdxyz 10:32:22,409 INFO [1121] abcdxyz

Source names: server-nameA.2013-10-01 server-nameB.2013-10-01

Splunk is showing the events after indexing like:

2013/10/01 10:32:21,453 INFO [2212] abcdxyz 2013/10/01 10:32:21,112 INFO [2212] abcdxyz 2013/10/01 10:32:22,409 INFO [1121] abcdxyz

But sometimes my log files also have a version number appended at the end.

Source names with version number: server-nameA.2013-10-01.1 server-nameB.2013-10-01.1

Now Splunk is also consuming the version number as part of the date, and after indexing my events look like:

2010/10/01 10:33:23,343 INFO [2232] abcdxyz 2010/10/01 10:33:19,144 INFO [2394] abcdxyz 2010/10/01 10:34:23,239 INFO [1943] abcdxyz

I want the date to be 2013/10/01, not 2010/10/01, when the source name is something like server-nameA.2013-10-01.1.

I have searched the internet for an answer, but none of the results I found gave a working solution. Can anyone please help me fix this issue?

Many Regards...
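The source-name parsing needed here can be expressed as a regex that tolerates the optional trailing version suffix; whatever fix is applied inside Splunk (e.g. a custom datetime.xml, or renaming the files) would need something equivalent. A quick sketch of the pattern, using the source names from the question:

```python
import re

def date_from_source(source):
    """Pull the YYYY-MM-DD out of a source name, ignoring an
    optional trailing version suffix like '.1'."""
    m = re.search(r"\.(\d{4}-\d{2}-\d{2})(?:\.\d+)?$", source)
    return m.group(1) if m else None

print(date_from_source("server-nameA.2013-10-01"))    # 2013-10-01
print(date_from_source("server-nameA.2013-10-01.1"))  # 2013-10-01
```

The `(?:\.\d+)?$` makes the version suffix explicit and optional, so `.1` can never be mistaken for part of the date.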

How to ignore a field during search so total count is correct


I have repeating error events that are identical except for a single id field value that is incremented for each occurrence. I want them to be considered the same, so I get an accurate total of occurrences of that error rather than each one being counted as a different error message.

The scenario actually occurs in two ways: one with the field value changing, and another with a value in the actual error message changing. I assume the way to ignore it may differ for a field vs. a string inside another field, so this may be a two-part answer.
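One common approach for the in-message case is to normalize the varying part away at search time and count the normalized text. A sketch, where `message` and `id` stand in for the actual field names:

```
... your base search ...
| eval norm_message=replace(message, "\d+", "N")
| stats count by norm_message
```

For the separate-field case, it may be enough to simply leave `id` out of the `by` clause (or `eval` it to a constant), since `stats` only distinguishes events by the fields you group on.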

Predict command and custom alert condition


index=symantec (virus OR "security risk" OR "web attack") NOT "Tracking Cookies" earliest=-30d@d latest=now | rex "(?i) name: (?P<virus_host>[^,]+)" | timechart span=1h count(virus_host) as count | predict count | rename upper95(prediction(count)) as upper95 | where count>upper95

What I am trying to do is get an alert going that will run hourly and determine if the number of Viruses seen by Symantec in the last hour is greater than what has been predicted as the upper 95%. I have this search going back 30-days in 1-hour buckets to get the most accurate prediction going forward. I do not wish to alert on stuff 30-days old, just the last hour. What can I do to still get the more accurate prediction from 30-days worth of data but only alert on the last hour of data?
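One possible way to keep the full 30-day window for the prediction but only alert on the newest bucket is to filter on `_time` after the comparison, e.g.:

```
index=symantec (virus OR "security risk" OR "web attack") NOT "Tracking Cookies" earliest=-30d@d latest=now
| rex "(?i) name: (?P<virus_host>[^,]+)"
| timechart span=1h count(virus_host) as count
| predict count
| rename upper95(prediction(count)) as upper95
| where count>upper95 AND _time>=relative_time(now(), "-1h@h")
```

This is a sketch of the idea rather than a verified alert: `predict` still sees all 30 days of buckets, and the final `where` discards any threshold breaches older than the last whole hour, so the alert condition "number of results > 0" would only fire on recent activity.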


JMX_ta app with Universal Forwarder


I want to be able to install the jmx_ta app on a Universal Forwarder. I've read a lot of questions on here, and the default answer seems to be "install a Python runtime and it *should* work". It doesn't. As soon as I provide a Python runtime for Splunk, I get the following error when restarting the process...


    $ sudo /opt/splunkforwarder/bin/splunk start
    Checking prerequisites...
    Checking mgmt port [8089]: open
    Checking configuration...  Done.
    Checking critical directories...        Done
    Checking indexes...  Problem parsing indexes.conf: stanza=default Required parameter=blockSignatureDatabase not configured.  
    Validating databases (splunkd validatedb) failed with code '1'.  If you cannot resolve the issue(s) above after consulting documentation, please file a case online at http://www.splunk.com/page/submit_issue

Before adding the runtime, it would never (visibly) check the 'critical directories' and the 'indexes'. I don't intend to do any indexing on this server at all. I just want to forward the raw data to the search head.

I would also like to point out that I have this working on a test box using a FULL install of Splunk. I need to get it working on the Universal Forwarder. Any ideas? Thanks

Problem installing TA-uas_parser


I am attempting to get this TA working but am encountering errors when trying to update the cache via the update_cache.py script. My Splunk servers do not have internet access, so I installed this TA locally on my Windows 7 machine, where I also have Splunk installed, to test with. I am encountering errors any way I try this. Any help would be greatly appreciated; this is just what I have needed for a project I am working on.

Errors I am getting:

Running the following command from the Windows command prompt:

c:\$Splunk_home\etc\apps\TA-uas_parser\bin> $Splunk_home\bin\python.exe update_cache.py

Error: ImportError: No module named site

When installing Python and running the command I get this:

c:\> python.exe update_cache.py

File "update_cache.py", line 6
    print "Cache data updated."
SyntaxError: invalid syntax

I really need to get this add-on working; it would help us out a ton! Thanks in advance!

Roles won't display in add/edit user/role page, "Failed to fetch data: Not Found"


In the web_services.log file I see this error at the same time:

2013-11-14 17:46:44,232 ERROR   [528552d33521c6990] eai:164 - Failed to fetch dynamic element content from the server for splunkSource:/authentication/roles
[HTTP 404] https://127.0.0.1:8089/servicesNS/admin/launcher/authentication/roles?count=-1; [{'type': 'ERROR', 'code': None, 'text': 'Not Found'}]

The failure shows with red letters above the role list boxes: "Failed to fetch data: Not Found"

Any ideas where to start? I had not seen this problem before, but I hadn't added a role in some time. Upgraded from 5.0.2 -> 6.0 recently. Pooled search heads. 64-bit SUSE Linux.

PCI CGI vulnerability

$
0
0

We're getting PCI security alerts on the CherryPy web engine. Is there some method of resolving this issue, i.e. installing a later version of the web engine?

Thanks,

Bill

Here's the alert:

Server IP = X.X.X.X

THREAT:When the service made an HTTP request for a CGI file that was found to exist on the Web server host, the Web server returned an HTTP redirection page containing unsanitized user-supplied input to at least one of the CGI file's parameters. Thus the host is vulnerable to cross-site scripting attacks.

A list of CGI vulnerable files can be found in the Result section below.

IMPACT:By exploiting this vulnerability, malicious scripts could be executed in a client browser which processes the content of an HTTP redirection page returned by the Web server.

SOLUTION:Contact the vendor/author of the CGI file(s) for a solution to this issue.

RESULTS:GET /en-US/search?client="><script>alert(document.domain)</script>&site="><script>alert(document.domain)</script>&output="><script>alert(document.domain)</script>&q="><script>alert(document.domain)</script>&proxystylesheet="><script>alert(document.domain)</script> HTTP/1.1 Host: X.X.X.X:8000

HTTP/1.1 303 See Other Date: Wed, 04 Jul 2012 19:12:56 GMT Content-Length: 618 Content-Type: text/html;charset=utf-8 Location: http://X.X.X.X:8000/en-US/search/?client="><script>alert(document.domain)</script>&site="><script>alert(document.domain)</script>&output="><script>alert(document.domain)</script>&q="><script>alert(document.domain)</script>&proxystylesheet="><script>alert(document.domain)</script> Server: CherryPy/3.1.2 Set-Cookie: session_id_8000=b35a7fbfe22ca405f9db492b63aa1544f6aa0846; expires=Thu, 05 Jul 2012 19:12:56 GMT; httponly; Path=/

This resource can be found at http://X.X.X.X:8000/en-US/search/?client="><script>alert(document.domain)</script>&site="><script>alert(document.domain)</script>&output="><script>alert(document.domain)</script>&q="><script>alert(document.domain)</script>&proxystylesheet="><script>alert(document.domain)</script></a

Active Directory LastLogonTimestamp EVAL/WHERE Date Math


I'm attempting to locate systems that have not logged into AD for 90 days. I am using the following search;

index=foo   | where lastLogonTimestamp<relative_time(now(), "-90d" )  | dedup cn  | table  cn,lastLogonTimestamp,operatingSystem

This does not appear to function: it returns results, but the lastLogonTimestamp field appears to return ALL dates. Reversing the query returns garbage results; every field returned says "OptionalProperties".

If I recall, this variable is stored in some Microsoft tick time similar to epoch; however, Splunk seems to display it properly in the following format:

07:44.36 PM, Sun 11/17/2013

Is Splunk automatically converting this? Do I have to define a format in order to evaluate it or use a where command?
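If the raw value really is a Windows FILETIME (100-nanosecond ticks since 1601-01-01 UTC, which is how AD stores lastLogonTimestamp), the usual conversion to Unix epoch is value/10^7 minus an offset of 11644473600 seconds; the `where` would then compare that epoch against `relative_time(now(), "-90d")`. A sanity check of the arithmetic in Python (the `stale` helper and its 90-day cutoff are illustrative):

```python
from datetime import datetime, timezone

# Seconds between 1601-01-01 and 1970-01-01 (the FILETIME vs Unix epoch gap)
EPOCH_DELTA = 11_644_473_600

def filetime_to_epoch(ft):
    """Convert a Windows FILETIME (100 ns ticks since 1601) to Unix seconds."""
    return ft / 10_000_000 - EPOCH_DELTA

# 116444736000000000 ticks is exactly 1970-01-01 00:00:00 UTC.
print(filetime_to_epoch(116444736000000000))  # 0.0

def stale(ft, now=None):
    """True if the last logon is more than 90 days before 'now'."""
    now = now or datetime.now(timezone.utc).timestamp()
    return filetime_to_epoch(ft) < now - 90 * 86400
```

In SPL the equivalent would be something like `| eval lastLogon_epoch=lastLogonTimestamp/10000000-11644473600 | where lastLogon_epoch<relative_time(now(), "-90d")`, though that assumes the field still holds the raw tick count; if Splunk has already rendered it as a formatted date string, it would need `strptime` instead.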


