Channel: Latest Questions on Splunk Answers

Monitoring of Java Virtual Machines (jmx) - pidFile


I have set up the .xml file used by jmx.py so that it reads the PID to monitor from a file.

<jmxserver pidfile="/var/tmp/APP1.pid" jvmdescription="APP1-Jboss">

I then have a script that runs once an hour to update the PID in /var/tmp/APP1.pid. This is to ensure that stats are collected even if the app is restarted.
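For reference, the hourly refresh can be done with a short script along these lines (a minimal sketch; it assumes pgrep is available, and the pattern java.*APP1 is a hypothetical stand-in for however APP1's process is actually identified):

```python
import subprocess

def refresh_pidfile(pattern, pidfile):
    """Look up the PID of the process matching `pattern` and write it
    to `pidfile` so that jmx.py can pick up the new PID."""
    # pgrep -f matches the pattern against the full command line
    result = subprocess.run(["pgrep", "-f", pattern],
                            capture_output=True, text=True)
    pids = result.stdout.split()
    if pids:
        with open(pidfile, "w") as f:
            f.write(pids[0] + "\n")
        return pids[0]
    return None
```

Run from cron once an hour, e.g. calling refresh_pidfile("java.*APP1", "/var/tmp/APP1.pid"). Whether jmx.py re-reads the file on each run, however, is exactly the open question here.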

However, it doesn't appear that jmx.py is checking /var/tmp/APP1.pid each time it runs.

After starting splunk, JMX correctly monitors PID 781.

  • cat /var/tmp/APP1.pid

781

When the app restarts, the PID file is updated as expected:

  • cat /var/tmp/APP1.pid

14289

But JMX logs this error in splunkd.log:

ERROR ExecProcessor - message from "python /opt/splunkforwarder/etc/apps/discovery-jmx/bin/jmx.py" host=, jmxServiceURL=, jmxport=0, jvmDescription=APP1-Jboss, processID=781,stanza=jmx://discovery,systemErrorMessage="No such process"

At this point I have to restart Splunk to get it to start monitoring correctly.

Have I set this up incorrectly?

Thanks


JMX App and XML Parsing Error


Trying to get the JMX input running, and the following is being reported in the splunkd.log:

06-05-2014 03:15:47.837 -0700 ERROR ExecProcessor - message from "python /opt/splunk-home/splunk/etc/apps/jmx_ta/bin/jmx.py" Error parsing XML : null

06-05-2014 03:15:47.839 -0700 ERROR ExecProcessor - message from "python /opt/splunk-home/splunk/etc/apps/jmx_ta/bin/jmx.py" Error executing modular input : null

It is driving me nuts. I have validated and re-validated the config XML multiple times; no syntax errors. I have the schema doc and ensured the right values are present. Basically, I took config.xml, copied it, and modified the first 'jmxserver' element to look like this:

<jmxserver jvmDescription="weblogic-banapps" jmxServiceUrl="service:jmx:t3://10.201.2.58:7001/jndi/weblogic.management.mbeanservers.domainruntime" jmxuser="#####" jmxpass="#####">

Any ideas on where else I can go to get more info on the error so I can correct it?

I also figured out I can run this on the command line to reproduce the same error:

[splunk@splunk bin]$ python jmx.py --validate-arguments < config/weblogic_banapps.xml
ERROR Error parsing XML : null

Splunk Dashboards on PHP web application


Hello there,

Do you know how I can integrate and visualize Splunk dashboards in a PHP web application?

Thanks!

Realtime Alerts stop working after 1 hour of no search results


I have a real-time search that sends me an e-mail alert when a search result comes up. The triggers and alerts work fine off the saved search until it goes idle for 1 hour (no real-time search results within 1 hour); then triggers and alerts don't go through. In audit.log, I get an error like this:

06-06-2014 15:44:28.414 -0700 INFO  AuditLogger - Audit:[timestamp=06-06-2014 15:44:28.414, user=n/a, action=read_session_token, info=denied, reason="non-locally generated token", session_user="admin"][n/a]

Theoretically, I could work around the issue by creating a scheduled search that triggers every hour, but that seems pretty lame.

I updated my C:\Program Files\Splunk\etc\system\local\limits.conf to include:

[authtokens]
expiration_time = 86400

That didn’t seem to fix the problem. Any help would be appreciated.

Load old log files with correct time stamp


Hi,

I have an old log file with no year in any of the events; entries look like (May 01 01:00:04 .....). I read that it is possible to force the year with the command touch -t 201305010100 log.txt, and that works, but...

Splunk started indexing the file with the year 2013, but it set each event's timestamp to the time of indexing. I want to index the file with the year 2013 and the original timestamp of every event.

I have tried putting TIME_FORMAT = %b %d %H:%M:%S in props.conf, but it doesn't work.
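The root of the problem is visible outside Splunk too: a timestamp format with no %Y parses to a default year, so the year has to come from somewhere else (which is presumably why the touch trick influences what Splunk picks up). A quick illustration in Python:

```python
from datetime import datetime

# "May 01 01:00:04" carries no year, so strptime falls back to 1900
ts = datetime.strptime("May 01 01:00:04", "%b %d %H:%M:%S")
print(ts.year)

# Supplying the year explicitly resolves the ambiguity
print(ts.replace(year=2013).isoformat())
```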

Does anybody know how to solve this issue?

Thanks,

How to group together events based on their relative distance in _time?


Hello All,

I'm trying to figure out how to group certain events together if they happen within 1 second of each other (i.e., their _time values differ by one second or less).

Current search, as an example:

sourcetype=logins (login_server="server_01" OR login_server="server_02" OR login_server="server_03") | stats values(login_server) count(login_server) AS UniqueEventCount dc(login_server) AS UniqueServerCount by HostName, User | sort -UniqueServerCount | where UniqueServerCount > 1

What the above answers is: "Show me the events where a host and user log into two or more different login servers." What I need to add is that I only want to show events where the logins to two or more servers happen within 1 second of each other.

bucket does not do this, as two events can fall within 1 second of each other but not within the same one-second bucket boundaries.
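The grouping rule itself is simple to state: sort by time and start a new group whenever the gap to the previous event exceeds one second, which avoids the bucket-boundary problem entirely. A sketch of that logic in Python:

```python
def group_by_gap(timestamps, max_gap=1.0):
    """Cluster timestamps into groups in which consecutive events are
    no more than `max_gap` seconds apart."""
    groups = []
    for t in sorted(timestamps):
        if groups and t - groups[-1][-1] <= max_gap:
            groups[-1].append(t)   # close enough: extend the current group
        else:
            groups.append([t])     # gap too large: start a new group
    return groups

print(group_by_gap([0.0, 0.4, 0.9, 3.0, 3.5, 10.0]))
# [[0.0, 0.4, 0.9], [3.0, 3.5], [10.0]]
```

In SPL, something like `transaction HostName, User maxpause=1s` applies the same rule ahead of the stats, though transaction carries its own memory costs.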

Any ideas?

Scheduled Real-time Alerts Terminating


I have a number of real-time alerts scheduled that prior to upgrading to Splunk 6.1 would run continuously. Since the upgrade these jobs now stop alerting even though the jobs are visible in the Activity/Jobs window and are in status "Running 100%".

To get the jobs to start alerting again, I have to delete and recreate them.

Is this a known issue or have I missed a breaking change somewhere in the upgrade?

SOURCE_KEY & multiple fields issue


Hi everyone, I have encountered an issue with SOURCE_KEY and MV_ADD. I need to extract multi-value fields (shown as FRAGs below); the event looks like this:

*** 10 0 8 30 *NULL* foo 2 1 13671237459 11 1392550059 0 0 128928 4 3 3 0 bar *NULL* *NULL* 0 1 0 0 0 *NULL* 1 0 0 0 0 0 0 *NULL* 0 0 0 *NULL* 1607660 2 0 1440 0 0 1 1 0 1 0 *NULL* *NULL* 
FRAG 1 1 121238 0 0 0 0 1 2 457210 0 0 -1 0 3 0 0 6 1368450059 1234240476 0 *NULL* *NONE* 
FRAG 1 1 121258 0 0 0 0 1 2 187351 0 0 -1 0 3 0 0 6 1328450059 6235240476 0 *NULL* *NONE* 
FRAG 1 1 128518 0 0 0 0 1 2 262144 0 0 -1 0 3 0 0 6 1362410859 1233240476 0 *NULL* *NONE*

my props.conf looks like this:

[foo]
BREAK_ONLY_BEFORE = \*\*\*\s
MAX_TIMESTAMP_LOOKAHEAD = 150
NO_BINARY_CHECK = 1
pulldown_type = 1
REPORT-foo-a = foo-FRAG, foo-FRAG-fields

my transforms.conf looks like this:

[foo-FRAG]
SOURCE_KEY = _raw
REGEX      = (FRAG) ([^\r\n]+)
FORMAT     = $1::$2
MV_ADD     = true

[foo-FRAG-fields]
SOURCE_KEY = FRAG
DELIMS     = " "
FIELDS     = "field1","field2","field3","field4","field5"

Now the issue is that the system recognizes the fields but does not treat them as multi-value fields. Is there a resolution for this issue?

Thanks in advance!


Drill down on dashboard is showing large search


I have a few dashboards that display different information about top browsers. I have created a

| replace "long string here" with "user friendly here" in cs_User_Agent

in the search on each dashboard, which replaces the IIS log cs_User_Agent value with something more understandable for the user. This replace has grown larger and larger with all the different cs_User_Agent values the IIS logs generate. When the user clicks to drill down on one of the values on the browser dashboards, it takes them to the events tab, where the (very large) search string is displayed. Is there a way to keep the search field from being expanded when they drill down?

How to drop all entries to a specific index?


We've reached our license limit, so at the indexer I want to drop all log entries destined for a specific index. The documentation is clear about how to do that on a heavy forwarder, for example, but I haven't found any documentation on how to drop all traffic to a specific index at the indexer. props.conf looked promising, but it doesn't support an index key. In props.conf, I was expecting I could create a stanza like this:

[index::development] # This key is not listed in the props.conf.spec

TRANSFORMS-blackhole = blackhole

and in transforms.conf:

[blackhole]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

It just seems there has to be a way, but I haven't been able to discover it.

With inputs.conf connection_host=dns, why are events being logged where host=[ip address]?


I'm running version 6.0.2 on CentOS 6. My DNS servers are a pair of Windows Server 2008 domain controllers. Every month, when I patch and reboot these Windows servers (which I do sequentially), Splunk writes logs to the database where host=[ip address] instead of host=[fqdn]. This breaks my alerting because my alerts are (mostly) defined by hostnames, for example: host="DC*" AND "EventCode=4740"

While the logs are being written with host=[ip address], these alerts will never trigger.

Do I need to change the order of my dns servers listed in /etc/resolv.conf prior to rebooting my DNS servers? Or should I expect splunk to seamlessly send queries to the 2nd DNS to get a response?

Does Splunk perform its own DNS queries, or does it rely on the underlying OS? If it performs its own queries, is that configurable? Will changing the order of entries in /etc/resolv.conf require restarting Splunk?

I'd prefer to fix this wholly within Splunk, and without having to restart it monthly, because it takes 15 minutes to shut down.

How to get the top X-Forwarded-For IP addresses in an Apache access log?


Hello,

My data looks like this:

10.54.3.81 188.54.195.26, 10.5.81.2 - - [08/Jun/2014:13:16:08 +0000] "POST /index.php HTTP/1.1" 200 40 "" "Mozilla/5.0 (Linux; U; Android 2.3.6; en-us; GT-S5300 Build/GINGERBREAD) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1" BytesIn:3342 Bytes:596 Time:87556 Process:31989 Conn:+ Host:localhost

10.54.3.81 188.54.195.26, 10.5.81.2 - - [08/Jun/2014:13:16:08 +0000] "GET /sa-ar/%D8%A8%D9%84%D8%A7%D9%83-%D8%A8%D9%8A%D8%B1%D9%8A-q10-%D8%B3%D8%B9%D8%A9-16-%D8%AC%D9%8A%D8%AC%D8%A7%D8%A8%D8%A7%D9%8A%D8%AA-%D9%86%D8%B8%D8%A7%D9%85-%D8%A7%D9%84%D8%AA%D8%B4%D8%BA%D9%8A%D9%84-%D8%A8%D9%84%D8%A7%D9%83-%D8%A8%D9%8A%D8%B1%D9%8A-10-%D9%88%D8%A7%D9%8A-%D9%81%D8%A7%D9%8A-+-lte-%D8%A7%D9%84%D8%AC%D9%8A%D9%84-%D8%A7%D9%84%D8%B1%D8%A7%D8%A8%D8%B9-%D8%A3%D8%A8%D9%8A%D8%B6-%D8%B0%D9%87%D8%A8%D9%8A-6971438/i/ HTTP/1.1" 200 31265 "/sa-ar/%D8%B0%D9%87%D8%A8%D9%8A/%D9%85%D9%88%D8%A8%D8%A7%D9%8A%D9%84%D8%A7%D8%AA--bslash--%D8%AC%D9%88%D8%A7%D9%84%D8%A7%D8%AA-33/a-t/s/?seller=DOD_KSA%2Ctest-Shop&rpp=10&utm_source=SilverpopMailing&utm_medium=email&utm_campaign=dod_sa_ar_a_080614_O&utm_content=" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/35.0.1916.114 Safari/537.36" BytesIn:4103 Bytes:31824 Time:950381 Process:31922 Conn:+ Host:localhost

10.54.3.81 66.249.65.252, 10.5.81.2 - - [08/Jun/2014:13:16:09 +0000] "GET /sa-ar/casio/s/ HTTP/1.1" 200 20351 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +/bot.html)" BytesIn:388 Bytes:21087 Time:794695 Process:31985 Conn:+ Host:localhost

How can I get a count per IP address in a column, like:

ip_list          count
188.54.195.26    2
66.249.65.252    1
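Outside of SPL, the extraction amounts to grabbing the second address on each line (assumed here to be the X-Forwarded-For client IP, per the sample data) and counting occurrences. A Python sketch of that logic:

```python
import re
from collections import Counter

# Assumes the sample format above: "<proxy_ip> <client_ip>, <internal_ip> - - [...]"
XFF = re.compile(r"^\S+ (\d{1,3}(?:\.\d{1,3}){3}),")

def top_client_ips(lines):
    """Count the second (X-Forwarded-For) IP on each access-log line."""
    counts = Counter()
    for line in lines:
        m = XFF.match(line)
        if m:
            counts[m.group(1)] += 1
    return counts.most_common()

sample = [
    "10.54.3.81 188.54.195.26, 10.5.81.2 - - [...]",
    "10.54.3.81 188.54.195.26, 10.5.81.2 - - [...]",
    "10.54.3.81 66.249.65.252, 10.5.81.2 - - [...]",
]
print(top_client_ips(sample))
# [('188.54.195.26', 2), ('66.249.65.252', 1)]
```

In Splunk itself, the analogue would be an inline rex extraction followed by top, e.g. `... | rex "^\S+ (?<xff_ip>\d+\.\d+\.\d+\.\d+)," | top xff_ip` (the field name xff_ip is made up here).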

Thanks

automate archive data deletion


Hi - I am archiving data to the frozen dir using frozenTimePeriodInSecs, which works well. I now want to automate the deletion of this data from my frozen dir after a certain period. I have read somewhere this can be done - can someone point me to documentation that would help?
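Splunk does not age out data from the frozen archive itself, so the usual approach is an external cleanup job. A minimal sketch (the flat directory layout and mtime-based age check are assumptions about the archive):

```python
import os
import shutil
import time

def purge_frozen(frozen_dir, max_age_days):
    """Delete frozen bucket directories whose modification time is older
    than `max_age_days` days. Intended to be run periodically from cron."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for name in sorted(os.listdir(frozen_dir)):
        path = os.path.join(frozen_dir, name)
        if os.path.isdir(path) and os.path.getmtime(path) < cutoff:
            shutil.rmtree(path)
            removed.append(name)
    return removed
```

A cron-driven shell equivalent would be something like find /path/to/frozen -maxdepth 1 -type d -mtime +90 -exec rm -rf {} +, adjusted to your retention window.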

Distribute Apps to multiple Indexers


What is the best way to distribute apps to multiple indexers?

For instance, if I have dozens of indexers (installed on Linux) in my distributed environment, and I want to monitor them with the Linux App, how should I install the Linux App TA on the indexers? Should I install a forwarder on each indexer, and then manage them all with the deployment server?

The point is: if I have to change the configuration of the TA, or even install new apps, I would like to do it from a central point instead of configuring each server manually.

Is it ok to have the forwarder on the same machine as the indexer?

Thanks!

Custom Python Script invoked as a custom command failing


Hi,

I have a Python script that pulls 1 million rows, populates a Python dictionary, and returns the results to Splunk. The script fails intermittently. Are there any parameters I need to set in commands.conf to fix this?

Is volume the issue?
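Quite possibly: accumulating a million rows in a dictionary before emitting anything drives up peak memory, and result-size limits (such as maxresultrows in limits.conf) can also truncate or kill a command. One common mitigation is to stream rows out as CSV as they are produced rather than buffering them all first; a minimal sketch, with made-up input data:

```python
import csv
import sys

def stream_results(rows, out=sys.stdout):
    """Emit result rows incrementally as CSV instead of building the
    entire result set in memory first."""
    writer = None
    for row in rows:                # `rows` is any iterable of dicts
        if writer is None:
            writer = csv.DictWriter(out, fieldnames=list(row))
            writer.writeheader()
        writer.writerow(row)

# Made-up example rows; a real command would yield rows one at a
# time from its data source instead of materializing them.
stream_results(iter([{"id": 1, "value": "a"}, {"id": 2, "value": "b"}]))
```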


No data in app.


Hi all,

I've installed everything correctly, and I have quite a lot of data being logged in Splunk now (nearly 20GB per day). When I search for one of the SQL servers as "host=servername", I can see source = WinEventLog://Security and sourcetype = WinEventLog:Security, so it's definitely logging data and indexing it in Splunk.

However, the Microsoft SQL Server app itself isn't showing any data. When I run all 5 lookup generators, they all show no results, despite my seeing data indexed in Splunk for the SQL server.

How can I get the app to find the data?


Splunk search not returning results prior to ./splunk clean eventdata


As the title says, after cleaning the event data and reindexing, a Splunk search doesn't return events from before the clean. How can I change this?

Thanks

International site best practices


We have two small international sites. What's the best practice for getting that data into our main Splunk here in the U.S.? Our main concern is bandwidth usage.

Should we have an indexer at each site as detailed in the Multi-Site Cluster doc? Should we first try using compression on the data flowing back to the US?

We have an enterprise license, BTW.

Issue with deploying Splunk App for MS SQL


For the past few days I have been trying to deploy the SQL app. I found that not all the PowerShell scripts return results. Below are the 4 sourcetypes in the mssql index:

MSSQL:Instance:Service
Powershell:ScriptExecutionSummary
Powershell:ScriptExecutionErrorRecord
MSSQL:Host:Memory

Host:Memory and Instance:Service tell me that there is no problem with the execution of the PS scripts.

While researching why the lookup generators are showing no results, I found a lot of other sourcetypes missing.

Below is the result of this search:

eventtype=mssql sourcetype="Powershell:ScriptExecutionErrorRecord" | dedup ErrorMessage | table ErrorMessage Exception

ErrorMessage

Could not find file 'C:\Program Files\SplunkUniversalForwarder\var\lib\splunk\modinputs\powershell\DBUsers\DBUsers.xml'.

Could not find file 'C:\Program Files\SplunkUniversalForwarder\var\lib\splunk\modinputs\powershell\DBInstances\DBInstances.xml'.

Could not find file 'C:\Program Files\SplunkUniversalForwarder\var\lib\splunk\modinputs\powershell\Databases\Databases.xml'.

Exception

System.IO.FileNotFoundException: Could not find file 'C:\Program Files\SplunkUniversalForwarder\var\lib\splunk\modinputs\powershell\DBUsers\DBUsers.xml'. File name: 'C:\Program Files\SplunkUniversalForwarder\var\lib\splunk\modinputs\powershell\DBUsers\DBUsers.xml' at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath) at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy, Boolean useLongPath, Boolean checkHost) at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options, String msgPath, Boolean bFromProxy) at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share) at System.Management.Automation.PathUtils.OpenFileStream(String filePath, PSCmdlet command, Boolean isLiteralPath)

System.IO.FileNotFoundException: Could not find file 'C:\Program Files\SplunkUniversalForwarder\var\lib\splunk\modinputs\powershell\DBInstances\DBInstances.xml'. File name: 'C:\Program Files\SplunkUniversalForwarder\var\lib\splunk\modinputs\powershell\DBInstances\DBInstances.xml' at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath) at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy, Boolean useLongPath, Boolean checkHost) at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options, String msgPath, Boolean bFromProxy) at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share) at System.Management.Automation.PathUtils.OpenFileStream(String filePath, PSCmdlet command, Boolean isLiteralPath)

System.IO.FileNotFoundException: Could not find file 'C:\Program Files\SplunkUniversalForwarder\var\lib\splunk\modinputs\powershell\Databases\Databases.xml'. File name: 'C:\Program Files\SplunkUniversalForwarder\var\lib\splunk\modinputs\powershell\Databases\Databases.xml' at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath) at System.IO.FileStream.Init(String path, FileMode mode, FileAccess access, Int32 rights, Boolean useRights, FileShare share, Int32 bufferSize, FileOptions options, SECURITY_ATTRIBUTES secAttrs, String msgPath, Boolean bFromProxy, Boolean useLongPath, Boolean checkHost) at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share, Int32 bufferSize, FileOptions options, String msgPath, Boolean bFromProxy) at System.IO.FileStream..ctor(String path, FileMode mode, FileAccess access, FileShare share) at System.Management.Automation.PathUtils.OpenFileStream(String filePath, PSCmdlet command, Boolean isLiteralPath)

