I am curious if anyone has attempted to or is currently using an F5 Big-IP LTM as a reverse proxy for Splunk web. I've consulted Google U, but haven't been successful.
Splunk Single Sign-On With F5 Big-IP
join two event logs between two specific times
I have two indexes that I have successfully joined, they are indexA and indexB. There is a field in the resulting (joined) event FieldC. I have another index, indexY with FieldD. I need to join this indexY to indexA and indexB. This works ok.
index=indexA FieldC | join FieldC [search index=indexB FieldC] | join FieldD [search index=indexY FieldD] | table _time, FieldC, FieldD
Now the tricky bit: I have indexE, which has a start and a finish event. How do I run the double join between the two time events (logon and logoff) in indexE?
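One sketch of this, assuming indexE carries (hypothetical) user and action fields: build each logon/logoff window first with transaction, then run the joined search once per window with map. Everything except FieldC/FieldD and the index names is an assumption:

```
index=indexE (action=logon OR action=logoff)
| transaction user startswith="action=logon" endswith="action=logoff"
| eval start=_time, finish=_time+duration
| map maxsearches=50 search="search index=indexA FieldC earliest=$start$ latest=$finish$
    | join FieldC [search index=indexB FieldC]
    | join FieldD [search index=indexY FieldD]
    | table _time, FieldC, FieldD"
```

Note that map launches one search per logon/logoff pair, so this can be slow if there are many sessions.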
Search sourcetypes by forwarders
I need to collect a list of sourcetypes for each forwarder using a search query. I can get the forwarder list from metrics.log and the sourcetype list from license_usage.log separately in the _internal index. Is there any way to get all the sourcetypes configured on each forwarder using a single search query?
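One way to sketch this in a single search, using the h (host) and st (sourcetype) fields that license_usage.log writes on type=Usage events. One caveat: h is the host that produced the data, which may not match the forwarder name if data is relayed through an intermediate forwarder:

```
index=_internal source=*license_usage.log* type=Usage
| stats values(st) AS sourcetypes by h
| rename h AS forwarder
```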
log4j truncating the log entry
We are noticing that some log entries are getting truncated. We are using the log4j sourcetype.
The actual log entry looks like the one below; however, many times we only see the first two lines, and everything from the line starting with Title: onwards is cut off. Any ideas how to fix it?
Splunk and the forwarder are both version 5.0.3.
2013-12-10 10:11:27,986 INFO [something.here] :-) Transfer successful! Bytes: 508,174,896, ET: 0:00:12.604
ID: 1f1496c2-cea5-4148-ade2-e625ef6a2e82
Title: ABCD - 11/23/12 EFGH - Something HERE - username (00:11:48;00 - 00:12:22;00)
SRC: source.name:host=my.fqdn.hostname,path=/path/to/file.txt,port=21,type=TypeOfFile
DEST: destination.name.1001:host=10.11.12.13,name=servername,path=/1111/,poolId=2222,port=21,type=Container,zoneId=1001
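If the later lines are being broken off into separate events rather than truly truncated, forcing event breaks only at the leading timestamp may help. A sketch for props.conf on the indexer (or a heavy forwarder, wherever parsing happens), assuming the stanza name matches your sourcetype; the regex mirrors the timestamp format in the sample above, and the limits are illustrative:

```
[log4j]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = ^\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2},\d{3}
MAX_EVENTS = 512
TRUNCATE = 100000
```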
Splunk Duplicating IIS Log data
All,
I've recently started forwarding IIS log data to Splunk, and there is at least one file that keeps sending duplicate data. This file is the log file in a W3SVC103 folder. The log file in W3SVC3 is sent without any duplicate data popping up.
I know Splunk hashes the file names in some way to see if there is a new file detected, so my guess is that Splunk occasionally thinks this is a new file. Is there a way to work around this issue? Is it possible that I have some property set up wrong?
I'm just looking for any potential reasons that duplicate events might be getting sent.
Thanks, Bruce
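If Splunk is confusing the file with another (or with a "new" file), one commonly tried sketch is to make the file identity check more specific in inputs.conf on the forwarder. The path and length below are assumptions; crcSalt = <SOURCE> (keying each file by its full path) is an alternative, with the caveat that either change can trigger a one-time re-read of existing files:

```
[monitor://C:\inetpub\logs\LogFiles\W3SVC103]
sourcetype = iis
initCrcLength = 1024
```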
Where do I have to add props.conf to identify sourcetype based on filename?
Hi,
I want to create my own sourcetype on indexer based on file name coming from multiple forwarders.
I read the docs and found that we can do this in props.conf as follows:
[source::.../JMSConsumerlog.log]
sourcetype = JMSConsumerLog
However, I added this under system/local on the indexer as well as on the forwarders, but it's not working.
Please Help.
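For reference, a minimal sketch of the two usual placements. With universal forwarders, parsing (including sourcetype assignment) happens on the indexer, so the [source::...] form belongs in the indexer's props.conf; alternatively, set the sourcetype directly in each forwarder's inputs.conf. The monitor path below is an assumption, and either change only affects data indexed after a restart:

```
# props.conf on the indexer ($SPLUNK_HOME/etc/system/local)
[source::.../JMSConsumerlog.log]
sourcetype = JMSConsumerLog

# OR: inputs.conf on each forwarder
[monitor:///opt/app/logs/JMSConsumerlog.log]
sourcetype = JMSConsumerLog
```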
IIS log file data duplication - "Checksum for seekptr didn't match, will re-read entire file"
I have a base install of 1 indexer and a few UFs. Both the indexer and UFs are version 6.0, build 182037 (UFs are Windows 2012, indexer is on Ubuntu).
In the UF's etc\system\local\inputs.conf I have a basic stanza:
[monitor://C:\inetpub\logs\LogFiles\W3SVC1]
sourcetype = iis
index = iis_logs
disabled = false
After making the change above and restarting the UF, it starts reading the IIS logs, then logs this entry:
12-02-2013 11:54:39.390 -0500 INFO WatchedFile - Checksum for seekptr didn't match, will re-read entire file='C:\inetpub\logs\LogFiles\W3SVC1\u_ex131202.log'.
12-02-2013 11:54:39.390 -0500 INFO WatchedFile - Will begin reading at offset=0 for file='C:\inetpub\logs\LogFiles\W3SVC1\u_ex131202.log'.
12-02-2013 11:54:39.437 -0500 INFO WatchedFile - Resetting fd to re-extract header.
And then a couple of minutes later, the above 3 lines repeat... then again, and again, duplicating data, using up the indexing quota, and chewing through disk space. I am not the only person with this issue, judging from a quick search through the answers - here is one. I tried the workaround in this post and it worked, but since Splunk 6.0 changed the way IIS logs are handled (see this product announcement), I thought I'd try to use the new way instead of hacking it to make it work and (probably) eventually breaking something when this gets fixed.
Does anyone have any suggestions? An official fix maybe?
Thanks in advance!
Time format in DB query result
I am using Splunk DB Connect to pull out some data to create a dashboard, but I'm having difficulty getting the time format corrected in the search result. The times look like epoch seconds; how do I convert them to a date-month-year format? Below is a sample of the search result; I am trying to get the CREATION_DATE and LAST_UPDATE_DATE time formats adjusted.
CREATION_DATE  DESCRIPTION                         LAST_UPDATE_DATE  USERNAME
1384405200     xnje411 server monitoring addition  1385010000        Melvin Bolden (a056648)
1384318800     snjw100 server monitoring addition  1385960400        Melvin Bolden (a056648)
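If those values are Unix epoch seconds, they can be converted at search time. A sketch using convert, assuming the field names shown in the sample (strftime in an eval works equally well):

```
... | convert timeformat="%d-%b-%Y" ctime(CREATION_DATE) ctime(LAST_UPDATE_DATE)
```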
Using Stats Command
This search works great to give me a list of hosts showing how much license each used over a 1-day period, but when I put it in a bar graph it does not work well, because the stats command provides an OVERALL total as well as a total for each host. How do I remove the overall total and only show the totals for the top 5 hosts?
index="_internal" source="*license_usage.log" | rename h as host b as bytes | eval my_splunk_server = splunk_server | fields source mysourcetype host bytes pool originator my_splunk_server | eval mbytes=((bytes/1024)/1024) | stats sum(mbytes) as mbytes by host
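One sketch of a fix, on the assumption that the "overall" rows come from non-Usage event types in license_usage.log (e.g. pool or rollover summaries): filtering on type=Usage keeps only per-host usage events, and sort plus head keep the top 5:

```
index=_internal source=*license_usage.log* type=Usage
| rename h AS host, b AS bytes
| eval mbytes=bytes/1024/1024
| stats sum(mbytes) AS mbytes by host
| sort - mbytes
| head 5
```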
passing user id for lookup query
I am glad I found an app that gives me the ID of the user who logged in. That will help me in some way.
But my main goal is as follows:
Display a list of services whose owner is the person who logged in. The service list is the output of a search query, and the service-to-owner link is present in a lookup table, but it is not a direct link: a service is linked to group names in the lookup table (1 service is associated with many groups). Now, when a user logs in, I want to get the user ID, query LDAP for the user's group list (all of this when the session starts), store it in the session, and then, when the user goes to the dashboard, use the group list from the session plus the service-group lookup to filter the data. Is this possible? Can I store something in the session when a user logs in?
Graphical email alerts
I created a bar chart of results using a saved search - I need to present the same bar chart view in my email alert. Please help.
Adding additional Fields?
Is there a way to add additional fields like File Owner or File Creation Date? I'm having difficulty finding the field names from DLP. Any help would be greatly appreciated.
when is it safe to delete oneshot input file?
Hello. I have a script that invokes the command-line splunk tool on a single indexer/search head to oneshot-index log files. Is it safe to delete the input log file after splunk oneshot returns with status 0? The reason I ask is that the search status page shows the number of indexed events ticking upward for a while after the command returns. It seems to work, but I don't want to do it this way if it is not safe.
managing log.cfg through deployment server
I am trying to minimize the noise level (across the WAN) generated by Splunk to the greatest degree possible.
Reviewing index=_internal source=splunkd, I see that each of my universal forwarders is forwarding lines from splunkd.log. This log file is very noisy, with most components logging INFO-level events by default. I want to change most of the logging levels to >= WARN.
I know this can be done by manually altering logging levels in etc/log.cfg. Does anyone have any experience managing this configuration as a deployment app? I imagine it would be possible with deployment of a script to execute the line changes. Is this a bad idea?
Inputs appreciated.
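A sketch of the override file such a deployed script could drop in place: etc/log-local.cfg overrides etc/log.cfg and is less likely to be clobbered by upgrades. A deployment app cannot place files outside etc/apps by itself, so a scripted input (or the script you mention) would have to copy it into position and restart the forwarder. The category names below are illustrative examples, not a complete list:

```
# $SPLUNK_HOME/etc/log-local.cfg on each universal forwarder
rootCategory=WARN,A1
category.TailingProcessor=WARN
category.WatchedFile=WARN
category.ExecProcessor=WARN
```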
Expand json messages by default
We have JSON data being fed into Splunk. How can I instruct Splunk to show me the JSON object expanded by default? If default expansion is not possible, can I query such that the results are expanded? Right now they are collapsed, and I have to click to get to the JSON fields I want.
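For the query side of the question, spath extracts JSON paths into ordinary fields you can table directly, instead of drilling into the collapsed raw view. A sketch with a hypothetical index and hypothetical path names:

```
index=myjson sourcetype=json_data
| spath
| table message.id message.status
```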
Timechart Graph extends into the future
index=summary_security earliest=-1d@d latest=now orig_sourcetype=dhcp | timechart count by orig_sourcetype | eval marker = "today" | eval _time = _time+1800 | append [search index=summary_security earliest=-7d@d latest=-6d@d orig_sourcetype=dhcp | timechart count by orig_sourcetype | eval marker = "last week" | eval _time = _time+86400*7+1800] | timechart sum(dhcp) by marker
I'm using the above search to create a week-over-week comparison of my sourcetype counts. The problem is that my "last week" data flatlines after the day I'm looking at and continues for a week into the future, creating a bunch of white space in my graph.
Any ideas how to solve this one?
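One sketch: after the +7d shift, the "last week" series extends past the present, so trim the final timechart rows to now. Appended to the search above:

```
... | timechart sum(dhcp) by marker
| where _time <= now()
```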
Duplicate IIS event logs | WatchedFile - Checksum for seekptr didn't match
I'm receiving duplicate events from IIS logs being sent through the universal forwarder.
The forwarder's splunkd.log is showing:
10-24-2013 14:45:02.882 +1100 INFO WatchedFile - Checksum for seekptr didn't match, will re-read entire file='C:\path\to\iis\logs\u_ex131024.log'.
10-24-2013 14:45:02.882 +1100 INFO WatchedFile - Will begin reading at offset=0 for file='C:\path\to\iis\logs\u_ex131024.log'.
10-24-2013 14:45:02.882 +1100 INFO WatchedFile - Resetting fd to re-extract header.
Splunk versions are:
- Splunk 6.0.182037
- Splunk universal forwarder 6.0.182611
inputs.conf
[monitor://C:\path\to\iis\logs\*.log]
disabled = false
sourcetype = iis
props.conf (as per universal forwarder defaults)
[iis]
pulldown_type = true
MAX_TIMESTAMP_LOOKAHEAD = 32
SHOULD_LINEMERGE = False
INDEXED_EXTRACTIONS = w3c
detect_trailing_nulls = auto
Any ideas where I am going wrong?
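Until there's an official fix, one workaround sketch is to sidestep the built-in iis sourcetype (and its INDEXED_EXTRACTIONS = w3c, which seems implicated in the header re-extraction loop) by assigning a custom sourcetype on the UF and handling parsing at the indexer. All names below are assumptions, and this trades away the 6.0 indexed-extraction behavior:

```
# inputs.conf on the universal forwarder
[monitor://C:\path\to\iis\logs\*.log]
disabled = false
sourcetype = iis_plain

# props.conf on the indexer
[iis_plain]
SHOULD_LINEMERGE = false
MAX_TIMESTAMP_LOOKAHEAD = 32
```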
Can I upgrade Splunk from 5.0.5 to 6.0.1 without upgrading to 6.0.0 first?
I am upgrading my Splunk environment from 5.0.5 to 6.0.X. 6.0.1 was just released today. Can I upgrade directly to 6.0.1 or do I need to upgrade to 6.0.0 first and then from 6.0.0 to 6.0.1?
Receiving data via Splunk Forwarder, I want to forward it as syslog
The original data is NOT syslog, and it's coming in via a universal forwarder, but I would like to forward it from my Splunk indexer onward to a 3rd-party receiver as UDP syslog. Can we take data that comes from a monitor stanza in a universal forwarder, index it, and then also send it out in raw syslog format? Has anyone faced this challenge and come up with a solution?
Outputs.conf:
[syslog:syslog_out]
server = 209.83.194.68:514
type = udp
timestampformat = %b %e %H:%M:%S
Transforms.conf
[trapfields]
DELIMS = "~"
FIELDS = A1,A2,A3,A4,trapagt,trapsrc,oid,A8
[syslog_routing]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = syslog_out
Props.conf
[traplog]
TZ = UTC
pulldown_type = 1
REPORT-f1 = trapfields
[impact]
TZ = UTC
pulldown_type = 1
MAX_TIMESTAMP_LOOKAHEAD=50
NO_BINARY_CHECK=1
[nc_syslog]
TZ = UTC
pulldown_type = 1
MAX_TIMESTAMP_LOOKAHEAD=50
NO_BINARY_CHECK=1
[syslog_test]
TRANSFORMS-routing = syslog_routing
Splunk version 5.0.4
How do I make a multi-dimension timechart?
I have a need to count up both failures and successes on a chart, split them by something, and then compare these values to the same time period in the past. Is it possible to do this all on one graph?
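One sketch combining all three pieces, with hypothetical index and field names: count successes and failures as separate series via count(eval(...)), append the same search shifted forward by a week, and merge the series with a wildcarded sum. To also split by a field, add "by yourfield" to each timechart (the series then get names like "success: value"):

```
index=myindex earliest=-1d@d latest=now (result=success OR result=failure)
| timechart span=1h count(eval(result="success")) AS success count(eval(result="failure")) AS failure
| append
    [ search index=myindex earliest=-8d@d latest=-7d@d (result=success OR result=failure)
      | timechart span=1h count(eval(result="success")) AS success_lastweek count(eval(result="failure")) AS failure_lastweek
      | eval _time=_time+(7*86400) ]
| timechart span=1h sum(*) AS *
```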