Channel: Latest Questions on Splunk Answers

Using Microsoft Eventing 6.0 instead of Splunk's forwarder agent


This is a follow-up to a conversation I had with Splunk engineers a year ago at SplunkLive! The conversation was about using Microsoft's Eventing 6.0 (native to Windows), which would eliminate the need to run Splunk's forwarder agent on all production devices. (Depending on the environment, not using Splunk's forwarder can have governance, security, and performance advantages; another is being able to pull information via WMI that Splunk's forwarder can't.)

The Splunk engineers I spoke to at the time were not familiar with Eventing and could not comment.

I have plans to use Splunk in the U.S. Cyber Challenge and was looking for a way to automate the deployment of the publishing rules. I'm wondering if Splunk has made any progress in using Eventing?

Using PowerShell, the initial event publication configuration could easily be distributed to hundreds of servers, as could any subsequent updates. Trevor Sullivan has a post providing the details. If you can think of any enhancements, Microsoft and the PowerShell team would like to hear from you; just post your suggestions.
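
For what it's worth, on the Splunk side an agentless setup like this usually boils down to letting Windows Event Forwarding concentrate events into the ForwardedEvents channel on a collector host, and indexing just that channel there. A minimal sketch of the input, assuming a Splunk instance (or a single forwarder) on the collector; the index name is an assumption:

    # inputs.conf on the WEF collector host; the index name is an assumption
    [WinEventLog://ForwardedEvents]
    disabled = 0
    index = wineventlog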


Include zero-count items from lookup


I have a search that checks my connection logs to track users who log into my website against a lookup CSV with about 500 users listed:

sourcetype!="*Private*" "Connected" "10.0.0.44" | transaction USERNAME maxspan=210s | lookup users.csv Username AS USERNAME | stats count by USERNAME "First Name" "Last Name" Region Country "Job Title" Role Department | table USERNAME "First Name" "Last Name" Region Country "Job Title" Role Department count | rename count AS Visits | sort -Visits

This returns all users who logged in to the website in the specified period, ordered from most visits to fewest. But now I'm more interested in the users who are NOT logging in to my website (so I can find out why). Basically, I want the full users.csv list with a Visits count for each user in each period. How do I return the 0-Visits users along with the N-Visits users together in one table?
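
One common pattern is to keep the base search, append the full lookup with a zero count, and let stats merge the two. A sketch reusing the fields above (note the lookup column is Username while the events carry USERNAME):

    sourcetype!="*Private*" "Connected" "10.0.0.44"
    | transaction USERNAME maxspan=210s
    | stats count AS Visits by USERNAME
    | inputlookup append=t users.csv
    | eval USERNAME=coalesce(USERNAME, Username)
    | fillnull value=0 Visits
    | stats sum(Visits) AS Visits by USERNAME
    | sort -Visits

A final lookup back into users.csv can re-attach the name, region, and job-title columns after the second stats.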

Index lag increasing for REST API event input


I have an event generator that simulates five servers running uberAgent. Data is sent to Splunk via the REST API. When I start the event generator, everything is fine. But while it keeps running, the index lag keeps increasing. In other words: it takes longer and longer for the events to show up in a search.

I am seeing the REST API calls as they are made in splunkd_access.log. Example:

192.168.8.1 - uainput [15/Dec/2013:18:05:38.139 +0100] "POST /services/receivers/simple?source=uberAgent&sourcetype=uberAgent%3aApplication%3aApplicationUsage&host=RDS-1&index=uberagent HTTP/1.1" 200 215 - - - 0ms

In metrics.log I can see that max_age is increasing. It starts out small and keeps getting bigger. Example:

12-15-2013 18:05:22.428 +0100 INFO  Metrics - group=per_sourcetype_thruput, series="uberagent:application:applicationusage", kbps=0.402483, eps=9.450443, kb=12.478516, ev=293, avg_age=921.771331, max_age=938

I have no errors in splunkd.log. What is happening here? Is there some kind of quota that limits the number of events to be processed?
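
Incidentally, the lag trend can be charted straight from the internal metrics quoted above:

    index=_internal source=*metrics.log* group=per_sourcetype_thruput series="uberagent:application:applicationusage"
    | timechart max(max_age) AS max_age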

Incorrect Event Date Issue


We have the Splunk free version protected by IBM Tivoli Access Manager. Splunk indexes the access logs from Access Manager. There are no logs in the system before Sep 2013, since the system was just implemented. Whenever I run a search in Splunk for events (e.g., from Feb 2013 onwards), my access gets logged in the Access Manager log with the following string:

splunk/en-US/app/search/flashtimeline?q=search%20*&earliest=1360573200&latest=1384074000

Splunk indexes this as an event that occurred in Feb 2013 (as per my example above) and shows it under Feb 2013 events, while the actual timestamp in the log is today's date. Why is Splunk treating the above as a Feb 2013 event, and how can this be fixed?
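
What is likely happening is that timestamp extraction is picking up the epoch value in the request URL (earliest=1360573200) instead of the log's own timestamp. Anchoring the extraction in props.conf usually fixes this; a sketch, assuming an Apache-style [dd/Mon/yyyy:HH:MM:SS] timestamp and a hypothetical sourcetype name:

    # props.conf on the indexer; sourcetype name and time format are assumptions
    [tam:access]
    TIME_PREFIX = \[
    TIME_FORMAT = %d/%b/%Y:%H:%M:%S
    MAX_TIMESTAMP_LOOKAHEAD = 32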

SmokePing, Cacti results into Splunk?


Anyone tried piping SmokePing or Cacti results into Splunk?

Counting xml tags in raw event


My event records are XML-based, as shown below, coming in from one file with one sourcetype:

    <transaction><id>12</id>........</transaction>
    <transaction>.....</transaction>  // a transaction tag can contain anything
    <transaction>.....</transaction>
    <error>.....</error>
    <error>.....</error>
    <transaction>.....</transaction>
    <transaction>.....</transaction>
    <error>.....</error>

I am able to extract the child tags inside each one; that's not the issue. But how do I count how many records were of type transaction and how many were of type error?
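
One approach is to pull every opening tag out of _raw with a multivalue rex, expand, and count; a sketch, with the base search assumed:

    sourcetype=mydata
    | rex max_match=0 "<(?<record_type>transaction|error)>"
    | mvexpand record_type
    | stats count by record_type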


I configured inputs.conf, but my data isn't indexed?


I configured inputs.conf, but my data isn't being indexed; through the UI, however, I can add the data.

/opt/splunk/etc/apps/$APP/local

indexes.conf:

    [_cpu]
    coldPath = $SPLUNK_DB/_cpu/colddb
    homePath = $SPLUNK_DB/_cpu/db
    thawedPath = $SPLUNK_DB/_cpu/thaweddb

inputs.conf:

    [monitor:///root/date/CPU_*.dat]
    disabled = false
    followTail = 0
    host_regex = (?i).*?(?P<hostname>\d+\.\d+\.\d+\.\d+)_
    index = _cpu
    sourcetype = cpuinfo

/opt/splunk/etc/system/local/props.conf:

    [cpuinfo]
    NO_BINARY_CHECK = 1
    SHOULD_LINEMERGE = false
    pulldown_type = 1

When I search "index=_cpu", the event count is 0. I don't know why; who can help me?


splunk for squid bytes empty in requests search table


Hi,

I'm trying to get Splunk for Squid working on Splunk v6. I am using Squid v3.1.20-2.2.

Most of the stuff works; the only thing I can't seem to figure out is the table at the bottom of the requests search view. All the columns but the bytes one have data. Manually searching for and displaying bytes values works, and the bandwidth-over-time chart on the traffic dashboard displays properly.

Any suggestions?

Thanks, Yozik

Edit: I guess I should have noted that, in order to get the other saved searches to work, I needed to remove the action="*" term; other than that, there have been no modifications to the app.


Internal 500 Errors


Our single-instance Splunk indexer/search host becomes unresponsive every week or so. The root cause has been determined to be the system running out of sockets. We increased the number of TCP ports to 55K, which delayed the onset of the condition, but it is continuing. The Splunk host is running Windows 2008 R2.

Any ideas why this host is consistently running out of sockets?

rsyslog for websphere application server


Hi

We are collecting logs to our Splunk indexer via rsyslog. We've got quite a number of Unix servers monitored in this fashion, and it is all working well. Now I want to include WebSphere application logs in rsyslog so that Splunk can pick them up from there. Do you have any recommended way of doing this, or can you let me know how to achieve it, please? Cheers
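
One common route is rsyslog's imfile module, which tails the WebSphere log files and relays them like any other syslog stream. A minimal sketch in legacy rsyslog syntax; the SystemOut.log path, facility, and indexer address are assumptions:

    $ModLoad imfile
    $InputFileName /opt/IBM/WebSphere/AppServer/profiles/AppSrv01/logs/server1/SystemOut.log
    $InputFileTag websphere:
    $InputFileStateFile stat-websphere-systemout
    $InputFileFacility local3
    $InputRunFileMonitor
    local3.* @@splunk-indexer.example.com:514

A matching TCP input and a dedicated sourcetype on the Splunk side keep the WebSphere events separable from the rest of the syslog traffic.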

Query with Thousands of "OR"s


Greetings,

I want to know the least resource-intensive way of searching for thousands of URLs in one search. What I am doing is taking the InfraGard warnings and building them into queries enterprise-wide. The latest warning had about 2,500 URLs that have been used for DDoS and botnets. Right now I have a scheduled search with URL OR URL OR URL... etc.

Is there a better way to do this? When I want to adjust the search, I have to pull it into a text editor and then put it back, because Splunk Web will crawl while I mess with it.
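
A lookup file is the usual answer here: keep the 2,500 URLs in a CSV and let a subsearch expand them, so updating the list means editing the CSV rather than the saved search. A sketch, with the lookup and field names assumed:

    sourcetype=proxy [ | inputlookup infragard_urls.csv | fields url ]

The subsearch expands to (url="..." OR url="..." ...) at search time, and the default subsearch limit of 10,000 rows comfortably covers 2,500 URLs. Alternatively, | lookup infragard_urls.csv url OUTPUT url AS matched | where isnotnull(matched) filters after the fact without expanding terms into the search string.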

Thanks and let me know if I need to be more specific.

Dave

Quality indicators for bars, charts


Is there a way to specify the color of a single value, bar, or column chart based on value ranges: green for normal, yellow for warning, and red for critical, along with a legend specifying the ranges? This can be done with iDashboards.
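
For single value panels, the rangemap command gets partway there. The range names it assigns (low/elevated/severe) map to green/yellow/red in the single value visualization when the panel's classField option is set to range; the thresholds below are assumptions:

    your_search_here
    | stats count AS value
    | rangemap field=value low=0-49 elevated=50-79 default=severe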

Displaying results table in tab switcher tab, BEFORE clicking on drilldown field in panel above


I have a dashboard with two panels. The first panel contains a drilldown table. When a value is clicked, the second panel shows three tabs with different searches, each filtered by the clicked item. The drilldown, intention, etc. all work fine. The problem is that before you click, the second panel is hidden/blank. The customer would like default search results to be shown there (as if a row had already been clicked!). I unfortunately cannot use Sideview Utils and all its wonderfulness :(

This is the XML (partial) for the dashboard:

    <module name="StaticContentSample" layoutpanel="panel_row2_col1_grp1">
      <param name="text"><h1>Library</h1></param>
    </module>
    <module name="HiddenSavedSearch" layoutpanel="panel_row2_col1_grp1" group=" " autorun="True">
      <param name="savedSearch">remote_level1_lib</param>
      <module name="ModifiedSimpleResultsTable">
        <param name="drilldown">all</param>
        <param name="showResetButton">false</param>
        <param name="displayRowNumbers">False</param>
        <module name="EnablePreview">
          <param name="enable">True</param>
          <param name="display">False</param>
          <module name="ConvertToIntention">
            <param name="intention">
              <param name="name">addterm</param>
              <param name="arg">
                <param name="libname">$click.value$</param>
              </param>
            </param>
            <module name="SimpleResultsHeader" layoutpanel="panel_row4_col1">
              <param name="entityName">results</param>
              <param name="headerFormat">Details for Remote Monitor Library $click.value$</param>
            </module>
            <module name="TabSwitcher" layoutpanel="panel_row4_col1">
              <param name="mode">independent</param>
              <param name="selected">Subsystems</param>
              <module name="HiddenSearch" layoutpanel="panel_row4_col1_grp1" group="Subsystem" autorun="True">
                <param name="search">mysearch1here</param>
                <module name="Paginator">
                  <param name="count">25</param>
                  <param name="entityName">results</param>
                  <param name="maxPages">10</param>
                  <module name="HiddenFieldPicker">
                    <param name="strictMode">True</param>
                    <module name="ModifiedSimpleResultsTable" layoutpanel="panel_row4_col1">
                      <param name="showResetButton">false</param>
                      <param name="allowTransformedFieldSelect">True</param>
                      <module name="ModifiedViewRedirector">
                        <param name="viewTarget">flashtimeline</param>
                      </module>
                    </module>
                  </module>
                </module>
              </module>
              <module name="HiddenSearch" layoutpanel="panel_row4_col1_grp2" group="Functional" autorun="True">
                <param name="search">mysearch2here</param>
                <module name="Paginator">
                  <param name="count">25</param>
                  <param name="entityName">results</param>
                  <param name="maxPages">10</param>
                  <module name="HiddenFieldPicker">
                    <param name="strictMode">True</param>
                    <module name="ModifiedSimpleResultsTable" layoutpanel="panel_row4_col1">
                      <param name="showResetButton">false</param>
                      <param name="allowTransformedFieldSelect">True</param>
                      <module name="ModifiedViewRedirector">
                        <param name="viewTarget">flashtimeline</param>
                      </module>
                    </module>
                  </module>
                </module>
              </module>
              <module name="HiddenSearch" layoutpanel="panel_row4_col1_grp3" group="BIT" autorun="True">
                <param name="search">mysearch3here</param>
                <module name="Paginator">
                  <param name="count">25</param>
                  <param name="entityName">results</param>
                  <param name="maxPages">10</param>
                  <module name="HiddenFieldPicker">
                    <param name="strictMode">True</param>
                    <module name="ModifiedSimpleResultsTable" layoutpanel="panel_row4_col1">
                      <param name="showResetButton">false</param>
                      <param name="allowTransformedFieldSelect">True</param>
                      <module name="ModifiedViewRedirector">
                        <param name="viewTarget">flashtimeline</param>
                      </module>
                    </module>
                  </module>
                </module>
              </module>
            </module>
          </module>
        </module>
      </module>
    </module>
    </view>


How can an Indexer best utilize a combination of SSD/HDD storage?


Recent Splunk versions include many acceleration technologies to speed up common search scenarios: summary indexing (3.1?), bloom filters (4.3), report acceleration (5.0), and accelerated data models (6.0). All of these techniques have a different sweet spot and still provide value today. Fundamentally, they all trade some additional storage for really fast search performance.

Fortunately, Splunk lets the admin control where all this additional storage is placed on the indexer via the indexes.conf file. However, this makes estimating disk usage, and deciding what type of data should be placed on the fastest storage, difficult to plan.

From a storage perspective, summary indexing is just a special-purpose index, so there's not much new to calculate there. So the focus of my question is on the search performance features in Splunk 4.3 and later.

Path-related indexes.conf settings:

| Setting         | Purpose                  | Advantage of fast storage                 |
|-----------------|--------------------------|-------------------------------------------|
| homePath        | Hot/warm storage         | Recent events are available more quickly. |
| coldPath        | Cold storage             | Historic searches are quicker.            |
| bloomHomePath   | Bloom filters            | ?                                         |
| summaryHomePath | Report acceleration      | ?                                         |
| tstatsHomePath  | Data model acceleration  | ?                                         |
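
For reference, these settings can be pointed at different storage tiers through volume definitions; a sketch with assumed mount points and index name (note that thawedPath cannot reference a volume):

    # indexes.conf sketch; volume names, paths, and index name are assumptions
    [volume:ssd]
    path = /mnt/ssd/splunk

    [volume:hdd]
    path = /mnt/hdd/splunk

    [web]
    homePath        = volume:ssd/web/db
    coldPath        = volume:hdd/web/colddb
    thawedPath      = /mnt/hdd/splunk/web/thaweddb
    bloomHomePath   = volume:ssd/web/bloom
    summaryHomePath = volume:ssd/web/summary
    tstatsHomePath  = volume:ssd/web/datamodel_summary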

Splunk and SSDs

Now that SSDs are becoming more economical, with very clear performance advantages, it makes sense to incorporate them into a Splunk system. But the cost is still high enough that a hybrid SSD/HDD approach provides a better retention-and-speed combination. So my question is twofold:

  1. Which of the above acceleration techniques are best suited to fast storage? (Specifically, storage with the high IOPS provided by an SSD.)
  2. What's a good way to estimate the size requirements for these different acceleration techniques?

My initial thought was simple: stick hot/warm data on SSDs and place the cold data on HDDs. I think that makes sense, but the question I had was which "auxiliary" data (bloom filters, summary data, tstats?) would benefit the most from faster storage. Real-life experience is preferred, but general insights into the typical I/O usage patterns would be helpful too.

Splunk for bluecoat tstats searches


I have recently downloaded and installed the Splunk for Blue Coat app, and I'm having some difficulty adapting it. We are using the legacy ProxySG (5.4), so I have used the bcreportmain_v1_old transform to extract the necessary fields, and this is working properly.

I am having difficulty with some of the other views, however, and it seems to be the views that attempt to use the 'tstats' command in their searches. Take, for instance, the 'Bandwidth Savings' view: I see absolutely no results, so in troubleshooting I opened the .xml to see what search query this dashboard is using. I put that into a regular search to see if it returned results, but it returns nothing at all. All searches for this view exhibit the same behavior.


Please take a look; this is for the "Requests" count, the very first result set on the page.

| tstats sum(bytes_in) AS sbi FROM bluecoat_stats | eval mb_in=round(sbi/(1024*1024), 2) | fields mb_in

I've never used the 'tstats' command before, so I'm unfamiliar with its function; however, after viewing the documentation, it looks like this query calls a named collection of data in order to present statistics on it.

I am trying to figure out what loads the "bluecoat_stats" data block, because it would seem that it may be broken, or may need some tweaking to make it work.
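
For context, a tstats namespace like bluecoat_stats is normally populated by a scheduled search that runs tscollect; if that search is missing, disabled, or erroring, every tstats-based view stays empty. A sketch of what such a populating search looks like (the base search and field list are assumptions):

    sourcetype=bluecoat
    | fields bytes_in bytes_out
    | tscollect namespace=bluecoat_stats

Checking the app's scheduled searches for one that ends in tscollect, and confirming it runs without errors, would be a sensible first step.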

Any suggestions would be greatly appreciated!

restrict scheduled real-time searches?


Hi,

Is it possible to give people the ability to execute, but not schedule, real-time searches?
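
If the role capabilities in your version split these apart, the authorize.conf side would look roughly like this; the role name is an assumption, and both capability names (rtsearch for running, schedule_rtsearch for scheduling real-time searches) should be verified against your Splunk version:

    # authorize.conf sketch; role name is an assumption
    [role_rt_users]
    importRoles = user
    rtsearch = enabled
    schedule_rtsearch = disabled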

Add Credential error


I get the following error when adding new credentials for PAN devices: Encountered the following error while trying to update: In handler 'localapps': Error while posting to url=/servicesNS/nobody/SplunkforPaloAltoNetworks/admin/passwords/

I am using admin to edit it, and admin is the owner of the app. I manually edited setup.xml and app.conf, adding the credentials, but when I open the configuration page the credentials are still empty. I can't add the credentials from the app GUI.

Does anyone know about this problem? Thanks.

Conditional searching


I'm unsure how to do the following. In our environment, some clients receive private IP addresses (which are translated to public) and others receive public addresses. I need to be able to enter a public IP address and then sift through logs to find the associated MAC address and username.

If it's a translated public IP address, I need to FIRST check for the IP in sourcetype=firewall for src_translated_ip=<publicip>.

  • If it finds a result, take the associated src_ip (i.e., the private IP address) and then search in sourcetype=dhcp for the src_mac, and then map to sourcetype=auth with the src_ip and src_mac in order to get the username.
  • If it does NOT find a result, use the original src_translated_ip and search with it as "src_ip" in sourcetype=dhcp for the src_mac, etc....

So basically, first see if it's translated; if it's not, proceed using the IP. If it is translated, find the "real" IP address, then proceed using the real IP.

I have both searches figured out independently, but I want to allow a user to simply provide the one IP address and then use if/then/else (or an equivalent) to do the heavy lifting.
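
One way to fake the if/then/else in pure SPL is a subsearch that falls back to the entered address when the firewall search finds nothing. A sketch using the field names above, with 203.0.113.10 standing in for the user-supplied IP:

    sourcetype=dhcp
        [ search sourcetype=firewall src_translated_ip="203.0.113.10"
          | head 1
          | fields src_ip
          | appendpipe [ stats count | where count==0 | eval src_ip="203.0.113.10" ]
          | fields src_ip ]
    | stats values(src_mac) AS src_mac by src_ip

The appendpipe adds a row carrying the original IP only when the firewall search returned nothing, so the outer dhcp search is always filtered by exactly one src_ip; the resulting src_mac can feed the sourcetype=auth search the same way.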

Ideas?


