Channel: Latest Questions on Splunk Answers

Link Switcher using basic xml


Hello Fellow Splunkers,

I am fairly new to Splunk, so apologies in advance if this is a silly question. I have a specific task that I am trying to achieve, and I want to use basic XML. I know this can be done through advanced XML and Sideview Utils, but I want to stick to basic XML if possible.

My requirement:

I have a certain log file for which I have defined several eventtypes, and I am also extracting several fields specific to each eventtype. I am attempting to display the top eventtypes in chart and table format, and then for each eventtype I run several searches displaying bar charts, pie charts, and tables. The number of sub-searches would vary depending on the eventtype.

Question:

I have gone through the Splunk 6 Dashboard examples, and I am thinking of using the Link Switcher. The problem is that the data items in my case are not static; they will be dynamic, based on the top eventtypes. So will the Link Switcher take a search result instead of statically defined items?

I have used the Sideview Utils Tabs and Switcher modules to achieve this, but I am wondering if there is a way to do it in basic XML. I have gone through the documentation, previously posted questions, etc., and have been unable to find anything. Any help would be appreciated!
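
For what it's worth, Splunk 6 simple XML does let form inputs populate their choices from a search via a populatingSearch element; assuming the link input honors it the same way dropdowns and radios do (worth verifying), a minimal sketch — index, token, and field names are placeholders — might look like:

<input type="link" token="etype" searchWhenChanged="true">
  <label>Top Eventtypes</label>
  <populatingSearch fieldForValue="eventtype" fieldForLabel="eventtype">
    index=mylogs | top limit=10 eventtype
  </populatingSearch>
</input>

The panels below the input would then reference $etype$ in their searches.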

Thanks


Is this a linebreaking issue?


I'm collecting events from a logfile that look like this:

270929.542: [GC 270929.542: [ParNew
Desired survivor size 1288490184 bytes, new threshold 16 (max 31)
- age 1: 34518968 bytes, 34518968 total
- age 2: 257792 bytes, 34776760 total
- age 11: 60416 bytes, 34837176 total
: 3156097K->34336K(4718592K), 0.0357680 secs] 3548065K->426305K(17301504K), 0.0359060 secs]

However, when I see them in Splunk, I only get the first line. The entire six-line record gets written to the file at once, but Splunk seems to store only the first line. Does anyone have any ideas as to what could be going on here? The last line contains the info I really want to work with.
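
If it is a linebreaking issue, one hedged props.conf sketch (assuming a dedicated sourcetype named gc_log, applied on the indexer or heavy forwarder) would be to merge lines and break only before the numeric GC timestamp that starts each record:

[gc_log]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = ^\d+\.\d+:\s+\[GC

Note that this would only affect data indexed after the change.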

Configure or add reports from multiple apps into a dashboard


Hi

I'm able to create multiple reports from searches and show all of them in a single dashboard. I can add a report to an existing dashboard within the same app (e.g., the Search & Reporting app). But I want to add search results from one app to another app's dashboard. Is that possible?

The steps I followed are:

1. Load the data and index it into any index (say index_aaaa).
2. Go to the Search & Reporting app.
3. Write a search query and make any chart.
4. Click Save -> Dashboard (note that the "existing dashboard" radio button is grayed out, which is correct for this step).
5. Give the dashboard the title "Dashboard-1" and the panel name "Chart - 1".
6. Go to the Search & Reporting app again.
7. Make another chart.
8. Click Save -> Dashboard (note that "existing dashboard" is now ENABLED, as "Dashboard-1" has already been created in the Search & Reporting app).
9. Select "existing dashboard" and choose "Dashboard-1".
10. Give the panel name "Chart - 2".
11. Go to Dashboards.
12. Select "Dashboard-1". You can see that "Chart - 1" and "Chart - 2" are displayed in the same dashboard.
13. Now install the Google Maps app from Splunk.
14. Go to the Google Maps app in Splunk.
15. Write a search query and create a map.
16. Click Save -> Dashboard. Note that the "Existing Dashboard" radio button is grayed out, even though "Dashboard-1" is already created in the Search & Reporting app.

I understand that the app context is different, but my requirement is to add this Google Maps geo-location chart into "Dashboard-1" and show it along with "Chart - 1" and "Chart - 2" (see the sketch below).
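
One avenue that may be worth checking, assuming your Splunk version supports prebuilt panel references and the map can be saved and shared as a panel: simple XML dashboards can pull in a panel from another app by name. A hypothetical sketch of Dashboard-1's source (panel and app names are placeholders):

<dashboard>
  <label>Dashboard-1</label>
  <row>
    <panel ref="chart_1"/>
    <panel ref="chart_2"/>
    <panel ref="geo_map_panel" app="GoogleMaps"/>
  </row>
</dashboard>

Whether the Google Maps app's advanced XML module can be saved as a simple XML panel at all is an open question, so treat this as a direction to investigate rather than a confirmed fix.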

Please help me.

Regards, Jayanna Hallur (jayanna.hallur@wipro.com)

ability to read remote files?


Hi,

Does Splunk have a built-in method for watching a directory on a remote server, to look for new files to download and index?
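
For context, the built-in pattern is usually not to reach out to the remote server from the indexer, but to install a Universal Forwarder on the remote host and let it monitor the directory locally. A minimal inputs.conf sketch on the forwarder (path, index, and sourcetype are placeholders):

[monitor:///var/log/myapp]
index = main
sourcetype = myapp_logs
disabled = false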

Shuttl Splunk 6 (Distributed)


Has anyone tried to deploy Shuttl onto a Splunk 6 cluster?

It seems that the last compatibility check was for 5.x, so I am unclear if it will work on 6.

It also has not been updated in a while; has the project been abandoned?

Send an alert when the Fill ratio of data processing queues exceeds a certain percentage


I am using the Splunk SoS app, and am interested in setting up some alerts around the "Fill ratio of data processing queues" metrics. I'd like to receive an alert when queue "X" is more than 75% full for more than 10 minutes.
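
The underlying data for those SoS panels comes from Splunk's own metrics.log, so an alert search can be built directly on it. A hedged sketch (threshold and span are placeholders):

index=_internal source=*metrics.log group=queue
| eval fill_pct = round(current_size_kb / max_size_kb * 100, 2)
| bin _time span=1m
| stats max(fill_pct) as fill_pct by _time, name
| where fill_pct > 75

Scheduling this over a rolling 10-minute window and triggering when every minute exceeds the threshold would approximate the "more than 75% for more than 10 minutes" condition.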

Is there an embedded version of Splunk?


Is there an embedded version of Splunk?

I'm looking for a secure real-time solution for non-networked systems. I'm looking for Splunk with no web interface at all.
I’m looking for Splunk with hash verification of config files when they are loaded, and protection from changes to the configs while loaded.
I’m looking for Splunk with hash verification of all warm and cold dbs – or proof that it is not necessary.
I’m looking for Splunk with x.509 requirements for real-time CLI searches.
I’m looking for Splunk with verified export of dbs to other Splunk instances.

I'm looking for something that can collect logs in real time, from an application suite and its OS. This thing cannot miss entries to any log at any time, notwithstanding the limitations of the application or its OS. This thing cannot allow access to change config files, or access to requested searches, without x.509 authentication or a pre-verified local equivalent. The data indexed by this thing must be retrievable, if readable, with the appropriate x.509 authentication.

Why? Because not all application suites and OSs have centralized logging, and because searching the indexed data is fast, and sometimes milliseconds count.

captcha did not work


FYI: when entering a question (prior to creating an account), a captcha phrase was required. I entered one - no luck - tried another... several more... tried a different browser... gave up and created an account first. I don't think the captcha gadget is working.


Deployment Server - reload configs without restarting splunk


Hi;

We are currently setting up multiple new forwarders, which are getting their configs from the deployment server.

Every time we set up a new app or modify an existing app, we have to restart Splunk.

Is there an easier way to re-initialise the Deployment Server and refresh the /opt/splunk/var/run/tmp/ sub-folders, without having to do a "service restart splunk"?
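
There is a CLI command for this; it reloads the deployment server's configuration without a full restart:

splunk reload deploy-server

Depending on your version, a single server class can also be reloaded selectively with the -class flag.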

Trevor..

Necessary to order db query by rising column?


Is it necessary to include an ORDER BY $rising_column$ in my database tail query? This can be very expensive on a large database that is not indexed on that column (for example, using the row's modified_time rather than the indexed create_time column).

Or is Splunk/DBX smart enough to get a result set back and find the maximum of that column itself, saving it until the next run?
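
For reference, a typical DB Connect tail query looks something like the sketch below (table and column names are hypothetical); the ORDER BY on the rising column is the expensive part in question:

SELECT * FROM audit_log
{{ WHERE $rising_column$ > ? }}
ORDER BY modified_time ASC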

How do I get the Excel Export button to display?


I am using Splunk 6.0.1 and Excel Module 2.0.4 on Windows Server 2008 R2. I have tried Firefox, Internet Explorer, and Chrome, and I do not see the button displayed. I have read the documentation, and there is no indication that manual configuration changes are required.

How do I get the Excel Export button to display?

PersistentValueStoreException when creating DB Input


I am consistently getting the following error when trying to create a Database Input:

ERROR:TailDatabaseMonitor - Configuration Error: Error creating PersistentValueStore type xstream: com.splunk.persistence.PersistentValueStoreException: File not found while trying to store persistent values in XML file

I can run SQL queries from DBQuery, so I know my external database connections are working, and I am not seeing any SQL syntax errors in dbx.log. Until last week, I had been able to create DB Inputs against several databases on different servers. However, now I cannot create a DB Input against any database; I always get the PersistentValueStoreException failure. I have looked through the Event Logs, Splunk's recent DB Connect errors, and log files for anything that would explain this, and I have searched the web for resolution steps. So far I have found no explanation or resolution for this blocking issue.

Any suggestions are appreciated.

Display chart only within the time range where data exists


I want to display a chart that automatically crops to the range where there is data, without displaying any empty time ranges before or after where there is no data at all. How can this be done?
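
One approach that is sometimes used: bin _time explicitly and chart with stats, since stats only emits buckets that actually contain events, so the chart starts and ends at the data. A sketch (search and span are placeholders):

index=myindex
| bin _time span=1h
| stats count by _time

The caveat is that empty buckets in the middle of the range are dropped too, so the line will connect across any gaps.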

Please let me know if more information is required.

javascript scripted input


I have a JavaScript script and want to consume its output in Splunk. What command should I use to make sure Splunk can consume the output of the .js file? Print("output")?
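
Scripted inputs index whatever the script writes to stdout, so in Node.js that would be console.log(...) rather than Print(...). A sketch of the inputs.conf wiring, assuming a small shell wrapper in the app's bin directory that invokes node on the .js file (names and paths are hypothetical):

[script://./bin/run_collector.sh]
interval = 60
sourcetype = js_collector
disabled = false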

Speed up search?


Can anyone make some recommendations for speeding up this search? It might be slow due to the large number of records, around half a million.

index=charlesriver sourcetype=windows_events "An account was successfully logged on."
| bucket span=1d _time
| stats count first(_time) as Date by ADDomain, ADLogon, ADWorkstation, _time
| eval Date=strftime(Date,"%m/%d/%Y")
| eval ADDomain=case(ADDomain=="CRDWELLS","WELLS",ADDomain=="INTENSIVE","RACKSPACE",ADDomain=="CRDRS","CRD",1==1,"OTHER")
| table Date, ADDomain, ADLogon, ADWorkstation, count
| chart sum(count) over Date by ADDomain

rex commands using sed in props.conf on a field


Is there a way to use a rex command with mode=sed against a specific field in a config file (props.conf)? I understand how to use SEDCMD in props.conf, but that pre-processes the data and only appears to operate on _raw (since fields wouldn't be defined yet at that point). Is there a way to do the following via config files, preferably at search time: rex mode=sed field=a "s/this/that/g"? Thanks, -Bob
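
SEDCMD does indeed operate only on _raw. A possible search-time alternative, assuming your version supports calculated fields in props.conf, is an EVAL- stanza using replace(), which substitutes every regex match much like s/this/that/g (sourcetype and field names are illustrative):

[my_sourcetype]
EVAL-a = replace(a, "this", "that")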

Can we rename rows and columns when we use the transpose function?


Hi, can we rename the rows and columns when we use the transpose function?
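
For reference, transpose names its output columns "column", "row 1", "row 2", and so on, so one common workaround is simply renaming them afterwards (the new names here are illustrative):

... | transpose | rename column AS metric, "row 1" AS value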

Extracting fields from an existing Field


I am working on some http_referer analysis from my proxy logs; it seems like an interesting thing to do. I want to do an additional search-time field extraction and rip apart the http_referer field to provide more search functionality from the data.

Can I do something like:

transforms.conf: REGEX = field=http_referrer ^(?<http_referer_scheme>\w+)://

*Yes, I realize my field name isn't the same as the RFC... haha, official misspelling :/

I can build the whole thing out with a single line, and I am sure the hardware can handle the overhead without issue (I hope), but I'd rather have a field anchor of some sort to go off of.

Am I missing something on this?

Afterthoughts: I can do a content match on the "://", as there is nothing else in the logs that should contain that combination of characters in ASCII; any colons in the URI will be in hex or something else.
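
For comparison, the supported way to point a search-time extraction at an existing field rather than _raw is transforms.conf's SOURCE_KEY, wired up from props.conf with a REPORT- stanza (stanza and sourcetype names are illustrative):

transforms.conf:

[referer_scheme]
SOURCE_KEY = http_referer
REGEX = ^(?<http_referer_scheme>\w+)://

props.conf:

[proxy_sourcetype]
REPORT-referer_scheme = referer_scheme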

Thanks.

Search-App Activity-DropDown-System-Activity only viewable by Administrator


Hi, this is likely a noob question.

In v6, in the "Search & Reporting" app, the menu bar contains an "Activity" drop-down (far right, next to "Help"). If we are logged in as Administrator, the "Activity" drop-down contains "System Activity"; otherwise it just contains "Jobs" and "Triggered Alerts".

The "System Activity" sub-menu (which contains essentially a great subset of the deployment-monitor/SoS apps) is only visible to the admin user. What role permission(s) or tweaks do we need to set on other roles (without inheriting admin directly) to make that sub-menu visible, please?

delta counts by keyname


How can I get a delta count by key name, for plotting the delta in a report, when there are multiple keys?

I have a collection that outputs like this via syslog:

TimeStampMsec="1390586680463" QueueName="ad.input" ConsumerCount="1" MessagePendingCount="0" EnqueueCount="9" DequeueCount="9"
TimeStampMsec="1390586680463" QueueName="ldap.input" ConsumerCount="0" MessagePendingCount="0" EnqueueCount="0" DequeueCount="0"
TimeStampMsec="1390586680463" QueueName="foo.bar" ConsumerCount="0" MessagePendingCount="4" EnqueueCount="0" DequeueCount="0"

The DequeueCount could increment in the next log entry for any of these records, as identified by the QueueName key. I would like to set up a report that provides a line graph over time, by QueueName, of the delta on DequeueCount. I cannot figure this out with delta, since I can't seem to get it to take the delta by QueueName; it can only take the delta from the previous record.

I have done this with mvlist, but QueueNames may be added or removed over time, and mvlist feels like it's accessing points via an array whose order I can't guarantee.
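
Since delta has no by clause, the usual workaround is streamstats, which does. A sketch, assuming events are sorted oldest-first and using the field names from the sample above (index and span are placeholders):

index=myindex QueueName=*
| sort 0 _time
| streamstats current=f window=1 last(DequeueCount) as prev_dequeue by QueueName
| eval dequeue_delta = DequeueCount - prev_dequeue
| timechart span=1m max(dequeue_delta) by QueueName

This sidesteps the mvlist ordering problem, because streamstats carries the previous value per QueueName regardless of how many queues come and go.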
