Sideview Utils: Creating a dashboard with Switcher and multiple Button modules, how do I prevent the first button from running all subsearches?
Hello,
I'm trying to create a dashboard with various reports using a Switcher module and multiple Button modules in advanced XML. The intended user flow:
1) Select Report
2) Press Button
3) Enter Text Field data (1 or more text fields)
4) Press Button
5) Run Search
Module order, top to bottom (a rough advanced XML skeleton follows the list):
1) Pulldown
2) Button
3) Switcher
4) Text Field (1 or more)
5) Button
6) Search
7) Pager
8) Table
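The layout, sketched as advanced XML (a skeleton only; layoutPanel attributes and the Sideview module params are omitted, so treat it as illustrative rather than working code):

<!-- illustrative nesting per the module order above; params omitted -->
<module name="Pulldown">
  <module name="Button">
    <module name="Switcher">
      <module name="TextField">
        <module name="Button">
          <module name="Search">
            <module name="Pager">
              <module name="Table"/>
            </module>
          </module>
        </module>
      </module>
    </module>
  </module>
</module>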
The problem I have is that no matter how I order everything, the first Button triggers all of the downstream sub-searches. The only way I can get this NOT to happen is to put a Table module after the Text Fields and use the Table module's drilldown, forcing the user to click a row, which then initiates the sub-search:
1) Pulldown
2) Button
3) Switcher
4) Text Field (1 or more)
5) Table with 1 field named "Question" with value: "Click here to continue"
6) Search
7) Pager
8) Table
Is there a more elegant way of using multiple Button modules without needing a Table module to stop the sub-searches?
↧
Can we create custom folders from Splunk Web for multiple dashboards for different environments?
I have multiple dashboards for different environments. I want to store them in respective folders and create a hierarchy so that individuals can go and have a look. Can we create custom folders from Splunk Web?
↧
Eventgen token increase random number based on time of day
Eventgen is great, but there seems to be one key feature missing.
I want a particular token value to increase or decrease based on the time of day.
Something like this:
token.5.token = \s{1}(468)\s{1}
token.5.replacementType = random
token.5.replacement = integer[100:1000]
token.5.timeMultiplier = { "0": 0.30, "1": 0.10, "2": 0.05, "3": 0.10, "4": 0.15, "5": 0.25, "6": 0.35, "7": 0.50, "8": 0.60, "9": 0.65, "10": 1, "11": 1.2, "12": 2, "13": 2, "14": 1.5, "15": 1, "16": 1, "17": 0.90, "18": 0.95, "19": 1, "20": .8, "21": .8, "22": 0.60, "23": 0.45 }
Here, timeMultiplier (a setting I'm proposing) would take the generated random number and multiply it by the value associated with the event's hour of day.
I'm happy to try to add the code myself; I just need a pointer to where it might go.
More importantly, it's possible that the function handling token replacement doesn't even have access to the timestamp of the event being generated.
Too hard?
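To make the request concrete, here is the intended behaviour as standalone Python (purely illustrative; this is not actual eventgen code, and the function and names are mine):

# Hypothetical sketch of the proposed behaviour, outside eventgen.
import random
from datetime import datetime

# Hour-of-day multipliers (trimmed; the full table is in the config above).
TIME_MULTIPLIER = {0: 0.30, 6: 0.35, 12: 2.0, 18: 0.95, 23: 0.45}

def scaled_random(low, high, event_time):
    # Generate the usual random integer, then scale it by the multiplier
    # for the hour of the event being generated.
    base = random.randint(low, high)
    return int(base * TIME_MULTIPLIER.get(event_time.hour, 1.0))

print(scaled_random(100, 1000, datetime(2015, 8, 7, 12, 0)))  # midday: roughly doubled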
↧
Eventgen not finding timestamp
Another eventgen question.
I have this log template:
Tue Aug 4 16:15:07 2015: APHT SNMP Query Hostname=er1-edge.isp.mysite.net.au Interface=TenGigE0/7/0/6.39 Type=NET State=WA ifHCInOctets=2471207606 ifHCOutOctets=2214577021 bpsIn=@@bpsInSmaller bpsOut=@@bpsOutSmaller ResultCode=0 ResultMsg=SUCCESS
Tue Aug 4 16:15:06 2015: APHT SNMP Query Hostname=er1-edge.isp.mysite.net.au Interface=TenGigE0/0/0/6.31 Type=NET State=WA ifHCInOctets=2447659847 ifHCOutOctets=2276999814 bpsIn=@@bpsInSmaller bpsOut=@@bpsOutSmaller ResultCode=0 ResultMsg=SUCCESS
Tue Aug 4 16:15:06 2015: APHT SNMP Query Hostname=er1-edge.isp.mysite.net.au Interface=TenGigE0/7/0/4.402 Type=NET State=WA ifHCInOctets=1152387976 ifHCOutOctets=294026141067407 bpsIn=@@bpsIn bpsOut=@@bpsOut ResultCode=0 ResultMsg=SUCCESS
This token definition:
token.0.token = (\w{3}\s\w{3}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s\d{4}:)
token.0.replacementType = timestamp
token.0.replacement = %a %b %e %H:%M:%S %Y:
and this error in eventgen:
2015-08-07 10:28:34,595 ERROR Can't find a timestamp (using patterns '['(\\w{3}\\s\\w{3}\\s+\\d{1,2}\\s\\d{2}:\\d{2}:\\d{2}\\s\\d{4}:)']') in this event: 'Tue Aug 4 16:15:06 2015: APHT SNMP Query Hostname=er1-edge.isp.mysite.net.au Interface=TenGigE0/0/0/6.31 Type=NET State=WA ifHCInOctets=2447659847 ifHCOutOctets=2276999814 bpsIn=@@bpsInSmaller bpsOut=@@bpsOutSmaller ResultCode=0 ResultMsg=SUCCESS'
ValueError: Can't find a timestamp (using patterns '['(\\w{3}\\s\\w{3}\\s+\\d{1,2}\\s\\d{2}:\\d{2}:\\d{2}\\s\\d{4}:)']') in this event: 'Tue Aug 4 16:15:06 2015: APHT SNMP Query Hostname=er1-edge.isp.mysite.net.au Interface=TenGigE0/0/0/6.31 Type=NET State=WA ifHCInOctets=2447659847 ifHCOutOctets=2276999814 bpsIn=@@bpsInSmaller bpsOut=@@bpsOutSmaller ResultCode=0 ResultMsg=SUCCESS'
Any ideas why it can't find the timestamp?
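For what it's worth, the pattern itself seems to match the template when tested outside eventgen (standalone Python, illustrative):

# Sanity check of the token regex against one template event line.
import re

pattern = r"(\w{3}\s\w{3}\s+\d{1,2}\s\d{2}:\d{2}:\d{2}\s\d{4}:)"
line = ("Tue Aug 4 16:15:07 2015: APHT SNMP Query "
        "Hostname=er1-edge.isp.mysite.net.au Interface=TenGigE0/7/0/6.39")
print(re.search(pattern, line))  # expect a match object, not None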
↧
I can see Sonicwall syslog data under the Search app and data summary, but why is there no data in the Dell Sonicwall Analytics 2.0 app?
Hi All
I am testing a trial of Splunk Enterprise and doing a little experimenting.
I can see the Sonicwall syslog data in Splunk under "Search and Reporting" and the data summary, so the data is getting into Splunk.
However, there is no data in the Sonicwall app. I created a sonicwall index, but still no luck.
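For reference, a quick check of where the data actually landed would be something like this (illustrative; as I understand it, the app's dashboards typically key off specific Sonicwall sourcetypes):

index=sonicwall | stats count by sourcetype, host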
Any pointers? I'm new at this, so maybe it's something basic.
Thanks!
↧
Is it possible to create a dashboard where you do not have to authenticate in order to log in to Splunk?
Is it possible to create a dashboard that users can view without authenticating to Splunk? I want it to be like an embedded report, but with a dashboard. I have been trying to do this with the Splunk SDK for JavaScript, but have had no luck, and I don't know if this is the correct approach. Any comment is greatly appreciated.
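For context, this is the general shape of what I've been attempting with the JavaScript SDK: a small Node.js proxy that logs in with a service account so the dashboard viewer never authenticates (a rough sketch; the account name and search are placeholders):

// Rough sketch using the splunk-sdk Node.js package.
var splunkjs = require("splunk-sdk");

var service = new splunkjs.Service({
    username: "svc_dashboard",   // hypothetical service account
    password: "changeme",        // placeholder
    scheme: "https",
    host: "localhost",
    port: "8089"
});

service.login(function(err, success) {
    if (err || !success) { throw err || new Error("login failed"); }
    // Run the search server-side and hand only the rows to the page.
    service.oneshotSearch("search index=_internal | head 5", {}, function(err, results) {
        if (err) { throw err; }
        console.log(results.rows);
    });
});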
↧
Splunk App for Enterprise Security: Why is the Incident Review Rule not working?
I am trying to lower the number of incidents being created by modifying the Brute Force Detected rule, but it does not seem to reflect the new search I enter. I want the alert to trigger only if the failed login attempts are greater than or equal to half the successful ones. I have double-checked that the rule now contains the query as I modified it, and have restarted the Splunk service after saving the change. Below is the full search, yet I am still getting results such as `The system 10.225.0.133 has failed authentication 11 times and successfully authenticated 1336 times in the last hour`.
| `datamodel("Authentication","Authentication")` | stats values(Authentication.tag) as tag,count(eval('Authentication.action'=="failure")) as failure,count(eval('Authentication.action'=="success")) as success by Authentication.src | `drop_dm_object_name("Authentication")` | eventstats count(success) as success_count | eventstats count(failure) as failure_count | where failure_count>=(success_count/2) | `settags("access")`
↧
Data Model: Is there a way to modify the transaction command to include extra parameters?
I am trying to provide more business focused users a way of querying our Splunk data, and have been experimenting with Data Models.
For searching raw events, the tool seems quite capable and should meet most needs of the business. The Transaction object, however, seems to lack some of the basics.
When creating a Root Transaction, you are presented with four fields: Group Objects, Group By, Max Pause, and Max Duration. My problem is that there are _many_ more parameters that can be applied to the transaction command, one of them being mvlist. mvlist is useful when analysing web logs with the transaction command, as it helps you extract landing and exit pages (see the sketch below).
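For example, this is the kind of search I would want the model to express (the sourcetype and field names are illustrative):

sourcetype=access_combined | transaction clientip mvlist=uri_path maxpause=15m | eval landing_page=mvindex(uri_path,0), exit_page=mvindex(uri_path,-1)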
Does anyone know of any advanced way to make the data model's transaction include extra parameters? Otherwise, I guess I can use a Root Search, but that seems sub-optimal...
Any suggestions greatly appreciated.
↧
Does the Website Monitoring app support SiteMinder authentication?
Hello,
Would like to know whether this app supports SiteMinder authentication or HTTPS sites. We have some URLs that redirect to a SiteMinder page for authentication; if authentication succeeds, the main page opens. How about managing different authentications for different URLs? Can the password field be encoded?
Thanks, Rithick
↧
Can someone explain why Search A has 0 results, but the refined Search B has multiple results?
Can someone explain to me how Search A can have 0 results while the refined Search B has multiple results? They are exactly the same, except that the second theoretically has a smaller result set to process, right? Index pgbs has ~650,000 events.
Search A (0 results):
index=pgbs | makemv delim="," GtinToAsset | eval GtinCount=mvcount(GtinToAsset) | where GtinCount>1
Search B (188 results):
index=pgbs GtinToAsset="*,*" | makemv delim="," GtinToAsset | eval GtinCount=mvcount(GtinToAsset) | where GtinCount>1
↧
What are the steps to configure receiving logs from McAfee ePO via SNMP?
Hello,
I would like to know the steps to configure receiving logs from McAfee ePO via SNMP, and how to handle the inputs.
Thank you
Regards,
Azhar
↧
How to troubleshoot why I'm not getting any events from ePO with Splunk DB Connect 1 and the Splunk Add-on for McAfee?
My question is similar to this one:
http://answers.splunk.com/answers/179701/splunk-db-connect-why-am-i-getting-an-error-config.html
This saga started when I upgraded to 1.2 back on July 17. At the time I was running Java 1.7. Things got a little crazy, and I never noticed that I had stopped getting data from ePO. Fast forward to this week, when I finally noticed my ePO dashboards weren't working. While troubleshooting, I found that I needed to upgrade Java to 1.8, since DB Connect 1 version 1.2 doesn't work with Java 1.7.
I upgraded to Java 1.8 and removed versions 1.6 and 1.7, so I now have DB Connect 1 version 1.2, and I also upgraded the Splunk Add-on for McAfee to version 2.1.1.
Splunk is installed on CentOS 6.5 and McAfee ePO 4.6.9 is running on a Windows 2008R2 server with MSSQL 2008R2.
The Java bridge is now running just fine.
But here's my problem. I am still not getting any events from ePO.
I've double- and triple-checked that the domain/username and password are entered correctly. I don't have any errors in splunkd.log, dbx.log, or jbridge.log.
However, when I go to the Splunk DB Connect app's Database Info page, which has the Database Tables panel, and click the 'Fetch tables' button, I get nothing back (after, mind you, selecting the correct database in the dropdown above).
Also, when I go to Settings - External Databases - mydatabase and try to re-enter the domain/username and password, I get this error:
Encountered the following error while trying to update: In handler 'databases': Error connecting to database: com.ibm.db2.jcc.am.DisconnectNonTransientConnectionException: [jcc][t4][2043][11550][4.19.26] Exception java.net.ConnectException: Error opening socket to server /x.x.x.x on port 3,700 with message: Connection refused. ERRORCODE=-4499, SQLSTATE=08001
And if I go to Settings - Database Inputs - myinput and (without changing anything) click save, I get this error:
Encountered the following error while trying to update: Splunkd daemon is not responding: (u'Error connecting to /servicesNS/-/dbx/dbx/dbmon/dbmon-tail%3A%252F%252Fmcafee_epo_4_db%252Fta_mcafee_epo_4_input: The read operation timed out',)
And finally, if I go to the app itself, go to Settings - Splunk DB Connect configuration, and click save (with or without changing anything), I get the following error:
Encountered the following error while trying to update: In handler 'localapps': Error while posting to url=/servicesNS/nobody/dbx/dbx/install/java
I'm wondering what else I can do. The two things I know I have not tried are 1) uninstalling and reinstalling DB Connect 1, and 2) installing and using DB Connect 2.
Suggestions?
Thanks.
↧
How can I chart the numbers from these two strings in my data into one graph?
Hello community,
I have two log messages: `.net clearing cache request for user took this many miliseconds:` and `.net clearing cache request for listing took this many miliseconds:`. How can I chart the numbers that follow the `:` into a line graph for both values, so the output shows the latency numbers for user and listing? Here is an example output from the logs (a rough search sketch follows the sample):
8/7/15
1:24:04.913 PM
[2015-08-07 13:24:04.913][INFO ] .net clearing cache request for user took this many miliseconds: 4
source = /var/log/jboss/documentService.log
8/7/15
1:24:04.913 PM
[2015-08-07 13:24:04.913][INFO ] .net clearing cache request for user took this many miliseconds: 4
source = /var/log/jboss/documentService.log
8/7/15
1:24:04.908 PM
[2015-08-07 13:24:04.908][INFO ] .net clearing cache request for listing took this many miliseconds: 7
source = /var/log/jboss/documentService.log
8/7/15
1:24:04.908 PM
[2015-08-07 13:24:04.908][INFO ] .net clearing cache request for listing took this many miliseconds: 7
source = /var/log/jboss/documentService.log
8/7/15
1:22:44.708 PM
[2015-08-07 13:22:44.708][INFO ] .net clearing cache request for user took this many miliseconds: 5
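Roughly, I imagine the answer looks something like this (a sketch only; the rex and field names are my guesses, and the pattern deliberately keeps the log's own spelling of "miliseconds"):

source="/var/log/jboss/documentService.log" "clearing cache request" | rex "request for (?<req_type>\w+) took this many miliseconds: (?<latency>\d+)" | timechart span=1m avg(latency) by req_type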
↧
Why am I unable to download a Universal Forwarder for Windows from splunk.com?
I'm going to the page below and selecting Windows OS. I'm then redirected to the download page, which thanks me for downloading the forwarder, but nothing happens. I've done this countless times in the past with no issues.
I tried three different browsers with no luck. Can anyone verify that this is not working? Can I grab the .msi install file from another box and install it on the other server?
http://www.splunk.com/en_us/download/universal-forwarder.html
↧
Unknown Index Sec_Events Causing Errors
I have a distributed deployment and use the Universal Forwarder on Windows to get event logs and performance information into the indexers. After deploying Splunk_TA_windows to the Windows clients, the event log data comes into the indexers and gets indexed to wineventlog just fine. However, I still get errors like the one below:
"Search peer idx2 has the following message: received event for unconfigured/disabled/deleted index='sec_events' with source='source::Perfmon:Available Memory' host='host::Prod-TS1' sourcetype='sourcetype::Perfmon:Available Memory' (1 missing total) 8/7/2015, 3:40:29 PM"
I checked both my indexers and forwarders, but could not find where the index "sec_events" comes from. If you have any suggestions, please let me know.
Thanks!
↧
In Splunk Cloud, how do I delete data?
For example, I want to run a query:
host="localhost" | delete
But I am given the error: "Error in 'delete' command: You have insufficient privileges to delete events."
↧
Capabilities search and rtsearch
What is the difference between the search and rtsearch capabilities? Doesn't search also cover real-time data?
↧
sourcetype and field extraction
Need your help.
We have the below log format and need to assign a sourcetype and extract the fields. Can you please provide a working regex to include in transforms.conf?
2015-08-07T18:59:32.388226Z pnews-api 1.1.2.1:5681 10.4.0.81:8081 0.000049 0.002743 0.000021 200 200 0 686 "GET https://xyz.xyz.com:443/news-content/ HTTP/1.1" "Mozilla/5.0 (Windows NT 6.1; WOW64; rv:24.0; GomezAgent 3.0) Gecko/20100101 Firefox/24.0" ECDHE-RSA-AES128-SHA TLSv1
fields:
timestamp
elb
client
backend
request_processing_time
backend_processing_time
response_processing_time
elb_status_code
backend_status_code
received_bytes
sent_bytes
request
user_agent
ssl_cipher
ssl_protocol
I have tried this, but somehow it's not working for me (a search-time sanity check follows the conf below):
transforms.conf:
[s3-access-extractions]
REGEX = ^[[nspaces:req_time]]\s++[[nspaces:elb]]\s++[[nspaces:client]]\s++[[sbstring:backend]]\s++[[nspaces:request_processing_time]]\s++[[nspaces:backend_processing_time]]\s++[[nspaces:response_processing_time]]\s++[[nspaces:elb_status_code]]\s++[[nspaces:backend_status_code]]\s++[[nspaces:received_bytes]]\s++[[nspaces:sent_bytes]]\s++[[access-request]](?:\s++[[qstring:useragent]]\s++[[nspaces:ssl_cipher]]\s++[[nspaces:ssl_protocol]])?
props.conf
[s3_access_combined]
REPORT-access = s3-access-extractions
SHOULD_LINEMERGE = false
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%6NZ
EVAL-date_hour = strftime(_time,"%H")
EVAL-date_mday = strftime(_time,"%d")
EVAL-date_minute = strftime(_time,"%M")
EVAL-date_month = strftime(_time,"%m")
EVAL-date_second = strftime(_time,"%S")
EVAL-date_wday = strftime(_time,"%A")
EVAL-date_year = strftime(_time,"%Y")
category = Custom
pulldown_type = true
[rule::s3_access_combined]
sourcetype = s3_access_combined
MORE_THAN_75 = ^\S+ \S+ \S+ \S* ?\[[^\]]+\] "[^"]*" \S+ \S+ \S+ "[^"]*"$
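As an aside, the extraction can be sanity-checked at search time with an explicit regex before committing to transforms.conf (the rex below is my own approximation, not the TA's):

sourcetype=s3_access_combined | rex "^(?<timestamp>\S+) (?<elb>\S+) (?<client>\S+) (?<backend>\S+) (?<request_processing_time>\S+) (?<backend_processing_time>\S+) (?<response_processing_time>\S+) (?<elb_status_code>\S+) (?<backend_status_code>\S+) (?<received_bytes>\S+) (?<sent_bytes>\S+) \"(?<request>[^\"]+)\" \"(?<user_agent>[^\"]+)\" (?<ssl_cipher>\S+) (?<ssl_protocol>\S+)" | table timestamp, elb, client, backend, elb_status_code, request, user_agent, ssl_cipher, ssl_protocol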
↧
DB Connect 2: Best practice for making [mi_inputs]
Long story short: we use DB Connect to pull data. I recently packaged an app for testing purposes to put on another instance, and realized the DB Connect information wasn't being included in the packaged app because the Name Input > App setting was set to Splunk DB Connect. Changing that seems to have fixed my packaging issue, and it works great.
Here is the real question. We recently decided to copy the stanza in inputs.conf and manually create six more, since the DB table was the only thing changing. A rising column is set and used to track changes. When I added the new stanzas, the data was indexing properly, but looking at the inputs I noticed that {app folder}/local/inputs.conf looked great and nothing had changed, while {db connect 2.0}/local/inputs.conf was copying over the [mi_input] name and placing only tail_rising_column_checkpoint_value=value and a disable=1 there - that's it.
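For reference, the shape of what I'm seeing, paraphrased (stanza and key names approximated from memory, not copied verbatim):

# {app folder}/local/inputs.conf -- complete, as expected
[mi_input://my_table_input]
connection = my_connection
query = SELECT * FROM my_table
# ...remaining keys unchanged

# {db connect 2.0}/local/inputs.conf -- only these keys appear
[mi_input://my_table_input]
tail_rising_column_checkpoint_value = <latest value>
disable = 1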
Is this normal, and/or something I need to worry about? Could some sort of permissions issue with the connection be causing this?
Thanks
↧
How to find the difference between the results of two different searches in one search to display in a table panel?
Hi,
I hope you can help me with this,
I have two search results, and I want to get the difference between them in the same search, to display in a table panel.
So..
search events 1:
New apps retrieved | stats values(Count) as Apps_retrieved | Table _time, Apps_retrieved
search events 2:
Apps_Assignment: apps generated in | stats values(Count) as Apps_generated | Table _time, Apps_generated
So, basically what I need is to get:
{(search events 1) - (search events 2)} | timechart span=1h count
or some way to expose this difference in 1h intervals.
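Roughly, the single search I'm picturing (a sketch; the searchmatch() terms mirror the two searches above, and the column names come from the by-clause):

("New apps retrieved") OR ("Apps_Assignment: apps generated in") | eval type=if(searchmatch("New apps retrieved"), "retrieved", "generated") | timechart span=1h count by type | eval difference='retrieved'-'generated'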
Thanks in advance,
↧