Is there a way to accurately determine the volume of events being dropped to the nullQueue?
I have a standard props & transforms setup to drop events for a given source type by a single regex entry.
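For reference, the standard setup being described is the nullQueue routing pattern; a minimal sketch, with a placeholder sourcetype and regex:
**props.conf**
[my_sourcetype]
TRANSFORMS-null = setnull
**transforms.conf**
[setnull]
REGEX = pattern_to_drop
DEST_KEY = queue
FORMAT = nullQueue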
Any help would be appreciated.
Many thanks
Staftly
↧
How to Set an Alert on a Moving Average
I am tracking 500 errors on a daily basis. The average usually remains constant, but sometimes it increases by more than 50%. If this happens, I want Splunk to send an alert.
My current search:
index=vertex7-access RTG_Error="500" earliest=-6d@d latest=@d | timechart count | timewrap d
So if the moving average deviates more than 50% from the average of the past 6 days, I want Splunk to alert me.
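For reference, a sketch of one way to express that condition in SPL, using streamstats for a trailing 6-day average (the window and the 1.5 multiplier encoding the 50% threshold are assumptions from the description above):
index=vertex7-access RTG_Error="500" earliest=-7d@d latest=@d | timechart span=1d count | streamstats window=6 current=f avg(count) as moving_avg | where count > moving_avg * 1.5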
↧
How to share a macro globally within the context of my app configuration in macros.conf?
I have created a macro within an app using the macros.conf file. I can see the macro under Settings -> Advanced Search -> Search Macros if I look under the context of my app. There is an option there under the "Sharing" heading to share it globally. What I am trying to figure out is how to set that from within macros.conf. I have set the macro to be shared globally in the UI, but I don't see macros.conf updated with any additional setting, and I don't see any other *.conf file updated to reflect it. How do I set that property from within the context of my app configuration?
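For context, a sketch under the assumption that sharing is stored in the app's metadata rather than in macros.conf itself; exporting every macro in the app globally from metadata/default.meta might look like:
[macros]
export = system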
↧
Where can I get SystemUpTime to configure Anomalous System Uptime in the Splunk App for PCI Compliance?
Folks,
I am looking to configure the Anomalous System Uptime report within the PCI app. As per the manual: "Relevant data sources for this report include uptime data extracted through scripts from Windows, Unix, or other hosts." Is the Splunk_TA_windows add-on, then, pre-configured to pull SystemUpTime? I cannot seem to find anything related to system uptime within the Windows logs; I tried looking at the data with sourcetype=Win*.
What does "data extracted through scripts" mean? Is this something that the Splunk Admin has to pull via Scripted Inputs?
Thanks!
http://docs.splunk.com/Documentation/PCI/2.1.1/Install/AnomalousSystemUpdate
↧
How to overwrite a default entry in commands.conf from another app
I'd like to push an app that overwrites which script sendemail uses. For instance I pushed:
**email_app**
bin/sendemail2.py
**local/commands.conf**
[sendemail]
filename = sendemail2.py
**metadata/default.meta**
[]
access = read : [ * ], write : [ admin, power ]
export = system
Running **btool** shows that the new config is pulled in, and I've restarted Splunk for good measure; however, the **old** sendemail script is still being used. Is it possible to do it this way? It works if I modify etc/apps/search/local/commands.conf, but I'd rather push an app to do it.
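For anyone reproducing this, a sketch of the btool check that shows which layer wins for the sendemail stanza (--debug prints the contributing conf file next to each setting):
./splunk btool commands list sendemail --debug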
↧
Splunk DB Connect 2: How to get all the rows for a specific timestamp?
I'm currently doing a DB Connect Dump every hour, and the query produces multiple rows. How do I display only the rows from the most recent run as a table?
To explain the question in more detail, here is an example: let's say the current time is 4:38pm, the database was last queried at 4:00pm, and I want to display only the results from that 4:00pm query.
This is what the index could look like:
_time | Count
--- | ---
8/11/2015 4:00pm | 1512456
8/11/2015 4:00pm | 1645241
8/11/2015 4:00pm | 5768575
8/11/2015 4:00pm | 2178565
8/11/2015 4:00pm | 5678688
8/11/2015 4:00pm | 8725768
8/11/2015 3:00pm | 4515351
8/11/2015 3:00pm | 6437567
8/11/2015 3:00pm | 1244795
8/11/2015 3:00pm | 2024553
8/11/2015 3:00pm | 8823452
So the Splunk search should return only the results that occurred at 4:00pm.
After looking around, I thought maybe I could use the return command to get the first timestamp and use that to build a search, but Splunk didn't like that. Below is the query I just mentioned:
eval timestamp=[search index=[redacted] sourcetype=[redacted] | stats first(_time) as "time" | return time] | append [search index=[redacted] sourcetype=[redacted] _time=timestamp] | table [redacted]
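For comparison, a sketch of a subsearch that hands back earliest/latest bounds pinned to the most recent timestamp, reusing the redacted names above (the +1 second on latest is an assumption to keep the bound inclusive):
index=[redacted] sourcetype=[redacted] [ search index=[redacted] sourcetype=[redacted] | stats max(_time) as maxtime | eval earliest=maxtime, latest=maxtime+1 | return earliest latest ] | table _time Count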
↧
The $SPLUNK_HOME/var/spool/splunk/ directory is filling up with stash_new Files
After upgrading to Splunk version 6.2.4, the $SPLUNK_HOME/var/spool/splunk/ directory started filling up with files with the extension .stash_new. This [answers post][1] has been reviewed, but that issue should have been fixed in version 5.0.3. Why is this occurring?
[1]: http://answers.splunk.com/answers/123825/how-to-clean-stash-new-files-from-the-spool-directory.html
↧
If our Splunk 5.0.2 search head is also a deployment server for 100+ universal forwarders, what is the safest way to upgrade it to Splunk 6.2.3?
We are planning to upgrade our search head from 5.0.2 to 6.2.3. The search head is also the deployment server for 100+ universal forwarders. I have read in many forums that this upgrade broke deployment server settings for others. Is there a safe way to do this upgrade?
↧
Error in forwarder : Invalid payload_size=1213486160 received while in parseState=1
I have configured my forwarder on my local machine.
It works fine for my local setup, i.e., forwarding data to an indexer on the local network.
When I add my remote server to the outputs.conf file, I get the above-mentioned error.
I am able to run telnet to <server> 9997.
I am also able to add data to the remote server via the REST API.
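For reference, the outputs.conf in question is along these lines (the group name and port are taken from the logs below; the server URL is a placeholder):
[tcpout]
defaultGroup = group1

[tcpout:group1]
server = <my-server-url>:9997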
The exact logs are:
08-12-2015 06:59:54.703 +0530 INFO TcpOutputProc - found Whitelist forwardedindex.2.whitelist , RE : forwardedindex.2.whitelist
08-12-2015 06:59:54.703 +0530 INFO TcpOutputProc - Initializing connection for non-ssl forwarding to <my-server-url>:9997
08-12-2015 06:59:54.703 +0530 INFO TcpOutputProc - tcpout group group1 using Auto load balanced forwarding
08-12-2015 06:59:54.703 +0530 INFO TcpOutputProc - Group group1 initialized with maxQueueSize=512000 in bytes.
08-12-2015 06:59:54.704 +0530 INFO PipelineComponent - Pipeline merging disabled in default-mode.conf file
08-12-2015 06:59:54.704 +0530 INFO PipelineComponent - Pipeline typing disabled in default-mode.conf file
08-12-2015 06:59:54.704 +0530 INFO PipelineComponent - Pipeline vix disabled in default-mode.conf file
08-12-2015 06:59:54.704 +0530 INFO PipelineComponent - Launching the pipelines.
08-12-2015 06:59:54.707 +0530 INFO loader - Limiting REST HTTP server to 853 sockets
08-12-2015 06:59:54.707 +0530 INFO loader - Limiting REST HTTP server to 682 threads
08-12-2015 06:59:54.803 +0530 INFO TailingProcessor - TailWatcher initializing...
.
.
.
08-12-2015 06:59:56.159 +0530 ERROR TcpOutputFd - Invalid payload_size=1213486160 received while in parseState=1
08-12-2015 07:00:25.242 +0530 ERROR TcpOutputFd - Invalid payload_size=1213486160 received while in parseState=1
08-12-2015 07:00:55.246 +0530 ERROR TcpOutputFd - Invalid payload_size=1213486160 received while in parseState=1
↧
Why can't the Alert Framework - RedAlert app find my shell script to run?
I have tried many different options on the configuration screen, but I always get the same result in the "Interesting Events : Last 24 Hours" panel:
2015-08-12 15:43:47,571 ERROR [ALERTS] action=SHELL, id="LAR_TEST", err="[Error 2] The system cannot find the file specified" >> r:\lar_test_alert.bat
I have tried putting it in `%SPLUNK_HOME%\bin\scripts`, but nothing so far works. This can't be that hard; it should be written down in the app help, but it isn't.
↧
How to upload a file/folder from a remote machine to Splunk using Java?
I want to upload a file/folder from a remote machine to Splunk on a local machine using a Java program. I have created the connection to Splunk, but I couldn't figure out how to do the upload. Splunk requires that the path of the file/folder to upload be local. Also, if possible, is there a way to do it without sharing the directory?
↧
How to filter events based on event's datetime as current date?
Hello! Sup?
I've run into some trouble when comparing datetimes to strings; I know I should convert them.
Logs I've received are in this format:
CAMPAIGN_START_TIME = 00:01:05
CAMPAIGN_END_TIME = 00:06:12
CAMPAIGN_DATE = 04/08/2015
So what I did was create a datetime based on these fields:
| eval CAMPAIGN_COMPLETE_DATE = (CAMPAIGN_DATE+ " " + CAMPAIGN_START_TIME)
The thing is, I need Splunk to filter results based on this date, not the actual _time filter.
So I was going to compare CAMPAIGN_COMPLETE_DATE to "Today":
| eval Today = strftime(now(), "%d/%m/%Y %H:%M:%S")
But I'm having some issues due to the string-to-datetime comparison.
Does anyone know how I can solve this?
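For reference, one direction that might work is parsing both sides into epoch time with strptime and comparing at day granularity; a sketch assuming the field formats shown above:
| eval campaign_epoch = strptime(CAMPAIGN_DATE . " " . CAMPAIGN_START_TIME, "%d/%m/%Y %H:%M:%S") | where strftime(campaign_epoch, "%d/%m/%Y") == strftime(now(), "%d/%m/%Y")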
Thanks in advance!
**- Vinicius Guerrero**
↧
Pass starttime/endtime results to another search
I'm trying to do something similar to what I have below, where I gather the latest transaction for when Splunk was shut down, find the start/end values, and then run a search based on what happened while my search head was down. How do I use the results from one search in another?
**Example**
index=_audit host=searchhead-host-name* action=splunk* | transaction maxevents=2 startswith="action=SplunkShuttingDown" endswith="action=SplunkStarting" | head 1 | eval starttime=strftime(_time, "%m/%d/%Y:%H:%M:%S") | eval endtime=strftime(_time+duration, "%m/%d/%Y:%H:%M:%S") | search index=* earliest=$starttime$ latest=$endtime$
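For reference, $starttime$/$endtime$ tokens only resolve in dashboards; a sketch of the subsearch equivalent, which hands earliest/latest back to the outer search as time modifiers:
index=* [ search index=_audit host=searchhead-host-name* action=splunk* | transaction maxevents=2 startswith="action=SplunkShuttingDown" endswith="action=SplunkStarting" | head 1 | eval earliest=_time, latest=_time+duration | return earliest latest ]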
↧
How to write one search to find a percentage using fields from two reports with different statistics in the same summary index?
Hi guys,
I have a summary index that contains two different reports, and these reports have statistical data with different parameters.
One report (`report=MobilePJTotalClientesUnicos23hs`) summarizes unique clients `clientes_unicos`, and the other report (`report=ClientesImpactadosPorTransacaoMobilePJ_23h`) summarizes impacted clients `ClientesImpactados` by program `programa`.
So I want a search that calculates the percentage of impacted clients by program, which should be as simple as `eval percentual=ClientesImpactados/clientes_unicos` per program, but I can't figure out how to do that because one report's statistics are by program and the other's are not.
I'm posting an example of the search I thought would do the job, but the result I get is the image below.
index=sum_internet report=ClientesImpactadosPorTransacaoMobilePJ_23h OR report=MobilePJTotalClientesUnicos23hs | eval percentual=ClientesImpactados/clientes_unicos | table programa percentual
![result_exemple][1]
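For what it's worth, a sketch of one way to spread the single clientes_unicos total across the per-program rows with eventstats (this assumes the unique-clients report yields one total per period):
index=sum_internet (report=ClientesImpactadosPorTransacaoMobilePJ_23h OR report=MobilePJTotalClientesUnicos23hs) | eventstats max(clientes_unicos) as clientes_unicos | search report=ClientesImpactadosPorTransacaoMobilePJ_23h | eval percentual = ClientesImpactados / clientes_unicos | table programa percentual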
Sorry about my English; I hope someone can help me with this.
Rgs.,
[1]: /storage/temp/52211-capturar.png
↧
How do I stop a file from being segmented?
This is the beginning of the file, line numbers for clarity:
1. Log File for: BatchJobOutput_20150801-0139_13516_MonthlyBatchJob_SAMM191.log
2. Started: Sat Aug 1 01:39:22 CDT 2015
3. Using path to access.properties: /opt/WebSphere/AppServer/lib/app
4. --------------------------
5. /usr/java64/jdk1.6.0_43/bin:/bin:/usr/bin:/opt/gnome/bin:/usr/X11/bin:/home/cd7543/scripts
This is the end of the file:
159. Ended: Sat Aug 1 01:40:55 CDT 2015
160.
There are many date references between these two sections, and Splunk splits this one file into multiple events, which are then displayed to the end user segmented and in reverse order.
How do I get Splunk to index this as one contiguous file?
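A props.conf sketch along those lines, assuming these logs get their own sourcetype (the stanza name and limits are placeholders; the header regex comes from line 1 of the file above):
[batch_job_log]
SHOULD_LINEMERGE = true
BREAK_ONLY_BEFORE = ^Log File for:
MAX_EVENTS = 10000
TRUNCATE = 0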
↧
How to modify Simple XML table headers via JavaScript
I have a simple table with a custom renderer, a la:
table.getVisualization(function(tableView) {
    tableView.table.addCellRenderer(new CustomRangeRenderer());
});
where my CustomRangeRenderer modifies the class of certain cells:
var CustomRangeRenderer = TableView.BaseCellRenderer.extend({
    render: function($td, cell) {
        var value = cell.value;
        if (value == '0') {
            $td.addClass("zero");
        }
    }
});
What I need to do, however, is modify the column **headers** programmatically. For example, I want the column header to be red if all the values in the table for that column are zero. I can't find a way to modify this given just the td/cell. I thought maybe it has to be done with something other than a cell renderer, but I can't find anything in the Splunk JS docs that is appropriate. By the way, using SideView is not an option.
Any suggestions?
↧
Are there best practices for controlling my daily License quota used per Pool?
I am a newbie and just getting started. I'm only pulling local data from the Splunk server. I do have a few apps installed for Active Directory and utilization monitoring. I have a 5GB license limit and my daily usage is already at 2.267GB. What happens when I set up forwarders for at least 60 additional servers? Is my license big enough? Is there best practice documentation for newbies?
↧
Is it possible to create a field alias based on eventtypes in props.conf?
Hi,
I saw conflicting instructions in the props.conf documentation:
http://docs.splunk.com/Documentation/Splunk/6.2.4/admin/Propsconf
# The following example creates an extracted field for sourcetype access_combined
# if tied to a stanza in transforms.conf.
[eventtype::my_custom_eventtype]
REPORT-baz = foobaz
It says "sourcetype access_combined", but in the example it's an eventtype.
Can I actually do something based on eventtype? Something like this in props.conf:
[eventtype::my_event_A]
FIELDALIAS-my_event_A_alias_C = field_in_A as general_field_C
[eventtype::my_event_B]
FIELDALIAS-my_event_B_alias_C = field_in_B as general_field_C
My expectation is that even if the field "field_in_A" exists in my_event_B, I can still ignore it and alias field_in_B to general_field_C.
Log example:
my_event_A: this is my field_in_A
my_event_B: this is my field_in_A but I have field_in_B
↧
How to troubleshoot why a deployment client is unable to phone home to the deployment server?
We are unable to get the deployment client to show in the deployment console. Other Windows/Linux servers are connected and apps are being distributed fine.
Deployment Client:
- Windows 2012 x64
- Splunk version 6.2.4
Deployment server:
- oel 6 x64
- splunk version 6.2.0
We have validated that the client can telnet to the deployment server on the correct port. We were able to see the TCP transaction on both sides and enabled debug logging on the client and deployment server. Deployment server has no entry regarding the client.
**Client splunkd.log**
08-12-2015 16:33:03.791 -0700 DEBUG DC:PhonehomeThread - PhonehomeThread::main top-of-loop, DC state=Initial
08-12-2015 16:33:03.791 -0700 DEBUG DC:PhonehomeThread - Attempting handshake
08-12-2015 16:33:03.791 -0700 DEBUG DC:DeploymentClient - Sending message to tenantService/handshake
08-12-2015 16:33:03.791 -0700 INFO DC:DeploymentClient - channel=tenantService/handshake Will retry sending handshake message to DS; err=not_connected
08-12-2015 16:33:03.791 -0700 DEBUG DC:PhonehomeThread - Handshake not yet finished; will retry every 12.0sec
08-12-2015 16:33:03.791 -0700 DEBUG DC:PhonehomeThread - Phonehome thread will wait for 12.0sec (1)
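When chasing the err=not_connected retries above, one thing worth double-checking is the client-side target stanza; a minimal deploymentclient.conf sketch (the URI is a placeholder), verifiable with ./splunk btool deploymentclient list --debug:
[deployment-client]

[target-broker:deploymentServer]
targetUri = deploy-server.example.com:8089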
↧
[SplunkJS] SavedSearchManager - How to pass a token into the search query?
I have a SavedSearchManager defined in my Django template that I then reference from SplunkJS. It has been running fine, but now I want to extend it by passing a token into the search. I have been searching around for documentation on how to do this, but all I find is how to do it for SearchManager which has the search query inline. Anyone have any guidance?
http://docs.splunk.com/DocumentationStatic/WebFramework/1.1/compref_savedsearchmanager.html
↧