Channel: Latest Questions on Splunk Answers

Splunk 6.1 upgrade - "Splunk Installer was unable to set the CACLS on the Splunk files. Exitcode='13'"


I upgraded from 6.0 to 6.1 this morning and received the following message in a window titled "Force ACLs":

Splunk Installer was unable to set the CACLS on the Splunk files.  Exitcode='13'

Then it lists the Splunk domain user I specified earlier in the installation. I was logged in as a domain administrator when performing the installation, and the domain account I specified for Splunk was set up following the guides here and here. I haven't noticed any adverse effects so far. Has anyone else experienced this error or seen any consequences from it?


Cisco Security Suite blank dashboard - What am I missing?


Hello all,

I am new to Splunk. I am trying to set up some apps, Cisco Security Suite being one. I am having the same "blank dashboard" issue as others have posted: all panels show "No results found." I am having exactly the same problem with another security-related Splunk app, and it is very frustrating.

I am running Splunk 6.0 on Windows Server 2012. There is only one Splunk server in the landscape. I have multiple ASA firewalls sending syslog to Splunk via UDP 514. I have a custom index receiving syslog data from all network devices, and it is searchable in the Splunk UI. I have confirmed I can see results from ASA. I have installed the TA for ASAs. I have also followed the instructions regarding the TA & SA file & folder configuration, but still nothing.
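
For reference, this is the kind of sanity check I have been running to confirm the ASA events are present and classified; the index name below is a stand-in for my custom index, and cisco:asa is the sourcetype I understand the ASA TA is supposed to apply:

index=my_network_index sourcetype=cisco:asa | head 5

That does return events, which makes the blank panels even more confusing, since I gather the app's searches key off the TA's sourcetypes and tags rather than off any particular index.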

I am not sure what else to do at this point. Any assistance would be greatly appreciated.

Thank you, Drew

Use a Function without Running a Search


Hey guys, is it possible to run an eval function in the search bar without piping a search to it?

In an attempt to test the urldecode function, I'm trying to run the following on the search bar:

| eval x=urldecode("http%3A%2F%2Fwww.blah.com%2Fsomething%2Fsomething-something") | search x!=""

I'm just trying to see what the urldecode function will do with that string, but I would also like to be able to do this kind of thing with other functions in the future.
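
For what it's worth, one workaround I have seen suggested (I am not sure it is the intended approach) is to generate a single empty result with stats and then run eval against that:

| stats count | eval x=urldecode("http%3A%2F%2Fwww.blah.com%2Fsomething%2Fsomething-something") | table x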

Any thoughts?

Thanks!

How to forward data from a file


For example, I have two hosts. The data lives on host1, and host2 needs to receive it. Can I send the data to host2 over a socket, via syslog, or some other way? I do not want to use the Splunk forwarder, because I have lots of host1-type machines, and installing a forwarder on every one of them is too much.
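
If a plain network input would do the job, I assume host2 (running full Splunk) could listen for syslog directly with something like the following in inputs.conf; the port and sourcetype here are just examples. Each host1 would then only need its native syslog daemon pointed at host2, with no forwarder installed:

# inputs.conf on host2 (example values)
[udp://514]
sourcetype = syslog
disabled = 0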

How to use wildcards in [monitor://...] along with whitelist pattern?


This should be an easy one...

This works great

[monitor:///opt/tcserver/server/appname/logs]
whitelist = \.log$|\.log4j\.log\.
sourcetype = log4j

This does not

[monitor:///opt/tcserver/server/*/logs]
whitelist = \.log$|\.log4j\.log\.
sourcetype = log4j

I need to be able to use the wild card because I have an arbitrary number of wars running under .../server/... and want to index the logs from all of them.

I suspect this is due to an interplay between Splunk converting the wildcard in the monitor path into an implicit whitelist regex and the explicit whitelist line itself, but I can't quite figure it out.
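
One thing I am considering trying, on the theory that an explicit whitelist replaces the implicit one Splunk builds from the wildcard: monitor the parent directory instead and move the whole path pattern into the whitelist, since the whitelist is matched against the full path. This is an untested sketch:

[monitor:///opt/tcserver/server]
whitelist = /opt/tcserver/server/[^/]+/logs/[^/]*(\.log$|\.log4j\.log\.)
sourcetype = log4j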

How to configure a universal forwarder to monitor a Windows compressed directory with logs?


I am attempting to monitor a directory that utilizes compression enabled by the Windows folder options (Properties -> General -> Advanced -> Compress contents to save space). Splunk UF is returning the following error for each file identified in the directory structure:

06-18-2014 17:16:46.658 -0400 ERROR TailingProcessor - Ignoring path="X:\LOGS\W3SVCxxxxxxx\ex120316.log" due to: Cannot checksum file due to unknown charset="AUTO".f

Several other servers of similar configuration that do not have system compression enabled work fine. Does anyone know how I should configure UF to deal with this?
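
Would forcing the character set be a reasonable workaround? Since the error complains about charset="AUTO", I was thinking of pinning CHARSET in props.conf on the forwarder for these sources, assuming the IIS logs really are plain ASCII/UTF-8 (untested sketch):

# props.conf on the forwarder (sketch)
[source::X:\\LOGS\\W3SVC*\\*.log]
CHARSET = UTF-8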

How is "Top URL Catogory" on main Palo Alto app Dashboard populated? Mine shows "and" as the Top URL Category


Hi.

Recently, the "Top URL Category" panel on my main Palo Alto app dashboard has been showing "and" (yes, literally the word "and") as the top URL category. This was not always the case, but I am not sure when it changed. The folks who manage the actual PAN firewalls say they did not change anything.

So I am wondering how this text is populated, and where I might start troubleshooting. It seems to be a parsing issue, but again, I am not sure. This value used to be something "expected", so I do not know what changed.
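
For troubleshooting, I was going to start by looking at the raw distribution behind the panel with something like the search below, but I am only guessing at the index, sourcetype, and field name the app uses (pan_logs, pan_threat, category), so treat those as placeholders:

index=pan_logs sourcetype=pan_threat | top limit=20 category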

Any ideas/suggestions are appreciated.

Passing down values of Pulldown selectors after clicking Button/submitbutton


I am using five checkbox Pulldowns in my view, and I use their selected values to populate a table and a chart. The problem is that as soon as I select or deselect values in a Pulldown, the table and chart repopulate immediately based on the selection. I want them to repopulate only after I click the Button/submitbutton.
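
In case a simple XML equivalent is easier to reason about: my (unverified) understanding is that in simple XML this deferred behavior comes from searchWhenChanged="false" on each input plus a submit button on the fieldset, roughly like the sketch below, though I have not confirmed how it maps onto the Pulldown modules I am using:

<fieldset submitButton="true" autoRun="false">
  <input type="dropdown" token="env" searchWhenChanged="false">
    <label>Environment</label>
    <choice value="prod">prod</choice>
    <choice value="test">test</choice>
  </input>
</fieldset>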


XML indexing using TRUNCATE and crcSalt together fails


Having a hard time getting this right: if TRUNCATE = 0 or crcSalt = <SOURCE> is used by itself, it works, but not together. Does inputs.conf disable the props.conf settings in some way?

------- My setup -------

props.conf (this is on the Splunk server):

[source::C:\\var\\xml\\*]
MAX_EVENTS = 100000
NO_BINARY_CHECK = 1
TRUNCATE = 0
SHOULD_LINEMERGE = true
KV_MODE = xml

inputs.conf (this is on my forwarder):

[monitor://c:\var\xml\*.xml]
disabled = 0
followTail = 0
crcSalt = <SOURCE>
initCrcLength = 2048
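
My understanding is that inputs.conf settings such as crcSalt act at the input layer on the forwarder, while TRUNCATE and SHOULD_LINEMERGE act at parse time on the indexer, so one should not be able to disable the other, which makes this more confusing. One check I still want to do, in case the props stanza simply is not winning after all the config layers merge: dump the effective config on the indexer and see which file each setting actually comes from (btool's --debug flag prints the source file on every line):

splunk btool props list --debug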

Multiline Regex trouble - Can't get fields to be associated with keys


Hi,

I am having great trouble with a multiline event I am trying to analyse, and with the regex required to extract fields from it.

An example of an event (SQL query output):

---- Identification ----
Date :  Mon Apr 28 19:00:00 DFT 2014
Hostname :  MYHOST01
Script : RQ_TB.ksh
Version courante : 1.0
------------------------

   Database Connection Information

 Database server        = DB2/AIX64 9.5.3
 SQL authorization ID   = MYF0001
 Local database alias   = MYF0002

-- 
-- SUMMARY OF USER TABLE DATA SIZES
--

select CURRENT SERVER as DBNAME, CURRENT TIMESTAMP as CURRENT_TIMESTAMP, S.USER_DATA_L_SIZE_KB, DEC( (S.USER_DATA_L_SIZE_KB/1073741824.0), 31, 11 ) as USER_DATA_L_SIZE_TB, COALESCE( CEIL( DEC( (S.USER_DATA_L_SIZE_KB/1073741824.0), 31, 11 ) ), 1 ) as USER_DATA_L_ENTITLEMENT_REQ_TB from ( select ( sum(A.DATA_OBJECT_L_SIZE) + sum(A.LONG_OBJECT_L_SIZE) + sum(A.LOB_OBJECT_L_SIZE) + sum(XML_OBJECT_L_SIZE) ) as USER_DATA_L_SIZE_KB from SYSIBMADM.ADMINTABINFO as A, ( select TABSCHEMA, TABNAME, OWNER, OWNERTYPE, TYPE, STATUS, TABLEID, TBSPACEID from SYSCAT.TABLES where OWNERTYPE = 'U' and TYPE IN ('G', 'H', 'L', 'S', 'T', 'U')  ) as T where A.TABNAME = T.TABNAME and A.TABSCHEMA = T.TABSCHEMA ) as S

DBNAME             CURRENT_TIMESTAMP          USER_DATA_L_SIZE_KB  USER_DATA_L_SIZE_TB               USER_DATA_L_ENTITLEMENT_REQ_TB   
------------------ -------------------------- -------------------- --------------------------------- ---------------------------------
MYF0002           2014-04-28-19.00.01.768168            110325200                     0.10274834930                                1.

  1 record(s) selected.

-- 
-- BREAKDOWN OF USER TABLE DATA SIZES
--

select rtrim(A.TABSCHEMA) SCHEMA, rtrim(A.TABNAME) TABLENAME, sum(A.DATA_OBJECT_L_SIZE) as DATA_OBJECT_L_SIZE_KB, sum(A.LONG_OBJECT_L_SIZE) as LONG_OBJECT_L_SIZE_KB, sum(A.LOB_OBJECT_L_SIZE) as LOB_OBJECT_L_SIZE_KB, sum(XML_OBJECT_L_SIZE) as XML_OBJECT_L_SIZE_KB, ( sum(A.DATA_OBJECT_L_SIZE) + sum(A.LONG_OBJECT_L_SIZE) + sum(A.LOB_OBJECT_L_SIZE) + sum(XML_OBJECT_L_SIZE) ) as USER_DATA_L_SIZE_KB, T.COMPRESSION, T.PCTPAGESSAVED as Taux_de_compression from SYSIBMADM.ADMINTABINFO as A, ( select TABSCHEMA, TABNAME, OWNER, OWNERTYPE, TYPE, STATUS, COMPRESSION, TABLEID, TBSPACEID, PCTPAGESSAVED from SYSCAT.TABLES where OWNERTYPE = 'U' and TYPE IN ('G', 'H', 'L', 'S', 'T', 'U')  ) as T where A.TABNAME = T.TABNAME and A.TABSCHEMA = T.TABSCHEMA group by A.TABSCHEMA, A.TABNAME, T.COMPRESSION, T.PCTPAGESSAVED order by A.TABSCHEMA, A.TABNAME

SCHEMA                                                                                                                           TABLENAME                                                                                                                        DATA_OBJECT_L_SIZE_KB LONG_OBJECT_L_SIZE_KB LOB_OBJECT_L_SIZE_KB XML_OBJECT_L_SIZE_KB USER_DATA_L_SIZE_KB  COMPRESSION TAUX_DE_COMPRESSION
-------------------------------------------------------------------------------------------------------------------------------- -------------------------------------------------------------------------------------------------------------------------------- --------------------- --------------------- -------------------- -------------------- -------------------- ----------- -------------------
SCHEMA01                                                                                                                           ADVISE_INDEX                                                                                                                                       128                     0                  144                    0                  272 N                            -1
SCHEMA01                                                                                                                           ADVISE_INSTANCE                                                                                                                                    128                     0                    0                    0                  128 N                            -1
SCHEMA01                                                                                                                           ADVISE_MQT                                                                                                                                         128                     0                  144                    0                  272 N                            -1
SCHEMA01                                                                                                                           ADVISE_PARTITION                                                                                                                                   128                     0                  144                    0                  272 N                            -1
SCHEMA01                                                                                                                           ADVISE_TABLE                                                                                                                                       128                     0                  144                    0                  272 N                            -1
SCHEMA01                                                                                                                           ADVISE_WORKLOAD                                                                                                                                    128                     0                  144                    0                  272 N                            -1
SCHEMA01                                                                                                                           EXPLAIN_ARGUMENT                                                                                                                                   128                     0                  144                    0                  272 N                            -1
SCHEMA01                                                                                                                           EXPLAIN_DIAGNOSTIC                                                                                                                                 128                     0                    0                    0                  128 N                            -1
SCHEMA01                                                                                                                           EXPLAIN_DIAGNOSTIC_DATA                                                                                                                            128                     0                  144                    0                  272 N                            -1
SCHEMA01                                                                                                                           EXPLAIN_INSTANCE                                                                                                                                   128                     0                    0                    0                  128 N                            -1
SCHEMA01                                                                                                                           EXPLAIN_OBJECT                                                                                                                                     128                     0                    0                    0                  128 N                            -1
SCHEMA01                                                                                                                           EXPLAIN_OPERATOR                                                                                                                                   128                     0                    0                    0                  128 N                            -1
SCHEMA01                                                                                                                           EXPLAIN_PREDICATE                                                                                                                                  128                     0                  144                    0                  272 N                            -1
SCHEMA01                                                                                                                           EXPLAIN_STATEMENT                                                                                                                                  128                     0                  144                    0                  272 N                            -1
SCHEMA01                                                                                                                           EXPLAIN_STREAM                                                                                                                                     128                     0                  144                    0                  272 N                            -1
SCHEMA02                                                                                                                             TE_DEC_ENC_REM                                                                                                                                14523392                     0                    0                    0             14523392 N                             0

Because I need to be able to extract all the data within the event, I am indexing it as multiline events with the following config:

props.conf:

[db2compress]

# your settings
MAX_EVENTS=100000
NO_BINARY_CHECK=1
TIME_FORMAT=%a %b %d %H:%M:%S DFT %Y
TIME_PREFIX=Date :

REPORT-extract_regfields = regfields

EXTRACT-hostname = (?i)Hostname :  (?P<HOSTNAME>\w+)
EXTRACT-database_server = (?i)Database server        = (?P<DATABASE_SERVER>[0-9a-zA-Z/]+)
EXTRACT-sql_auth_id = (?i)SQL authorization ID   = (?P<SQL_AUTH_ID>\w+)
EXTRACT-database_alias = (?i)Local database alias   = (?P<DATABASE_ALIAS>\w+)
EXTRACT-entitlement = (?im)^\w+\s+\d+\-\d+\-\d+\-\d+\.\d+\.\d+\.\d+\s+\d+\s+\d+\.\d+\s+(?P<ENTITLEMENT>[^\.]+)
EXTRACT-size_KB = (?im)^(?:[^\.\n]*\.){3}\d+\s+(?P<SIZE_KB>[^ ]+)
EXTRACT-size_TB = (?im)^\w+\s+\d+\-\d+\-\d+\-\d+\.\d+\.\d+\.\d+\s+\d+\s+(?P<SIZE_TB>[^ ]+)

transforms.conf:

[regfields]
REGEX = (?im)^(?P<SCHEMA>\w+)\s+(?P<TABLENAME>\w+)\s+(?P<DATA_OBJECT_L_SIZE_KB>\d+)\s+(?P<LONG_OBJECT_L_SIZE_KB>\d+)\s+(?P<LOB_OBJECT_L_SIZE_KB>\d+)\s+(?P<XML_OBJECT_L_SIZE_KB>\d+)\s+(?P<USER_DATA_L_SIZE_KB>\d+)\s+(?P<COMPRESSION>\w+)\s+(?P<TAUX_DE_COMPRESSION>[\-]*\d+)
MV_ADD = True

Events are being indexed successfully as multiline events, and everything seems OK at first glance.

BUT, the regex used to extract fields from the schema detail:

(?im)^(?P<SCHEMA>\w+)\s+(?P<TABLENAME>\w+)\s+(?P<DATA_OBJECT_L_SIZE_KB>\d+)\s+(?P<LONG_OBJECT_L_SIZE_KB>\d+)\s+(?P<LOB_OBJECT_L_SIZE_KB>\d+)\s+(?P<XML_OBJECT_L_SIZE_KB>\d+)\s+(?P<USER_DATA_L_SIZE_KB>\d+)\s+(?P<COMPRESSION>\w+)\s+(?P<TAUX_DE_COMPRESSION>[\-]*\d+)

does not seem to do the job: when I try some simple stats in Splunk, I get impossible results (even with a simple stats count(TABLENAME) by SCHEMA).

When I check in detail with, for example, a "stats values(DATA_OBJECT_L_SIZE_KB) by SCHEMA,TABLENAME", I see that the values contain every value from the whole event, not just the value associated with that SCHEMA/TABLENAME pair as they should.

When I run a "table SCHEMA,TABLENAME,DATA_OBJECT_L_SIZE_KB", the data is correct, but even a stats after the table command reports bad results.

So I think the issue is in my regex, but this is driving me crazy and I cannot work out why.

If I remove the multiline mode of the regex with a command such as:

index=db2compress sourcetype=db2compress
| rex max_match=1 "(?m-s)^(?P<SCHEMA>\w+)\s+(?P<TABLENAME>\w+)\s+(?P<DATA_OBJECT_L_SIZE_KB>\d+)\s+(?P<LONG_OBJECT_L_SIZE_KB>\d+)\s+(?P<LOB_OBJECT_L_SIZE_KB>\d+)\s+(?P<XML_OBJECT_L_SIZE_KB>\d+)\s+(?P<USER_DATA_L_SIZE_KB>\d+)\s+(?P<COMPRESSION>\w+)\s+(?P<TAUX_DE_COMPRESSION>[\-]*\d+)"

then of course I only get the first match, so multiline mode is required. I suspect it is something to do with line returns, but everything I have tried has failed.
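
The only other idea I have, in case per-row association through a single multiline REPORT is simply not possible (my understanding is that multivalue fields extracted this way do not keep any row association, which would explain the stats behavior): capture each row whole, mvexpand it, then re-extract per row. Untested sketch:

index=db2compress sourcetype=db2compress
| rex max_match=0 "(?m)^(?<row>\w+\s+\w+\s+\d+\s+\d+\s+\d+\s+\d+\s+\d+\s+\w+\s+-?\d+)"
| mvexpand row
| rex field=row "^(?<SCHEMA>\w+)\s+(?<TABLENAME>\w+)\s+(?<DATA_OBJECT_L_SIZE_KB>\d+)\s+(?<LONG_OBJECT_L_SIZE_KB>\d+)\s+(?<LOB_OBJECT_L_SIZE_KB>\d+)\s+(?<XML_OBJECT_L_SIZE_KB>\d+)\s+(?<USER_DATA_L_SIZE_KB>\d+)\s+(?<COMPRESSION>\w+)\s+(?<TAUX_DE_COMPRESSION>-?\d+)"
| stats values(DATA_OBJECT_L_SIZE_KB) by SCHEMA, TABLENAME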

Thank you VERY VERY much for any help!

Indexing a CSV data file with more than one set of data


Hi All,

Just curious about the best method to index a CSV file with multiple sets of data inside.

The basic format of the whole file is:

I,DataSet1_FieldName1,DataSet1_FieldName2,DataSet1_FieldName3
D,this,54,fred
D,this,87,barry
I,DataSet2_FieldName1,DataSet2_FieldName2,DataSet2_FieldName3
D,784,moreInfo,thatData
D,5443,moreInfo2,thisData
D,524,moreInfo2,theOtherData
I,DataSet3_FieldName1,DataSet3_FieldName2
D,Wow,SoMuchData
D,Really,MoreData

and on and on it goes with about 5 sets of data.

As you can see, the first field of each row determines whether the row is a header/index row or a data row:

I = Index/Header row
D = Data row

We have successfully been able to retrieve only ONE set of data by using the following:


inputs.conf


[monitor:///tmp/csvProvider]
disabled = false
followTail = 0
host = csvProvider
sourcetype = public_data


props.conf


[public_data]
KV_MODE = none
SHOULD_LINEMERGE = false
TRANSFORMS-filterprices = setnull,getprices
REPORT-extracts = csv_extract


transforms.conf


[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

[getprices]
REGEX = ^D,DISPATCH,PRICE,(.*)
DEST_KEY = queue
FORMAT = indexQueue

[csv_extract]
DELIMS = ","
FIELDS = "I","DISPATCH","PRICE","THREE","SETTLEMENTDATE","RUNNO","REGIONID","DISPATCHINTERVAL","INTERVENTION","RRP"

Does anyone know the best method to get all sets of data out?
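
One idea I have been wondering about (untested, and it would need reconciling with the existing setnull/getprices routing): override the sourcetype per line so each dataset gets its own sourcetype and therefore its own DELIMS/FIELDS extraction. The regex here keys on content I am assuming is unique to dataset 2:

# transforms.conf (sketch)
[route_dataset2]
REGEX = ^D,\d+,moreInfo
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::public_data_ds2

# props.conf (sketch)
[public_data]
TRANSFORMS-route = route_dataset2

Each derived sourcetype would then carry its own REPORT stanza with the field list for that dataset.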

Secondly, the files we retrieve are actually .zip files. Currently we extract them before handing them off to Splunk. Is there a way to get Splunk to extract and process these files in the same manner as above? We previously tried to have Splunk process the zip files directly, but it did not seem to handle them very well at all.

Thanks

License pool daily volume allocation not working, running Splunk 6.0.4 on Red Hat Linux


Hi all, I am running Splunk 6.0.4 on Red Hat Linux. I installed an enterprise license and the auto-generated pool was created. However, when trying to set the daily volume allocation, it appears the pool will not go over 0 MB. I have a 5 GB license and have tried configuring the pool in MB and GB, but it continues to show as 0 MB. Does anyone know what's wrong?

Other info: the Splunk Enterprise stack shows an "Effective daily volume" of "0 MB". The enterprise license "Status" shows "FROM_THE_FUTURE". "auto_generated_pool_enterprise" shows volume used today as "57 MB / 0 MB".

Thanks for any help.

DB Connect Java Bridge Server Connection Refused


I am getting the following errors when using DB Connect. I cannot get the Java bridge server running, although I have been through many of the steps outlined in other Q&As on this forum.

The following is from /mnt/hunk/home/splunk/hunk/var/log/splunk/jbridge.log on host hadoop-utility01.bdn.lab.xcal.tv:

2014-06-18 15:29:44,009 INFO CONFIG: log.error_maxsize (int): 25000000
2014-06-18 15:29:44,008 INFO CONFIG: log.error_maxfiles (int): 5
2014-06-18 15:29:43,981 INFO CONFIG: error_page.default (function): <function handleerror at 0x2045c80>
SplunkdConnectionException: Splunkd daemon is not responding: ('Error connecting to /services/messages: [Errno 111] Connection refused',)
raise splunk.SplunkdConnectionException, 'Error connecting to %s: %s' % (path, str(e))
2014-06-18 15:28:55,995 ERROR Splunkd daemon is not responding: ('Error connecting to /services/messages: [Errno 111] Connection refused',)
2014-06-18 15:28:55,994 ERROR Socket error communicating with splunkd (error=[Errno 111] Connection refused), path = /services/messages

Any help is much appreciated!

Does the summary index data expire?


Does summary indexed data expire? If yes, how long can we keep that data, and where can we find this setting in the conf files?
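
For context, my assumption is that a summary index is an ordinary index, so its data ages out according to the usual retention settings in indexes.conf rather than anything summary-specific; something like this, where the stanza name matches the summary index and the value (one year) is just an example:

# indexes.conf (sketch)
[summary]
frozenTimePeriodInSecs = 31536000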

Why is a Splunk forwarder installed on Windows 2003 not forwarding a log file generated by a .bat procedure?


Hi everyone

I have a problem with the Splunk forwarder installed on Windows 2003.

Every time I run a .bat procedure (it is used to grab some data from SQL Server 2005), it automatically generates a log file. I have configured this log in my inputs.

In theory, the forwarder should forward the data in the log, but it does not. However, if I open the log file generated by the .bat procedure and modify some data inside, such as adding a line or deleting something, the contents of the log are forwarded soon after. Why must the log file be modified before it can be forwarded?
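
My current theory is a CRC issue: the .bat regenerates the file with an identical beginning, so Splunk assumes it has already read it until an edit changes the content. If that is right, would a batch input be the cleaner approach, given that the file is rewritten on every run? A sketch, where the path and sourcetype are invented and move_policy = sinkhole means Splunk deletes the file after indexing it:

# inputs.conf on the forwarder (sketch)
[batch://C:\sqlgrab\output\*.log]
move_policy = sinkhole
sourcetype = sqlgrab_log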

Thanks


Is TableView or the simple XML element table recommended for creating tables in the JavaScript framework?


Which is the recommended way to create tables in the JavaScript framework: TableView or simplexml/element/table? I found that the TableView approach lacks the ability to "open in search", which led me to use the simplexml/element table instead. What are the advantages and disadvantages of each mechanism? "splunkjs/mvc/tableview" vs "splunkjs/mvc/simplexml/element/table"

Thanks, -Bob

ES installation clashing with Splunk on Splunk (SOS) app?


Hi All,

We have encountered an issue with a deployment configuration as follows:

All Windows Server 2008 R2 Enterprise

  • Server01 - Splunk Indexer
  • Server02 - Generic Search head with SOS and custom dashboards (Master Licence Server)
  • Server03 - Search Head with PCI App
  • Server04 - Search Head with ES App

We are seeing an issue only with Server04, where it reports multiple "Script Exited abnormally" errors every hour.

[screenshot]

All instances of Splunk are running as the same domain user, SA_Splunk:

[screenshot]

The security policy of this user on a working instance:

[screenshot]

The security policy of this user on the non-working Server04 instance:

[screenshot]

As you can see from the Splunk on Splunk "Deployment Topology" dashboard, all servers (except Server04) report their basic details:

[screenshot]

But Server04 does not. Strangely, though, it does have data to populate the Physical Memory timechart:

[screenshot]

Server04 is definitely linked to the master licence server:

[screenshot]

I did find this answer, but it is really not ideal, as we do want this feature active: http://answers.splunk.com/answers/123207/enterprise-security-message

Does anyone have any ideas why this is occurring?

Is the ES app clashing with something on Server04 and causing SOS to break?

Only the Overview dashboard has data - PAN app v4.1.1, Splunk v6.1.1


I installed the Splunk for Palo Alto Networks app. I am getting data and my index and source types are correct. When I do searches, all the PA fields are getting extracted.

However, only the Overview dashboard works; it displays real-time information.

The other dashboards and sub-dashboards under Traffic, Threat, Content, and System all say "Search is waiting for input..." and the drop-downs all say "Search produced no results."

We are using a cluster, so the app is installed on the heavy forwarder that receives the logs and on a search head that can search all of our indexers.

EDIT: Just realized that the heavy forwarder is still running v6.0.3. Maybe that's the issue. Upgrading tonight to find out.

Search time field extraction not showing in available fields


I am attempting to perform a search-time field extraction via the rex command. I use the default field _raw and give it a regex with named groups, but none of my named groups show up as available fields to select from.

Essentially, I am parsing a custom Apache access log.

An example of a line of data is:

9.999.999.999 9.999.999.9 xxxxxxxx  [17/Jun/2014:23:11:43 -0400] "GET /someapp/css/windows/default.css HTTP/1.1" 200 767 "protocol://www.ourserver.com/someapp/some.jsp?param=1&param2=a" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; Trident/4.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; .NET4.0C)"

The search I use is:

source=/issue.log| rex "(?:[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+, )?(?<forwardedforip>[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+|\-) (?<remoteip>[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+) (?<userid>\S+|\-)[ ]+\[(?<day>\d+)/(?<month>\w+)/(?<year>\d+):(?<hour>\d+):(?<minute>\d+):(?<second>\d+) (<?timezone>-\d+)] \"(?<action>\w+) (?<url>.*?)(?<parameters>\?.*?)? (?<httpversion>\S+)\" (?<httpstatus>\d+) (?<responsesize>\d+|\-) \"(?<refererurl>.*?)\" \"(?<useragent>.*?)\""

Any ideas why my named groups are not showing up? This regex works, without the named groups, in regex testing apps. I just cannot get it to be recognized by Splunk.
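
One thing I spotted while re-reading the pattern just now, in case it helps: the timezone group is written (<?timezone>-\d+) rather than (?<timezone>-\d+). Written that way, the regex tries to match an optional < followed by the literal text timezone>, which never appears in the log, so the whole pattern fails on every event; that alone would explain why none of the groups show up. The fragment should presumably read:

(?<second>\d+) (?<timezone>-\d+)\]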

thanks!

Fundamental issue with Splunk's architecture allowing one app to overwrite another app's configuration


I don't understand why Splunk implemented a priority architecture which can overwrite another app's property. I wanted to blacklist each app's CSVs, so I used the stanzas below in distsearch.conf. To my surprise, one app's CSVs were not blacklisted.

App1:

[replicationBlacklist]
excludeLookup = apps/app1_kpi/lookups/*.csv

App2:

[replicationBlacklist]
excludeLookup = apps/app2_kpi/lookups/*.csv

Both apps have global sharing. We changed the sharing but got the same result.
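
For what it is worth, the workaround we are testing is based on my reading that the attribute names under [replicationBlacklist] are arbitrary labels that get merged across apps, so two apps using the identical name excludeLookup collide and one wins by precedence. Giving each app a unique attribute name should avoid the collision (sketch):

# app1_kpi's distsearch.conf
[replicationBlacklist]
app1_kpi_lookups = apps/app1_kpi/lookups/*.csv

# app2_kpi's distsearch.conf
[replicationBlacklist]
app2_kpi_lookups = apps/app2_kpi/lookups/*.csv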

Will Splunk change this architecture in the future? This is very dangerous to manage, and the app isolation concept is fundamentally violated.
