Channel: Latest Questions on Splunk Answers

If we have corrupted Windows metadata, can we delete every file without checking them one by one and will Splunk rebuild them?

Hi everyone, after a disk accident our Splunk instance has corrupted/inconsistent metadata. To fix it, we are supposed to run `recover-metadata /pathname/ --validate`, check every single file, delete any that are damaged, and restart Splunk so that it rebuilds them, as shown in http://answers.splunk.com/answers/5374/how-to-quickly-validate-the-metadata-files-of-a-given-index-and-of-all-its-buckets.html My question is: can I delete every metadata file without checking them one by one, and will Splunk rebuild them? Thank you
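For reference, following the command syntax quoted above, a validation pass over one metadata directory would look roughly like this (the path is the placeholder from the question, not a real location):

    $SPLUNK_HOME/bin/splunk cmd recover-metadata /pathname/ --validate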

Is it possible to provide a user the capability to change the colors of a dashboard, but allow another user to have different colors for the same dashboard?

I'd like to give individual users the ability to change the colors of their dashboard. For example, individual A sees a blue dashboard and individual B sees a red dashboard, but it is the same dashboard.

Can we save a search as an alert via email with a sparkline in it?

Is there a way to save a sparkline in an email alert?

How to search the percentage of occurrences of certain values in a field?

Hi, I have a table like this:

userID | is_successful | version
userA | true | 1.1
userA | true | 1.3
userB | true | 1.3
userB | true | 1.1
userC | true | 1.1
userC | false | 1.1

My application sends data to Splunk with userID and whether a particular event was a success or not. I'd like to see the % of distinct users for which that event has failed, for every version of the application. Thanks in advance.
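One possible way to get that percentage, sketched with the field names above and assuming is_successful holds the literal strings true/false and that a user counts as failed once it has at least one failed event for a version:

    your_base_search
    | stats dc(userID) AS total_users, dc(eval(if(is_successful="false", userID, null()))) AS failed_users BY version
    | eval failed_pct=round(failed_users / total_users * 100, 2)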

Are Splunk consultants and Splunk developers the same? If not, what are the roles and responsibilities for both?

Are a Splunk consultant and a Splunk developer the same role? If not, what are the roles and responsibilities of each?

Splunk Add-on for Check Point OPSEC LEA: Why am I unable to set up lea_loggrabber?

I have a problem, and I hope that you can help me, please: I'm installing the Splunk Add-on for Check Point OPSEC LEA, and I can't set up lea_loggrabber: I'm using CentOS 7.1, and I have only one machine with Splunk. I have attached the output file in this message. Any help, I'll be very grateful Regards ./lea-loggrabber-debug.sh Using Splunk instance: /opt/splunk, app name Splunk_TA_opseclea_linux22 Splunk username: admin Password: DEBUG: LOGGRABBER configuration file is: /opt/splunk/etc/apps/Splunk_TA_opseclea_linux22/bin/fw1-loggrabber.conf DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_duplicate DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_duplicate DEBUG: function string_icmp DEBUG: function string_duplicate DEBUG: function string_duplicate DEBUG: function string_icmp DEBUG: function string_duplicate DEBUG: function string_duplicate DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_duplicate DEBUG: function string_icmp DEBUG: function string_duplicate DEBUG: function string_duplicate DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_duplicate DEBUG: function string_icmp DEBUG: function string_duplicate DEBUG: function string_duplicate DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function 
string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_duplicate DEBUG: function string_icmp DEBUG: function string_duplicate DEBUG: function string_duplicate DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_duplicate DEBUG: function string_icmp DEBUG: function string_duplicate DEBUG: function string_duplicate DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_duplicate DEBUG: function string_icmp DEBUG: function string_duplicate DEBUG: function string_duplicate DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_duplicate DEBUG: function string_icmp DEBUG: function string_duplicate DEBUG: function string_duplicate DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function 
string_duplicate DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function string_trim DEBUG: function string_left_trim DEBUG: function string_right_trim DEBUG: function logging_init_env DEBUG: function open_screen DEBUG: Open connection to screen. DEBUG: Logfilename : fw.log DEBUG: Record Separator : | DEBUG: Resolve Addresses: No DEBUG: Show Filenames : No DEBUG: FW1-2000 : No DEBUG: Online-Mode : No DEBUG: Audit-Log : No DEBUG: Show Fieldnames : Yes DEBUG: function get_fw1_logfiles splunk internal call command: $SPLUNK_HOME/bin/splunk _internal call /servicesNS/nobody/Splunk_TA_opseclea_linux22/opsec/opsec_conf/ splunk output: QUERYING: 'https://127.0.0.1:8089/servicesNS/nobody/Splunk_TA_opseclea_linux22/opsec/opsec_conf/' HTTP Status: 200. 
Content: https://127.0.0.1:8089/servicesNS/nobody/Splunk_TA_opseclea_linux22/opsec/opsec_conf2015-08-14T13:31:37-03:00Splunk1300CheckPoint_Internethttps://127.0.0.1:8089/servicesNS/nobody/Splunk_TA_opseclea_linux22/opsec/opsec_conf/CheckPoint_Internet2015-08-14T13:31:37-03:00admin0Splunk_TA_opseclea_linux221111111adminadminadmin1appSplunk_TA_opseclea_linux22nobody77018184sslca10.1.4.41fw11CN=SensorSplunk,0=mngt-blackhole..rq9q26cn=cp_mgmt,o=mngt-blackhole..rq9q26../certs/newFile.p12 mode: fw addFilter: product=VPN-1 & FireWall-1 DEBUG: function string_duplicate -v opsec_sic_name cn=cp_mgmt,o=mngt-xxx26-v opsec_sslca_file ../certs/newFile.p12 -v lea_server ip 10.1.4.41 -v lea_server auth_port 18184 -v lea_server auth_type sslca -v lea_server opsec_entity_sic_name CN=SensorSplunk,0=mngt-xxx26 [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] Env Configuration: ( :type (opsec_info) :lea_server ( :opsec_entity_sic_name ("CN=SensorSplunk,0=mngt-blackhole..rq9q26") :auth_type (sslca) :auth_port (18184) :ip (10.1.4.41) ) :opsec_sslca_file ("../certs/newFile.p12") :opsec_sic_name ("cn=cp_mgmt,o=mngt-blackhole..rq9q26") ) [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] Could not find info for ...opsec_shared_local_path... [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] Could not find info for ...opsec_sic_policy_file... [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] Could not find info for ...opsec_mt... [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] opsec_init: multithread safety is not initialized [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] cpprng_opsec_initialize: path is not initialized - will initialize [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] cpprng_opsec_initialize: full file name is ops_prng [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] cpprng_opsec_initialize: dev_urandom_poll returned 0 [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] opsec_file_is_intialized: seed is initialized [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] cpprng_opsec_initialize: seed init for opsec succeeded [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] PM_policy_create: version 5301. [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] PM_policy_add_name_to_group: finished successfully. [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] PM_policy_set_local_names: () names. finished successfully. [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] PM_policy_create: finished successfully. [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] PM_policy_add_name_to_group: finished successfully. [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] PM_policy_set_local_names: (local_sic_name) names. finished successfully. [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] PM_policy_add_name_to_group: finished successfully. [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] PM_policy_set_local_names: (127.0.0.1) names. finished successfully. [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] PM_policy_add_name_to_group: finished successfully. [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] PM_policy_set_local_names: ("cn=cp_mgmt,o=mngt-blackhole..rq9q26") names. finished successfully. [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] PM_apply_default_dn: ca_dn = [O=mngt-blackhole..rq9q26]. [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] PM_apply_default_dn: calling PM_policy_DN_conversion .. 
[ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] PM_apply_default_dn: finished successfully. [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] sslcaInitCP_Ex: failed to create keyholder [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] opsec_init_sslca: no key holder - symmetric SSLCA not started [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] sslcaInitCP_Ex: using asym client without ca cert [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] ckpSSLctx_New: prefs = 12 [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] CkpRegDir: Environment variable CPDIR is not set. [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] GenerateGlobalEntry: Unable to get registry path [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] sslcaInitCP_Ex: using asym client without ca cert [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] ckpSSLctx_New: prefs = 32 [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] sslcaInitCP_Ex: using asym client without ca cert [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] ckpSSLctx_New: prefs = 11 [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] sslcaInitCP_Ex: using asym client without ca cert [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] ckpSSLctx_New: prefs = 31 [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] opsec_init_sic_id_internal: Added sic id (ctx id = 0) DEBUG: OPSEC LEA conf file is lea.conf DEBUG: Authentication mode has been used. DEBUG: Server-IP : 10.1.4.41 DEBUG: Server-Port : 18184 DEBUG: Authentication type: sslca DEBUG: OPSEC sic certificate file name : ../certs/newFile.p12 DEBUG: Server DN (sic name) : CN=SensorSplunk,0=mngt-blackhole..rq9q26 DEBUG: OPSEC LEA client DN (sic name) : cn=cp_mgmt,o=mngt-blackhole..rq9q26 [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] opsec_init_entity_sic: called for the client side [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] Configuring entity lea_server [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] Could not find info for ...conn_buf_size... [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] Could not find info for ...no_nagle... [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] Could not find info for ...port... [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] opsec_entity_add_sic_rule: adding rules: apply_to: ME, peer: CN=SensorSplunk,0=mngt-blackhole..rq9q26, d_ip: NULL, dport 18184, svc: lea, method: sslca [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] opsec_entity_add_sic_rule: adding INBOUND rule [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] opsec_entity_add_sic_rule: adding OUTBOUND rule [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] fwDN_add_CN: new dn is illegal [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] opsec_get_comm: creating comm for ent=8cf3e68 peer=8ceae48 passive=0 key=2 info=0 [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] c=0x8cf3e68 s=0x8ceae48 comm_type=4 [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] Could not find info for ...opsec_client... 
[ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] opsec_get_comm: Creating session hash (size=256) [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] opsec_get_comm: ADDING comm=0x8cf6968 to ent=0x8cf3e68 with key=2 [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] opsec_env_get_context_id_by_peer_sic_name: illegal DN of sic name: CN=SensorSplunk,0=mngt-blackhole..rq9q26 [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] OPSEC_SET_ERRNO: err = 4 Argument is NULL or lacks some data (pre = 0) [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] opsec_sic_connect: failed to get context id for connection [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] opsec_get_comm: error in opsec_sic_connect [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] destroying comm 0x8cf6968 [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] Destroying comm 0x8cf6968 with 0 active sessions [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] pulling dgtype=ffffffff len=-1 to list=0x8cf6984 [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] REMOVING comm=0x8cf6968 from ent=0x8cf3e68 with key=2 [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] Unable to make session ERROR: failed to create session (Argument is NULL or lacks some data) DEBUG: function cleanup_fw1_environment [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] Destroying entity 1 with 0 active comms [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] opsec_destroy_entity_sic: deleting sic rules for entity 0x8cf3e68 [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] Destroying entity 2 with 0 active comms [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] opsec_destroy_entity_sic: deleting sic rules for entity 0x8ceae48 [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] IpcUnMapFile: unmapping file (handle=0x8cea748) [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] IpcUnMapFile: unmapping file (handle=0x8cea7f8) [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] IpcUnMapFile: unmapping file (handle=0x8cea8a8) [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] IpcUnMapFile: unmapping file (handle=0x8cea948) [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] IpcUnMapFile: unmapping file (handle=0x8cea9c8) [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] PM_policy_destroy: finished successfully. [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] opsec_destroy_sic_id_internal: Destroyed sic id (ctx id=0) [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] opsec_env_destroy_sic_id_hash: Destroyed sic id hash [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] fwd_env_destroy: env 0x8ccdfa0 (alloced = 1) [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] T_env_destroy: env 0x8ccdfa0 [ 627 4149561024]@localhost.localdomain[14 Aug 13:31:37] do_fwd_env_destroy: really destroy 0x8ccdfa0 DEBUG: function exit_loggrabber DEBUG: function free_lfield_arrays DEBUG: function free_afield_arrays DEBUG: function free_lfield_arrays DEBUG: function free_afield_arrays

Why is the Splunk Python SDK export running twice for large searches?

I am using Splunk's Python SDK to try to export a search. I am referencing this code: http://docs.splunk.com/Documentation/Splunk/6.2.3/Search/Exportsearchresults#Python_SDK This is my actual code: rr = splunklib.results.ResultsReader(jobs.export(filetostring('train_command.spl'), **kwargs)) for result in rr: if isinstance(result, splunklib.results.Message): print '%s: %s' % (result.type, result.message) elif isinstance(result, dict): print result['username'] else: print "Unknown type: ", type(result) This is the search that filetostring returns: search source="goodUsers" OR source="badUsers" | iplocation allfields=true ip | eval date_hour = floor(date_hour / 6) | eval user_status = if(source="goodUsers", 0, 1) | eval shortened_ip = if(type="AUTHN_LDAP", shortened_ip, null()) | rename ua_string AS http_user_agent | lookup user_agents http_user_agent | eval browser=ua_family | eval os=ua_os_family."-".ua_os_major | table username, user_status, type, City, Region, Country, Continent, status, shortened_ip, date_hour, date_wday, root_url, os, browser | foreach type, City, Region, Country, Continent, status, shortened_ip, date_hour, date_wday, root_url, os, browser [eval <<FIELD>>-{<<FIELD>>}=1] | table username, user_status, *-* | fillnull | stats avg(*) by username, user_status It is important to note that this search returns somewhere around 5000 columns, and changes based on the data. This is intended. What I am trying to do is simply print out all of the usernames for now. I will need all of the columns later. The problem I have is that the search appears to be run twice. For speed reasons, I obviously only want it run once. One thing I discovered is that it only runs once if I add '| head 2500' or less to the first line, so I'm wondering if it has to do with the size of the results or runtime. Here is my truncated output: user0@university.edu user1@university.edu ... usern-1@calpoly.edu usern-0@calpoly.edu DEBUG: Configuration initialization for /opt/splunk/etc took 9ms when dispatching a search (search ID: 1439569961.1153) DEBUG: base lispy: [ OR source::badusers source::goodusers ] DEBUG: search context: user="myusername", app="search", bs-pathname="/opt/splunk/etc" user0@university.edu user1@university.edu ... usern-1@university.edu usern-0@university.edu Any help would be greatly appreciated.

After mapping groups to roles while configuring Splunk for LDAP authentication, why am I unable to log in with any of those users?

I'm trying to configure Splunk to allow LDAP authentication. I select "Configure Splunk to use LDAP and map groups" and then complete the LDAP strategy. I then select Map groups and map roles to groups. I am currently using one group as a test that has two users in it. I can see all the groups, including my target group. I select my target group and give it a role; for testing purposes, I gave it the power role. I saved, backed out, and checked the user section, but the users were not there. I reloaded the authentication configuration and they were still not there. When I attempt to log in with one of those users, I receive the following errors:

-0400 ERROR UserManagerPro - LDAP Login Failed, could not find a valid user="xxx" on any configured servers
-0400 ERROR AuthenticationManagerLDAP - Could not find user ="xxx" with strategy="LDAP"

Also, watching tcpdump on the server, I can see the traffic going to the LDAP server while attempting to log in. In short, I mapped groups to roles, but I am unable to log in with any of those users.
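For reference, the group-to-role mapping ends up in authentication.conf as a roleMap stanza; a minimal sketch, assuming the LDAP strategy is named LDAP and the group is called splunk_power_users (both names illustrative):

    [roleMap_LDAP]
    power = splunk_power_users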

After renaming a sourcetype, why is it only being applied to new data and not already indexed data?

Hi guys, I have renamed a sourcetype, but after the rename and a restart of the indexers, I only see new data being assigned to the new sourcetype. Historical data still seems to be assigned to the old sourcetype. How do I address this problem? Do I need to clean all the event data from the index on the indexer, stop the Universal Forwarder, delete the fishbucket folder, start the indexer, and restart the forwarder? Please let me know. Thanks
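One option that should also affect already-indexed events is the search-time rename setting in props.conf, rather than cleaning the index; a sketch with placeholder sourcetype names:

    [old_sourcetype]
    rename = new_sourcetype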

How do I change the owner of a saved search or view in a search head cluster environment?

I need to change the owner of a saved search or dashboard view. Using the deployer merges changes from local.meta back into default.meta on the SHC members when the bundle gets distributed, and the original local.meta on the SHC members still overrides the default.meta configuration. I also want the change to be replicated across all search head cluster members.
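For reference, object ownership lives in the app's metadata files; a minimal sketch of a local.meta entry, with the saved search name and user invented for illustration:

    [savedsearches/my_saved_search]
    owner = newowner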

How to create a field based on cidrmatch in Splunk 6.2?

Hello, I am using Splunk 6.2 and I am trying to use `|eval cidrmatch` in a search to identify a series of subnets by a common name. I am using the following: some search highlighting individual IP's by field clientIP | eval voipnet=cidrmatch("111.111.0.0/16",clientIP) | eval tecnet=cidrmatch("222.222.0.0/16",clientIP) | eval secnet=cidrmatch("333.333.0.0/16",clientIP) | table clientIP,clientSplunkName,clientNetworkName,voipnet,tecnet,secnet | dedup clientIP But I keep getting the error: Error in 'eval' command: Fields cannot be assigned a boolean result. Instead, try if([bool expr], [expr], [expr]) Based on the reference documentation, it looks like my search *may have worked in v.5. Any recommendations on how to do this in version 6.2?
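The error message itself points at the workaround: in 6.2 the boolean has to be wrapped in if() so the field gets a concrete value. A sketch using the same field and subnets as above:

    some search highlighting individual IPs by field clientIP
    | eval voipnet=if(cidrmatch("111.111.0.0/16", clientIP), "true", "false")
    | eval tecnet=if(cidrmatch("222.222.0.0/16", clientIP), "true", "false")
    | eval secnet=if(cidrmatch("333.333.0.0/16", clientIP), "true", "false")
    | table clientIP, clientSplunkName, clientNetworkName, voipnet, tecnet, secnet
    | dedup clientIP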

How to view records/data horizontally by host?

Hey, is it possible to view data/records from a file horizontally, by host? For example, I have a search string like this: search "event1234" | table host, value1, value2, value3 The result looks like this:

hostname1, value1
hostname1, value2
hostname1, value3

What I want to see is:

hostname1, value1, value2, value3

I want the records to be grouped by hostname and the data to grow horizontally for each hostname. Is this possible? C
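One way to do this, assuming value1, value2, and value3 are separate fields on each event (a sketch, not the only approach):

    search "event1234"
    | stats values(value1) AS value1, values(value2) AS value2, values(value3) AS value3 BY host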

How do I search for IPv6 addresses in my src_ip field?

I'm trying to write a search that finds IPv6 addresses. Currently our src_ip field has both IPv4 and IPv6 addresses in it. How can I search so that only events with IPv6 addresses are returned?
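Since IPv6 addresses contain colons and IPv4 addresses do not, a simple filter on the extracted field is usually enough; a sketch, with the base search assumed:

    your_base_search
    | where match(src_ip, ":")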

How to get Postal Code (and other new fields) from paid version of GeoIP2-City.mmdb using iplocation or geoip?

I purchased the paid version of the Maxmind GeoIP2-City database because I want to map zip code information to IP addresses. I changed db_path in limits.conf to point to the new version:

[iplocation]
# Location of GeoIP database in MMDB format
db_path = /opt/splunk/share/GeoIP2-City_20150811.mmdb

I restarted Splunk, but when I search, I just get the standard fields from the free version:

index=badge_index | iplocation prefix=mmdb_ allfields=true clientip | fields mmdb_*

How do I get iplocation to return postal codes? Some of the documentation advises modifying iplocation.py, but the version I find at `/etc/apps/search/bin/iplocation.py` is clearly not the version being used by search; it hits an API for results, not a database. Does anyone have a version of iplocation or geoip that returns the zip code and other fields contained in the paid version of the Maxmind db?

Time window -24 hours.

I'm trying to write a query that displays a time window offset by a specified amount of time. For example, if I use the time picker to get results for the last 15 minutes, I would also like to see results for the same 15-minute window yesterday. I'm trying to build a dashboard to compare results from multiple days. Thank you!
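One pattern for this is to run the same search twice with shifted time windows and append the results; a rough sketch, with the index name assumed and the offsets expressed in minutes (1440 minutes = 24 hours):

    index=your_index earliest=-15m latest=now
    | eval period="last 15 minutes"
    | append [ search index=your_index earliest=-1455m latest=-1440m | eval period="same window yesterday" ]
    | stats count BY period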

How can I split an event into two or more events according to two multi-value fields?

The raw data looks like this:

FieldA | FieldB | FieldC | FieldD
14-51-P-1216;14-52-P-0258;14-52-P-0053;14-52-P-0054 | 99DF-E8FF-DA0F-5F6D;1B33-9DAE-7B47-A7B4;FCFF-8F4A-106F-5894;5864-CDA1-7400-AD33 | 2015-07-14 | 2015-11-13
14-50-L-0892;14-50-L-0891 | E934-DD3D-86C9-1D5B;F64B-3125-1D75-1D53 | 2015-08-14 | 2015-09-01

FieldA and FieldB are both multi-value fields, and the number of values in each is indefinite, but there is a one-to-one relationship between the two fields. I want to split the two events into the 6 events listed below:

FieldA | FieldB | FieldC | FieldD
14-51-P-1216 | 99DF-E8FF-DA0F-5F6D | 2015-07-14 | 2015-11-13
14-52-P-0258 | 1B33-9DAE-7B47-A7B4 | 2015-07-14 | 2015-11-13
14-52-P-0053 | FCFF-8F4A-106F-5894 | 2015-07-14 | 2015-11-13
14-52-P-0054 | 5864-CDA1-7400-AD33 | 2015-07-14 | 2015-11-13
14-50-L-0892 | E934-DD3D-86C9-1D5B | 2015-08-14 | 2015-09-01
14-50-L-0891 | F64B-3125-1D75-1D53 | 2015-08-14 | 2015-09-01
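One way to do this, assuming FieldA and FieldB arrive as single semicolon-delimited strings (a sketch):

    your_base_search
    | makemv delim=";" FieldA
    | makemv delim=";" FieldB
    | eval pair=mvzip(FieldA, FieldB, "##")
    | mvexpand pair
    | eval FieldA=mvindex(split(pair, "##"), 0), FieldB=mvindex(split(pair, "##"), 1)
    | table FieldA, FieldB, FieldC, FieldD

mvzip pairs the values positionally, mvexpand creates one event per pair, and the final eval splits each pair back into the two original fields.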

How to compare multiple fields in 2 indexes and return the differences

I'm currently trying to compare 3 fields (ID, Start_time, Log_time) from 2 different indexes, and to get the differences when any of the 3 attributes are unmatched. How can I go about doing this? Thank you.
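A sketch of one way to do this, with the index names invented; it keeps only the ID/Start_time/Log_time combinations that appear in just one of the two indexes:

    (index=index_a OR index=index_b)
    | stats dc(index) AS index_count, values(index) AS found_in BY ID, Start_time, Log_time
    | where index_count < 2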

How to integrate VMware MIBs in SNMP Modular Input?

Hello everyone, I'm using the SNMP Modular Input and have my IP cameras, my ESXi servers, and Unitrends Enterprise Backup connected to it, but I have some questions regarding the MIBs: How can I see which MIBs are included? SPLUNK_HOME/etc/apps/snmp_ta/bin/mibs/pysnmp_mibs-0.1.4-py2.7.egg is a binary file, so opening it with nano doesn't help, and fgrep only reports that there are matches without printing them. I would also like to import the MIBs for VMware vSphere, but there are about 50 MIB files per ESXi version, so the manual import as described above isn't a good solution because I've got two different versions of ESXi; that would mean importing 100 files, right? Or which ones do I need to import when using the free hypervisor? Thank you

pfSense 2.2 to Splunk & Home Monitor

Good day. I can't find any info about how to do this. When I add a UDP input, I can't see a pfSense sourcetype, only syslog, so I added syslog, but Home Monitor won't recognize it. I also can't find any explanation of what the fields in the pfSense firewall log mean (no explanation of the numbers); I found Source, Dest, ports, and Action, but that's all, which is not enough. Is there any way to automate this or provide the right sourcetype to Splunk? P.S.: I've searched for 2 days and can't find anything; all the info is old...

Error configuring Stream 6.3.2 on Splunk 6.2.5, OS X 10.x

I'm seeing this error in splunkd.log when I try to configure data inputs for Wire Data:

08-15-2015 18:22:24.921 +0100 WARN ModularInputs - Validation for scheme=streamfwd failed: The script returned with exit status 2.
08-15-2015 18:22:24.921 +0100 INFO ModularInputs - The script returned with exit status 2.

It is all running as root (I started Splunk manually using sudo). Any suggestions?

