Channel: Latest Questions on Splunk Answers
Viewing all 13053 articles

Where is the installation documentation for Deployment Monitor?


I'm trying to figure out if I need to install the Deployment Monitor on my indexers. There don't seem to be any instructions on how to install this app.


How to move old and new indexes to a new location?


I moved my index location from the general settings in Splunk Web and restarted Splunk. I hadn't yet realized this would not move the old data.

I've stopped Splunk and would like both the old and new indexes in the new location, but am unsure at this point how best to do that.

Making a symbolic link seems like the easiest choice but maybe there's a cleaner way.

There are some conflicts that make me think merging the two directories would be a bad idea. No bucket IDs conflict, but the .bucketManifest files differ in each of my indexes; those I could easily merge. There are also some binary files that differ, which worry me:

  • fishbucket/splunk_private_db/snapshot/btree_index.dat
  • fishbucket/splunk_private_db/snapshot/btree_records.dat
  • fishbucket/splunk_private_db/btree_index.dat
  • fishbucket/splunk_private_db/btree_records.dat
  • fishbucket/rawdata/*
  • fishbucket/*
  • persistentstorage/fschangemanager_state

What do you suggest to do from here?
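For reference, one alternative to a symlink is to copy the bucket directories while Splunk is stopped and then repoint the index paths in indexes.conf. This is only a sketch; the index name and destination paths below are hypothetical:

```
# indexes.conf (hypothetical paths) -- after copying this index's
# db/colddb/thaweddb directories to the new volume, point the
# index at the new location instead of symlinking the old one
[myindex]
homePath   = /new/location/myindex/db
coldPath   = /new/location/myindex/colddb
thawedPath = /new/location/myindex/thaweddb
```

The fishbucket and other files under var/lib/splunk sit outside per-index paths, so this approach sidesteps merging them at all.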

Can I rename the Week value from strftime(_time," %W")?


Hi,

I am charting counts by week. I would like to display something like Wk-1 instead of a bare number like 34 (the 34th week from the beginning of this year). I have tried the rename option, but that did not work. Any suggestions?

eval Date=strftime(_time," %W")
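One possible approach (an untested sketch) is to build the label inside the eval itself, concatenating a prefix onto the week number rather than renaming afterward:

```
... | eval Date="Wk-".strftime(_time,"%W") | chart count by Date
```

Since rename only changes column names, not field values, constructing the value directly in eval is usually the simpler route.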

Does Splunk Enterprise rely on OS for proxy configuration or does it have its own settings for proxy configuration?


My customer uses a proxy with authentication to provide internet access for users and applications. Does Splunk Enterprise rely on the OS for proxy configuration, or does it have its own proxy settings?

How to fix my universal forwarders' configuration to monitor and forward syslog data?


Hello

I have this schema:

[syslog:received_514;forward_1514]
[SplunkUF:received_1514;forward_2000]
[SplunkUF2:received_2000;forward_3000]
[SplunkUF3:received_3000;forward_4000]
[Syslog:received_4000;forward_to_file]

With tcpdump on SplunkUF, I can see the data arriving via syslog, but the Splunk forwarding fails.

The configuration files are:

SplunkUF - inputs.conf:

# Default
[default]
    index= default
    _rcvbuf = 1572864
    host = $decideOnStartup

[tcp://1514]         
    sourcetype = syslog
    queueSize=1MB
    persistentQueueSize=4GB
    _TCP_ROUTING = syslog-src

[monitor://$SPLUNK_HOME/var/log/splunk]
    index = _internal
    disabled = true

SplunkUF - outputs.conf:

[tcpout]
    backoffOnFailure = 5
    channelReapInterval = 60000
    channelReapLowater = 10
    channelTTL = 60
    compressed = true
    defaultGroup = syslog-src
    dnsResolutionInterval = 300
    negotiateNewProtocol = true
    readTimeout = 900
    useACK = true
    writeTimeout = 5
    indexAndForward = 0

[tcpout:syslog-src]
    server = SplunkUF2:2000
    maxQueueSize = 10MB
    dropEventsOnQueueFull = -1

SplunkUF2 - inputs.conf:

[default]
    index= default
    _rcvbuf = 1572864
    host = $decideOnStartup

[splunktcp://2000]
    compressed = true
    connection_host = IP_SplunkUF
    queueSize=1MB
    persistentQueueSize=4GB
    _TCP_ROUTING = syslog-src

[monitor://$SPLUNK_HOME/var/log/splunk]
    index = _internal
    disabled = true

SplunkUF2 - outputs.conf:

[tcpout]
    backoffOnFailure = 5
    channelReapInterval = 60000
    channelReapLowater = 10
    channelTTL = 60
    compressed = true
    defaultGroup = syslog-src
    dnsResolutionInterval = 300
    negotiateNewProtocol = true
    readTimeout = 900
    useACK = true
    writeTimeout = 5
    indexAndForward = 0

[tcpout:syslog-src]
    server = SplunkUF3:3000
    maxQueueSize = 10MB
    dropEventsOnQueueFull = -1

SplunkUF3 - inputs.conf:

[default]
    index= default
    _rcvbuf = 1572864
    host = $decideOnStartup

[splunktcp://3000]
    compressed = true
    connection_host = IP_SplunkUF2
    queueSize=1MB
    persistentQueueSize=4GB
    _TCP_ROUTING = syslog-src

[monitor://$SPLUNK_HOME/var/log/splunk]
    index = _internal
    disabled = true

SplunkUF3 - outputs.conf:

[tcpout]
    defaultGroup = syslog-src
    indexAndForward = 0

[tcpout:syslog-src]
    server = IP_Syslog:4000
    sendCookedData = False

Does anyone have an idea?

Thanks

Specifying class while reloading deploy-server not working in Splunk 6


/opt/splunk/bin $ /opt/splunk/bin/splunk reload deploy-server -class MyClass
An error occurred: Argument "class" is not supported by this handler.

Is this still supported in Splunk 6?

How to search users who are logged in from 2 or more IP addresses within a span of 10 minutes?


I am looking to parse apache logs to locate all users who are logged in from two or more IP addresses within a 10 minute time span.

The search I am performing either appears not to take the timeframe into consideration, or includes records with the same user and same IP within a 10 minute window.

user=* clientip=* | iplocation clientip | bucket _time span=10m | stats dc(clientip) as dc_clientip values(clientip) as clientip values(City) as City values(Region) as Region values(Country) as Country by user | where dc_clientip > 1
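For comparison, keeping _time in the by clause would make each 10-minute bucket count distinct IPs separately instead of over the whole search range; a sketch of that variant:

```
user=* clientip=*
| iplocation clientip
| bucket _time span=10m
| stats dc(clientip) as dc_clientip values(clientip) as clientip
        values(City) as City values(Region) as Region values(Country) as Country
        by _time user
| where dc_clientip > 1
```

Note this only catches multiple IPs falling inside the same aligned 10-minute bucket, not a window straddling a bucket boundary.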

Any assistance would be greatly appreciated.

Thanks.

How would a non-Admin user be able to view private views/saved searches in order to clone them?


We'd like to give a role access to be able to view private views and searches without granting admin access. Is this possible?

Currently it seems that the user assigned to this role can't even see any private content.


Anybody get the Add-on for jBoss to work on AIX?


I can connect to our JBoss servers running on AIX using jconsole, but I am unable to get this add-on to work. It is installed in the splunkforwarder path on the AIX server where JBoss runs, but we consistently get the following error:

[/opt/splunkforwarder/etc/apps/ta-jboss/bin] ./jmx-config.sh -u [user] -p [password] -i [hostname] -w

Trying service:jmx:rmi://[hostname]/jndi/rmi://localhost:9999/jmxrmi
Failed to find profile file..
Trying service:jmx:remoting-jmx://[hostname]:9999
Failed to find profile file..
Failed to find jmx uri...

I removed the parameters for security reasons.

I also tried manually creating the inputs.conf and that didn't work either.

We'd really like to get this up and running.

Splunk ES - Merging identities not happening


Hello,

I have created a new identity list in Splunk ES following the documentation, but the new identities don't show up in the Identity Center.

I have checked that the new lookup works ("| inputlookup new_ident_lookup" returns the list) and that it is picked up by the identity_manager.py script (I can see in the logs that it has found the table file). However, no merge happens and identities_expanded.csv remains the same (without my new list).

Any idea on how to debug this?

Regards, Olivier

Time Range Picker


Hi - I'm using the Time Range Picker panel on one of my dashboards, and all the panel searches are set to the [All Time] range. When I select a new value from the Time Range Picker, the search times remain the same even if I refresh the browser. I need some help on how to pass the new time setting into my searches.

Cheers, Gillz
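For reference, if the dashboard is SimpleXML, a panel search normally picks up the picker via tokens rather than a fixed saved range. Assuming the time input is given token="time" (the index and query below are hypothetical), a sketch looks like:

```
<input type="time" token="time">
  <default>
    <earliest>-24h@h</earliest>
    <latest>now</latest>
  </default>
</input>
...
<search>
  <query>index=main | stats count</query>
  <earliest>$time.earliest$</earliest>
  <latest>$time.latest$</latest>
</search>
```

With the $time.earliest$/$time.latest$ tokens in place, each selection in the picker re-drives the panel searches.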

How to deploy an app to (and from) a multi-tiered deployment server

$
0
0

I'm setting up a multi-tiered Splunk deployment, with a primary and a secondary deployment server (where the secondary is a client of the primary). I want to:

  1. Deploy the Unix TA from the primary to the secondary, so that the add-on will run on the secondary.
  2. Deploy the Unix TA from the secondary to all of its clients, so that the add-on will run on them.

On my primary deployment server, I have the following in serverclass.conf:

# Drop the Unix TA into the secondary's deployment-apps folder,
# so it can deploy the add-on to its clients.
[serverClass:secondaryDeploymentServersDeploymentApps]
targetRepositoryLocation = $SPLUNK_HOME/etc/deployment-apps
whitelist.0 = SecondaryDeploymentServer
stateOnClient = noop

[serverClass:secondaryDeploymentServersDeploymentApps:app:Splunk_TA_nix]

# Install the Unix TA onto the secondary, so that we can collect
# its host metrics.
[serverClass:secondaryDeploymentServers]
whitelist.0 = SecondaryDeploymentServer
stateOnClient = enabled

[serverClass:secondaryDeploymentServers:app:Splunk_TA_nix]

When I start up the primary deployment server and view the forwarder management page, I see this message:

The forwarder management interface does not support some settings in your serverclass.conf file. The interface is now read-only.

When I click the link on the word settings, I get taken to a search with a single event, whose message reads:

Attribute unsupported by UI: stanza=serverClass:secondaryDeploymentServersDeploymentApps property=stateOnClient reason='2+ distinct values at this level'

I figured this was allowable, per the Splunk documentation, but I guess not. So, what is the correct way of setting this up?

Thawed buckets: ERROR ClusterSlaveBucketHandler - Failed to trigger replication


I updated a Splunk cluster to version 6.3 and have a problem with cluster replication and thawed buckets. After the upgrade, I restored archived buckets from S3 storage with shuttl. A few days later I needed to restart the cluster peers, so I used "apply rolling restart cluster peers" from the master node. After this operation, the cluster never returned to a complete state because of the index with thawed buckets. The logs of all the cluster peers contain these errors:

08-29-2014 11:57:18.861 +0200 INFO  CMReplicationRegistry - Starting replication: bid=kannel~122~F591FF7C-4140-4AC5-BB8F-29122221A60E src=F591FF7C-4140-4AC5-BB8F-29122221A60E target=6FEA8B9A-C28E-44E7-988F-82439A068E84
08-29-2014 11:57:18.861 +0200 WARN  DatabaseDirectoryManager - unable to parse bucket type from the pathname='/opt/splunk/var/lib/splunk/kannel/thaweddb/db_1392106608_1391194240_122_F591FF7C-4140-4AC5-BB8F-29122221A60E'
08-29-2014 11:57:18.861 +0200 ERROR BucketReplicator - Unable to parse bucket name for bucketType=/opt/splunk/var/lib/splunk/kannel/thaweddb/db_1392106608_1391194240_122_F591FF7C-4140-4AC5-BB8F-29122221A60E
08-29-2014 11:57:18.862 +0200 INFO  CMReplicationRegistry - Finished replication: bid=kannel~122~F591FF7C-4140-4AC5-BB8F-29122221A60E src=F591FF7C-4140-4AC5-BB8F-29122221A60E target=6FEA8B9A-C28E-44E7-988F-82439A068E84
08-29-2014 11:57:18.862 +0200 ERROR ClusterSlaveBucketHandler - Failed to trigger replication (err='Unable to parse bucket name for bucketType=/opt/splunk/var/lib/splunk/kannel/thaweddb/db_1392106608_1391194240_122_F591FF7C-4140-4AC5-BB8F-29122221A60E')

These errors are repeated for each thawed bucket. It seems the cluster tries to replicate the thawed buckets without success. But thawed buckets should not be affected by replication, or am I wrong?

This problem did not occur in Splunk 5.x.

How to add an export button to a TableView?


Hi Everyone,

I created my dashboard using the framework, and all my tables use TableView. I've noticed that there is no export button in the table, unlike with TableElement. How can I add an export button without recoding my dashboard?

Please help.

Thanks in Advance!

using earliest twice in one search


Will this work: (earliest=-1d@d latest=@d sourcetype=a) OR (earliest=-1d@d sourcetype=b)?


How to join large tables with more than 50,000 rows in Splunk?


How do you join large tables?

It is impossible to join tables with more than 50,000 rows in Splunk, so I'm using some tricks, and these tricks are extremely annoying.

Is there any "normal way"?
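As a point of comparison, a common workaround is to replace join with a stats-based merge over the shared key, which avoids the subsearch row limit entirely because no subsearch is involved. A sketch with hypothetical index and field names:

```
(index=orders) OR (index=customers)
| eval key=coalesce(order_customer_id, customer_id)
| stats values(order_total) as order_total
        values(customer_name) as customer_name
        by key
```

Both datasets are searched in one pass, and stats collapses the rows sharing a key into the joined result.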

How to set up an alert if a user makes a change to a group?


I am fairly new to Splunk, but I am trying to create a search that would send out an alert whenever a member of a certain group makes a change to any data in the group they are in.

For example, if group x has 10 members, I would want to catch any member of that group adding a user to, or deleting a user from, that group.

Thanks for the help.

Can field values be used as a macro name?


Can a field value be used as a macro name?

EX)

index=_internal | table sourcetype | `sourcetype`

I have about 500 sourcetypes, and I want each sourcetype to invoke its own macro.

What should I do?
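One pattern that can get close to this is the map command, which substitutes each result's field values into a templated search string before that search is parsed, so the backticked token can expand into a macro name. A sketch (untested; whether token substitution inside backticks behaves this way may depend on your Splunk version, and map launches one search per row, so 500 sourcetypes means 500 searches):

```
index=_internal
| stats count by sourcetype
| map maxsearches=500 search="search index=_internal sourcetype=$sourcetype$ | `$sourcetype$`"
```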

How to write a search that merges logs with transaction while excluding certain events?


Hi there. Can a query do something like a "transaction where"? For example, I'd like to merge all of the following logs, except those containing the "dst" field:

Aug 27 17:42:40 172.24.20.35 sessionid=53f2b45b0526 sender=jorge@domain.com
Aug 27 17:42:40 172.24.20.35 sessionid=53f2b45b0526 subject="regards"
Aug 27 17:42:40 172.24.20.35 sessionid=53f2b45b0526 size=452132
Aug 27 17:42:40 172.24.20.35 sessionid=53f2b45b0526 dst=luis@example.com
Aug 27 17:42:40 172.24.20.35 sessionid=53f2b45b0526 dst=jhon@example.com
Aug 27 17:42:40 172.24.20.35 sessionid=53f2b45b0526 dst=alex@example.com

The logs that have "dst" should still be shown, just not merged into the transaction.

PS: I'd like to avoid using append.
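One sketch of that idea (the sourcetype name is hypothetical): give each dst event a unique grouping key, so transaction merges everything else per session but leaves the dst events on their own:

```
sourcetype=maillog
| eval grp=if(isnull(dst), sessionid, sessionid."-".dst)
| transaction grp
```

Non-dst events for a session share one grp value and collapse together, while every dst event gets a distinct grp and stays a single-event transaction.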

How to write regex for path in inputs.conf?


I need to configure inputs.conf to forward a file like the one below:

G:\BlackBerry Enterprise Server\Logs\20140827\MCLCOVBB61VWIN_MAGT_01_20140827_0001.txt

My inputs.conf looks like this:

[monitor://G:\BlackBerry Enterprise Server\Logs\%Y%m%d\*_MAGT_*_%Y%m%d_*.txt]
disabled = false
followTail = 0
index = coreops
sourcetype = bes_magt

Am I doing anything wrong here? I don't see data coming into Splunk. Also, how do I check whether the given pattern matches the right log file?
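For reference, monitor stanzas match paths with * and ... wildcards rather than strftime tokens such as %Y%m%d, so a sketch of the stanza might be:

```
# monitor stanza sketch -- * matches one path segment
# (here the YYYYMMDD directory and the variable parts of the filename)
[monitor://G:\BlackBerry Enterprise Server\Logs\*\*_MAGT_*.txt]
disabled = false
followTail = 0
index = coreops
sourcetype = bes_magt
```

To check what was picked up, look for TailingProcessor entries in $SPLUNK_HOME/var/log/splunk/splunkd.log or run `splunk list monitor` on the forwarder.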


