Latest Questions on Splunk Answers

Hunk sizing

Hi

I am building an application in Splunk that processes 200K records per second fetched from Hadoop. What sizing do I need to look at for licensing? I can see that Hunk creates virtual indexes over the Hadoop data to be processed. Does indexing data through these virtual indexes count against the per-day data limit of the Hunk license? Can someone help me with this?

Thanks in Advance

Regards Subbu


How would I display the number of events on a pie chart?

I have a dashboard that displays a weekly summary of detected signatures, but I would like to be able to show the number of events per signature on the chart. Is this possible?

Current simple XML:

<?xml version='1.0' encoding='utf-8'?>
<dashboard>
  <row>
    <chart>
      <title>AV Detect Report (7 day)</title>
      <searchString>MY_SEARCH</searchString>
      <earliestTime>-7d</earliestTime>
      <latestTime>now</latestTime>
      <option name="charting.chart">pie</option>
    </chart>
  </row>
</dashboard>
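
(For reference, one way to get per-signature event counts onto the pie might be to have the search emit one row per signature — a sketch, assuming the events carry a signature field and MY_SEARCH stays a placeholder:)

MY_SEARCH | stats count by signature

Each slice then corresponds to a signature, sized by its event count.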

Thanks.

Java Bridge - Active

I don't know how to activate the Java Bridge. How do I do this?

UA strings not captured in lookup

I have this running but it is returning "Unknown" for these http_user_agent values:

1 "Mozilla/5.0+(compatible;+MSIE+9.0;+Windows+NT+6.1;+Trident/5.0)" 2 "Mozilla/5.0+(Windows+NT+6.1;+WOW64)+AppleWebKit/537.36+(KHTML,+like+Gecko)+Chrome/31.0.1650.63+Safari/537.36" 3 "Mozilla/5.0+(X11;+U;+Linux+i686)+Web-Security/1.0(it's+for+a+research+study,if+you+have+questions,plz+contact+me+liangw@cs.wisc.edu)" 4 "Mozilla/5.0+(Windows;+U;+Windows+NT+5.1;+de;+rv:1.9)+Gecko/2008052906+Firefox/3.0"

Do you know why?
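
(One thing that stands out: these values use '+' in place of spaces, as IIS/W3C logs often do, so an exact match against a lookup of normal user-agent strings would fail. A sketch of normalizing before the lookup — the lookup name ua_lookup is hypothetical:)

... | eval http_user_agent=replace(http_user_agent, "\+", " ") | lookup ua_lookup http_user_agent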

FIELD_NAMES for Missing Headers of CSV

I have a comma-separated CSV file with missing headers. According to props.conf.spec, there is a configuration setting for this in props.conf:

FIELD_NAMES = [ <string>,..., <string>] * Some CSV and structured files might have missing headers. This attribute tells Splunk to specify the header field names directly.

My problem is I have been unable to get this to work. I push this into the props.conf file and when the logs are indexed I cannot find the field names.

Example csv file looks like this:

1,2,3,4,5
6,7,8,9,10

The headers should be a,b,c,d,e, so what should I set FIELD_NAMES equal to? FIELD_NAMES = [a,b,c,d,e] or FIELD_NAMES = ["a","b","c","d","e"] or FIELD_NAMES = [<a>,<b>,<c>,<d>,<e>] or FIELD_NAMES = [<"a">,<"b">,<"c">,<"d">,<"e">] or some other variation? I tried running btool check on my configurations but it doesn't reject what I have tried.
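
(For what it's worth, the brackets in the spec usually just denote a list of values rather than literal syntax. A sketch of what I believe the stanza should look like, with a hypothetical sourcetype name — FIELD_NAMES applies to structured parsing, so INDEXED_EXTRACTIONS is set too:)

[my_csv]
INDEXED_EXTRACTIONS = csv
FIELD_NAMES = a,b,c,d,e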

Plotting points on a Splunk 6 map

My data is already coming into Splunk lat/lon encoded. I don't need to do any IP geo lookup or anything like that. Each event has a latitude and longitude field. I want to plot each event onto a map. I don't want to group them or do any fancy aggregation. I just want points plotted on a map, or possibly a heat map. Is this possible?
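
(A sketch using the built-in geostats command, assuming fields named latitude and longitude — note that geostats bins points into a grid rather than plotting every raw event, so it may only approximate what is asked for here:)

... | geostats latfield=latitude longfield=longitude count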

How to plot the number of scheduled jobs on an hourly time scale by user

Hi,

How do we list all of the saved scheduled jobs on a Splunk setup by user, by day, by search, and by title of the saved search?

Also, I want to plot a day's view of scheduled jobs -- i.e., a 0-23 hour scale in 1-hour steps, with all the jobs for a given user. Basically, I want to know how many jobs are scheduled at the same time on a given day, and by which users.

I have the following REST call:

| rest /services/saved/searches | stats values(search) as QueryName, values(title) as JobName, values(cron_schedule) as ScheduledAt by eai:acl.owner, title, cron_schedule

But I am not sure how to transform the cron expression into a time I can plot.
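
(A rough sketch that only handles plain numeric hour fields in cron_schedule — wildcards and step values like */2 would need extra handling:)

| rest /services/saved/searches
| search is_scheduled=1
| rex field=cron_schedule "^\S+\s+(?<cron_hour>\d+)\s"
| stats count by eai:acl.owner, cron_hour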

thanks..

Intermediate forwarder not forwarding _internal data

I am using a Universal Forwarder as an intermediate forwarder. It forwards the monitored data without any issues, but it is not forwarding any data from the _internal index, i.e., its own Splunk logs.

Intermediate forwarder configuration:

outputs.conf

[tcpout:index]
server = sra-index-01:9997,sra-index-02:9997,sra-index-03:9997,sra-index-04:9997,sra-index-05:9997

inputs.conf

[splunktcp://9997]
disabled = 0
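
(One thing worth checking: the forwardedindex settings in outputs.conf control whether a forwarder ships its own internal indexes. The shipped defaults look roughly like the sketch below — check etc/system/default/outputs.conf on your version — and if a custom outputs.conf overrides them so that _internal stays filtered out, the intermediate's own logs won't be sent:)

[tcpout]
forwardedindex.0.whitelist = .*
forwardedindex.1.blacklist = _.*
forwardedindex.2.whitelist = _audit|_internal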


Quoted escape characters when searching a field

"2013-12-19 11:13:23", "[INFO]", "30927", "MainProcess", "SSMITH"

My data is coming into Splunk in this format; when I view it in raw form, this is an example of one of my logs. The issue I am having is that when I want to search on a field, I have to search for it in the following way or it won't show up:

levelname=""[INFO]""

I need the initial quotes around each field because some of the fields may have commas in them and the delimiter is also a comma. Is there a config I can use so I don't have to escape the quotes when searching for a field value? Or any advice besides changing the delimiter to fix the issue?
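
(If reindexing is an option, one alternative might be to let Splunk parse the file as structured CSV, which handles quoted values with embedded commas and stores the field values without the surrounding quotes. A sketch in props.conf — the sourcetype and field names are hypothetical:)

[my_quoted_logs]
INDEXED_EXTRACTIONS = csv
FIELD_NAMES = time,levelname,pid,process,user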

Parsing a multivalued field

I have two fields, say foo and bar. They both have the same format. An example of the fields could be

foo="{a=3, b=4, c=11}"
bar="{x=1, y=5, z=3}"

I want to parse and use these multivalued fields. That is, I want to be able to extract a, b, ... and use them in calculations (using eval). Can anyone tell me whether this is even possible, and if it is, how I do it?

If you want to know all of it, what I wish to calculate is this: (a*x + b*y + c*z)/(x + y + z). In the above example, the result of this calculation would be 56/9 ≈ 6.22.

Oh, and what makes this even more difficult (I think), is that I might actually have more fields, containing say a, b and c. So, there might as well be the field

baz="{a=23, b=1, c=6}"

I'll have to be sure I don't wind up using these values of a, b and c in the calculation.
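
(A sketch of one way this might work — running rex against the named fields, so values from a stray baz field can't leak into the calculation; it assumes the keys always appear in this order and hold integers:)

... | rex field=foo "a=(?<a>\d+), b=(?<b>\d+), c=(?<c>\d+)"
    | rex field=bar "x=(?<x>\d+), y=(?<y>\d+), z=(?<z>\d+)"
    | eval result=(a*x + b*y + c*z)/(x + y + z)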

BundleArchiver - Filtered nothing out of local.meta, but size still changed

I keep getting this message every few minutes for a specific app that I haven't changed in months.

"WARN BundleArchiver - Filtered nothing out of Splunketcappsmyappmetadatalocal.meta, but size still changed: original_size=122, filtered_size=117, cosmetic_bytes=0"

How can I get rid of this error?

What is the default port on the Splunk Universal Forwarder for the Deployment Server to send data?

All configurations will be pushed by the Deployment Server to a forwarder running on a Linux box.

What is the default port opened on the forwarder that the server uses to push data to it?

Are there any other ports that need to be opened on the forwarder side, or is one port sufficient?

My forwarder is inside a hardware appliance. I need to open ports for server to talk to agent.
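
(As far as I know, the connection actually goes the other way: the deployment client on the forwarder polls the deployment server's management port, 8089 by default, so the appliance needs an outbound rule rather than an inbound one. The client side is configured in deploymentclient.conf — the hostname below is hypothetical:)

[target-broker:deploymentServer]
targetUri = deployserver.example.com:8089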

IIS log user count

My purpose is to count the number of currently logged-in users for a web site.

The easiest way to get this is something like | stats dc(cs_username)

However, that does not really reflect the true numbers I am after, since one username could be logged in from different client machines simultaneously. Also, users from an outside agency can log on to the web site (through our load balancer, which rewrites the client IP as its own), and if the external agency uses a proxy it will report only one client IP anyway.

Somehow, I don't see cs_cookie among the extracted fields, which could have been helpful.

Any idea what is the best way to approach this?
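
(One rough approximation might be to count distinct username/client-IP pairs, with the caveats above about the load balancer and proxies — this assumes a c_ip field is extracted:)

... | eval session=cs_username.":".c_ip | stats dc(session) AS concurrent_users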

[indexer] Streamed search execute failed because: User 'nobody' could not act as:

Can someone please tell me what this means, and where I can look to fix this? Thanks!

Splunk is adding weird strings like "_linebreaker\x00\x00" to my events, what is going on?

Before forwarding data I checked to see if it was indexing properly and it seemed to have no problems. However, once I turned on forwarding, the data shows up like so in the primary instance of Splunk:

_linebreaker\x00\x00\x00\x00\x6_time\x00\x00\x00\x00\xB1294233707\x00\x00\x00\x00\x6_conf\x00\x00\x00\x00gsource::/var/log/folder/SG1_main__10105132644.log|host::a-a.host.domain.com|bcoat_proxysg|\x00\x00\x00\x00\x10MetaData:Source\x00\x00\x00\x007source::/var/log/bcftpupload/SG1_main__10105132644.log\x00\x00\x00\x00\xEMetaData:Host\x00\x00\x00\x00!host::a-a.host.domain.com\x00\x00\x00\x00\x14MetaData:Sourcetype\x00\x00\x00\x00\x1Asourcetype::bcoat_proxysg\x00\x00\x00\x00\x10_MetaData:Index\x00\x00\x00\x00\x8default\x00\x00\x00\x00\x6_meta\x00\x00\x00\x00\xE0timestartpos::0
timeendpos::14 _subsecond::.171 date_second::47 date_hour::13
date_minute::21 date_year::2011 date_month::january date_mday::5
date_wday::wednesday date_zone::0
punct::.______..._/___://..//.?=&=&=&=_-_/.._/\x00\x00\x00\x00\x6_path\x00\x00\x00\x00//var/log/folder/SG1_main__10105132644.log\x00\x00\x00\x00 disabled\x00\x00\x00\x00\x6false\x00\x00\x00\x00\x8_rcvbuf\x00\x00\x00\x00\x81572864\x00\x00\x00\x00 _charSet\x00\x00\x00\x00\x6UTF-8\x00\x00\x00\x00\x00\x00\x00\x00\x5_raw\x00\x00\x00\x4G\x00\x00\x00\xE\x00\x00\x00\x5_raw\x00\x00\x00\x1

I am trying to forward data from Splunk (forward-only) to Splunk (our primary instance). I have set up a listener on the primary instance in inputs.conf:

[tcp://34002]
connection_host = none
host = bluecoat
sourcetype = bcoat_proxysg

The forwarder monitors the log data like so:

[monitor:///var/log/folder/]
disabled = false
whitelist = SG
sourcetype = bcoat_proxysg

What is going on here?
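
(For what it's worth, markers like _linebreaker, MetaData:Source, and the \x00 bytes look like Splunk's cooked forwarder-to-indexer protocol being written out verbatim, which is what happens when a Splunk forwarder sends to a plain [tcp://] input. If that is the cause here, receiving on a splunktcp stanza instead should decode it:)

[splunktcp://34002]

(Host and sourcetype would then generally come through from the forwarder's own metadata.)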


search query - iterations of search criteria

I'm trying to search for multiple rule event hits in my historical data:

Date 1, Rule A, NumAlerts 15
Date 1, Rule B, NumAlerts 0
Date 1, Rule C, NumAlerts 15000
Date 2, Rule A, NumAlerts 16000
Date 2, Rule B, NumAlerts 16
Date 3, Rule C, NumAlerts 1

How would I structure a query so that, for any given date range (e.g. Last 3 days), I get:

Rule A - 16015
Rule B - 16
Rule C - 15001
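
(Assuming Rule and NumAlerts are extracted fields, a sketch:)

... | stats sum(NumAlerts) AS TotalAlerts by Rule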

Inconsistent Predict results

Hi

When I compare the dashboard results for these two simultaneously executed searches below:

(i) malware in last 60 minutes

(ii) malware in last 4 hours

and view the count of occurrences for the same date/timestamp, the occurrence counts are reported very differently, as follows:

(i) malware in last 60 minutes -> count=49

(ii) malware in last 4 hours -> count=106

(Screenshots attached: one for the 4-hour view, one for the 60-minute view.)

Why this discrepancy?

REX SED help: need to remove namespaces from an XML field

Hi,

I have an XML field which holds values like the below. It contains namespace prefixes on each element, which I want to remove:

...message="<h:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/">
<h:Header>
<h:creationTimestamp>2013-12-09T16:58:57.2450018+05:30</h:creationTimestamp>
<h:applicationId>XYZ</h:applicationId>
<h:hostName>Myhost</h:hostName>
</h:Header>
</h:Envelope>"

Obviously there could be more/different namespace prefixes (not just "h:") in the logs, so I can't hardcode the value to replace. I believe rex in SED mode is the appropriate method for my requirement.

I am very new to regex, so I am not sure how to start with the rex SED command. Could anyone give me some direction or an example of how to go about it?
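
(A sketch of the kind of expression that might work — it strips a word prefix ending in ':' from opening and closing tags, while leaving attributes such as xmlns:s alone since they don't directly follow '<':)

... | rex mode=sed field=message "s/<(\/?)\w+:/<\1/g"

On the sample above, <h:Header> becomes <Header> and </h:Envelope> becomes </Envelope>.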

Can I run splunk on btrfs?

Hello,

I just downloaded splunk today to try it out on a few of our servers, but found out very quickly that it doesn't support btrfs:

Filesystem type is not supported: buf.f_type = 0x9123683e
  1. Why does splunk care about the file system anyway?
  2. Is there a way to "force" btrfs support, maybe with reduced functionality?
  3. Is official support for btrfs planned?

The output of locktest looks like this:

~/splunk]% bin/locktest                                 
Could not create a lock in the SPLUNK_DB directory.
Filesystem type is not supported: buf.f_type = 0x9123683e
If supporting this filesystem type is important to you, please file an Enhancement Request with Splunk Support with the fs info number listed.

~/splunk]% ls $SPLUNK_DB 
audit/  authDb/  blockSignature/  defaultdb/  fishbucket/  hashDb/  historydb/  _internaldb/  sample/  summarydb/  test.ijKHJ9  test.R0jT0h  test.T65SU0

Output of strace:

~/splunk]% strace bin/locktest
execve("bin/locktest", ["bin/locktest"], [/* 32 vars */]) = 0
brk(0)                                  = 0x245b000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f8ad5120000
access("/etc/ld.so.preload", R_OK)      = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY)      = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=27312, ...}) = 0
mmap(NULL, 27312, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7f8ad5119000
close(3)                                = 0
open("/lib64/libc.so.6", O_RDONLY)      = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0p\355\1\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=1832712, ...}) = 0
mmap(NULL, 3664040, PROT_READ|PROT_EXEC, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7f8ad4b84000
mprotect(0x7f8ad4cf9000, 2097152, PROT_NONE) = 0
mmap(0x7f8ad4ef9000, 20480, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_DENYWRITE, 3, 0x175000) = 0x7f8ad4ef9000
mmap(0x7f8ad4efe000, 18600, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_FIXED|MAP_ANONYMOUS, -1, 0) = 0x7f8ad4efe000
close(3)                                = 0
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f8ad5118000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f8ad5117000
mmap(NULL, 4096, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7f8ad5116000
arch_prctl(ARCH_SET_FS, 0x7f8ad5117700) = 0
mprotect(0x7f8ad4ef9000, 16384, PROT_READ) = 0
mprotect(0x7f8ad5121000, 4096, PROT_READ) = 0
munmap(0x7f8ad5119000, 27312)           = 0
stat("/home/xx/splunk/var/lib/splunk", {st_mode=S_IFDIR|0711, st_size=258, ...}) = 0
umask(0777)                             = 022
umask(066)                              = 0777
gettimeofday({1290061806, 661323}, NULL) = 0
getpid()                                = 15124
open("/home/xx/splunk/var/lib/splunk/test.dHc3Nt", O_RDWR|O_CREAT|O_EXCL, 0600) = 3
statfs("/home/xx/splunk/var/lib/splunk/test.dHc3Nt", {f_type=0x9123683e,     f_bsize=4096, f_blocks=2228224, f_bfree=1430113, f_bavail=1128033, f_files=0, f_ffree=0, f_fsid={196237592, 245698777}, f_namelen=255, f_frsize=4096}) = 0
statfs("/home/xx/splunk/var/lib/splunk/test.dHc3Nt", {f_type=0x9123683e, f_bsize=4096, f_blocks=2228224, f_bfree=1430113, f_bavail=1128033, f_files=0, f_ffree=0, f_fsid={196237592, 245698777}, f_namelen=255, f_frsize=4096}) = 0
statfs("/home/xx/splunk/var/lib/splunk", {f_type=0x9123683e, f_bsize=4096, f_blocks=2228224, f_bfree=1430113, f_bavail=1128033, f_files=0, f_ffree=0, f_fsid={196237592, 245698777}, f_namelen=255, f_frsize=4096}) = 0
write(2, "Could not create a lock in the S"..., 52Could not create a lock in the SPLUNK_DB directory.) = 52
statfs("/home/xx/splunk/var/lib/splunk/test.dHc3Nt", {f_type=0x9123683e, f_bsize=4096, f_blocks=2228224, f_bfree=1430113, f_bavail=1128033, f_files=0, f_ffree=0, f_fsid={196237592, 245698777}, f_namelen=255, f_frsize=4096}) = 0
statfs("/home/xx/splunk/var/lib/splunk/test.dHc3Nt", {f_type=0x9123683e, f_bsize=4096, f_blocks=2228224, f_bfree=1430113, f_bavail=1128033, f_files=0, f_ffree=0, f_fsid={196237592, 245698777}, f_namelen=255, f_frsize=4096}) = 0
write(2, "Filesystem type is not supported"..., 201Filesystem type is not supported: buf.f_type = 0x9123683e
If supporting this filesystem type is important to you, please file an Enhancement Request with Splunk Support with the fs info number listed.) = 201
exit_group(9)    

How to track a specific user login and logoff the past 30 days

Please excuse my lack of knowledge of Splunk, but I need to track a user by login/logoff for the past 30 days. I looked through some of the existing answers but can't seem to get this to work. I appreciate your help!
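
(A sketch assuming Windows Security logs are indexed — EventCode 4624 is a logon, 4634 a logoff; the index, sourcetype, and username below are placeholders:)

index=wineventlog sourcetype="WinEventLog:Security" user=jsmith (EventCode=4624 OR EventCode=4634) earliest=-30d
| eval action=if(EventCode=4624, "login", "logoff")
| table _time, user, action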
