Channel: Latest Questions on Splunk Answers

If you specify sid (job id), you cannot also specify q or s (saved search name)


Custom applications created in Splunk 4.3 or 5.0 with application views containing dashboard panels whose charts offer an "Open in Search" icon produce a 500 error when that icon is clicked after an upgrade to Splunk 6.0.



How to install *Nix app


Hi gang, I have been trying to install the newer Splunk App for Unix and Linux (5.0.0) on my Splunk receivers. I tried "upgrading" the *Nix 4.6 app and also tried installing 5.0.0 by itself. The download file from Splunk is a zip file; I was expecting a .tgz file. When I try to install or upgrade from the Splunk GUI, it just gives me an app named "etc", and a directory called etc gets loaded under /opt/splunk/etc/apps. I am not sure how to get this to work. I have *Nix 4.6 working fine and the Splunk_TA_nix running fine on my forwarders.

I am running Splunk 6.0 on my receivers and forwarders.
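
For reference, a minimal sketch of installing the app manually from the command line; the archive and app directory names below are assumptions, so inspect the zip first to see what it actually contains:

cd /tmp
unzip splunk-app-for-unix-and-linux_500.zip   # download file name is an assumption
ls                                            # confirm the real app directory name inside the zip
cp -r splunk_app_for_nix /opt/splunk/etc/apps/
/opt/splunk/bin/splunk restart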

Search works manually but not in dashboard


Below is a search I am using in a dashboard in a HiddenSearch module:

search index=techsecu_summary source="Top-Internet-connection-permitted" | top asa_srcip, asa_dstip, asa_dstport | eval Connection="(" . asa_srcip . ", " . asa_dstip . ", " . asa_dstport . ")" | fields Connection, count, percent

The dashboard shows "No results found."

When I hit "Inspect", I get a message like this:

This search has completed and found 11,549,745 matching events. However, the transforming commands in the highlighted portion of the following search:

[the search string shown above, with everything after the first | highlighted]

over the time range:

[12/8/13 12:00:00.000 AM – 12/13/13 11:10:30.000 AM]

generated no results.

But if I copy the search string to the "search" app and run it over the same time period (Week to date), I do get results.

Looks like I am missing something really simple, but I am not able to see it. Your insights are much appreciated.
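
For reference, a minimal sketch of how the search and an explicit time range can be declared in an advanced XML HiddenSearch module; the layoutPanel name is a placeholder, and if the asa_* field extractions are scoped to another app, sharing them globally is one thing to check:

<module name="HiddenSearch" layoutPanel="panel_row1_col1" autoRun="True">
  <!-- layoutPanel is a placeholder; adjust to the view's layout -->
  <param name="search">
    index=techsecu_summary source="Top-Internet-connection-permitted"
    | top asa_srcip, asa_dstip, asa_dstport
    | eval Connection="(" . asa_srcip . ", " . asa_dstip . ", " . asa_dstport . ")"
    | fields Connection, count, percent
  </param>
  <param name="earliest">@w0</param>
  <!-- child modules (e.g. a chart or SimpleResultsTable) go here -->
</module>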

Minimum permissions required for using the HTTP simple receiver


What are the minimum permissions required to add data to Splunk using the HTTP simple receiver (http://docs.splunk.com/Documentation/Splunk/latest/RESTAPI/RESTinput#receivers.2Fsimple)?

The example shows the admin user. I created a test user with a role of user and then changed the role to power user, but both return insufficient permissions.

I messed around with a custom user role, adding and removing capabilities, but couldn't arrive at the right permission. Is there a way to create a user not in the admin role with some minimum set of permissions to add data via the simple HTTP receiver?

my test attempt is below:

curl -k -u test:test "https://localhost:8089/services/receivers/simple?source=www&sourcetype=web_event" -d "Sun Jul 10 15:56:02 PDT 2011 User vishalp logged in successfully."

<?xml version="1.0" encoding="UTF-8"?>
<response>
  <messages>
    <msg type="WARN">insufficient permission to access this resource</msg>
  </messages>
</response>
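
For reference, a sketch of a custom role in authorize.conf; which capability actually gates receivers/simple is exactly the open question here, so the edit_tcp line below is only a guess to verify, not a documented answer:

# authorize.conf -- hypothetical role; the required capability is an assumption
[role_http_receiver]
importRoles = user
edit_tcp = enabled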

Streamed search execute failed because: User '' could not act as: XXX


Hey, all my users except admin are getting this error: Streamed search execute failed because: User '' could not act as: XXX

With XXX being the user in question. I've checked all the permissions and even tried changing some, but all the users in question have full rights to run searches. I can't see anything in the error history that indicates why my users can't execute searches. Really keen to get this sorted, as the product is essentially unusable right now.

Thanks!

Who do I contact for help with my Splunk license?


Who do I contact for help with my Splunk license? We have apparently exceeded our licensed volume one too many times.

Error while validating databases


The server is a fresh installation of Fedora 19 x86_64, it is a completely minimal install with nothing else really added other than vim and git. I've set SELinux to permissive, and my firewall is still blocking all incoming traffic other than SSH but I did test with it turned off and the issue remains. After installing splunk-6.0.something I ran

cd /opt/splunk/bin/
./splunk --accept-license --answer-yes

and I get

Splunk> Be an IT superhero. Go home early.

Checking prerequisites...
    Checking http port [8000]: open
    Checking mgmt port [8089]: open
    Checking configuration...  Done.
        Creating: /opt/splunk/var/run/splunk
        Creating: /opt/splunk/var/run/splunk/appserver/i18n
        Creating: /opt/splunk/var/run/splunk/appserver/modules/static/css
        Creating: /opt/splunk/var/run/splunk/upload
        Creating: /opt/splunk/var/spool/splunk
        Creating: /opt/splunk/var/spool/dirmoncache
        Creating: /opt/splunk/var/lib/splunk/authDb
        Creating: /opt/splunk/var/lib/splunk/hashDb
    Checking critical directories...    Done
    Checking indexes...
homePath='/opt/splunk/var/lib/splunk/audit/db' of index=_audit on unusable filesystem.
Validating databases (splunkd validatedb) failed with code '1'.  If you cannot resolve the issue(s) above after consulting documentation, please file a case online at http://www.splunk.com/page/submit_issue

I'm not really sure where it gets the idea that the filesystem is unusable; I did, after all, just install it to the same location that it's writing to. I did not, however, set up a new splunk user account, but I don't think I've ever done that in the past with previous versions.
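
For reference, one workaround sometimes suggested when splunkd flags an unrecognized filesystem type as unusable is to relax the file-locking check in splunk-launch.conf; treat this as an assumption to verify against the docs for your version:

# $SPLUNK_HOME/etc/splunk-launch.conf
OPTIMISTIC_ABOUT_FILE_LOCKING = 1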

Thanks.

Client CORS proxy problem with express.js and Javascript SDK


The simple browser UI examples that work in the JavaScript SDK (using node sdkdo runserver) don't work in my express/node project, because I am not handling the proxy properly to get around the cross-origin resource sharing limitation. I have tried to do so, but haven't gotten it to work. In my app.js file I added

var request = require('request'); 
var splunkjs = require('splunk-sdk');

and then above my express app.get(...) routes, I added:

app.all('/proxy/*', function(req, res) {
    var error = {d: { __messages: [{ type: "ERROR", text: "Proxy Error", code: "PROXY"}] }};
    var writeError = function() {
        res.writeHead(500, {});
        res.write(JSON.stringify(error));
        res.end();
    };
    try {
        var body = "";
        req.on('data', function(data) {
            body += data.toString("utf-8");
        });
        req.on('end', function() {
            var destination = req.headers["X-ProxyDestination".toLowerCase()];
            var options = {
                url: destination,
                method: req.method,
                headers: {
                    "Content-Length": req.headers["content-length"],
                    "Content-Type": req.headers["content-type"],
                    "Authorization": req.headers["authorization"]
                },
                followAllRedirects: true,
                body: body,
                jar: false
            };
            try {
                request(options, function(err, response, data) {
                    try {
                        var statusCode = (response ? response.statusCode : 500) || 500;
                        var headers = (response ? response.headers : {}) || {};
                        res.writeHead(statusCode, headers);
                        res.write(data || JSON.stringify(err));
                        res.end();
                    }
                    catch (ex) {
                        writeError();
                    }
                });
            }
            catch (ex) {
                writeError();
            }
        });
    }
    catch (ex) {
        writeError();
    }
});

In the debugger I've verified that the app.all() function is entered, but the req.on('end') handler is never invoked. The browser sends

localhost:8888/proxy/services/auth/login?output_mode=json

but it hangs, pending, until it times out or I kill the web server.

How should I handle this?
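
One likely cause, offered as an assumption: if a body-parsing middleware (express.bodyParser() in Express 3) is registered before this route, it consumes the request stream, so the 'data' and 'end' events never fire inside the proxy handler. A minimal sketch of mounting the proxy route ahead of the body parser; proxyHandler is a placeholder name for the function shown above:

var express = require('express');
var app = express();

// Mount the raw-stream proxy BEFORE any body-parsing middleware,
// so req still emits 'data' and 'end' inside the handler.
app.all('/proxy/*', proxyHandler);  // proxyHandler = the function shown above

app.use(express.bodyParser());      // Express 3 style body parser (assumed)

app.listen(8888);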


How to force deployment server to recognize specific forwarder IP address


I apologize if this is a double post; I don't know what happened to my previous attempt. :P

In my environment the servers are configured with multiple IP addresses to add flexibility when moving services between hosts. bond0 is the host IP address, bond0.1 is the "service volume".

My forwarder is getting recognized from the bond0 ip address. I want it to be recognized from the bond0.1 address. I've tried the following change to the $SPLUNK_HOME/etc/splunk-launch.conf but this did not work:

SPLUNK_BINDIP={volume_IP_address}

-Thank you for any insight you can provide.
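
For reference, a sketch of the other knob that controls how a deployment client identifies itself to the deployment server: clientName in deploymentclient.conf, which serverclass whitelists can match on instead of the connecting IP (the values below are placeholders):

# $SPLUNK_HOME/etc/system/local/deploymentclient.conf
[deployment-client]
clientName = myhost-service-volume

[target-broker:deploymentServer]
targetUri = deploymentserver.example.com:8089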

Joining across a field matrix


Hi - I am trying to wrap my head around the following search - looking at the join, appendcols, and map commands to get the job done, but I am at a loss.

I have about 3000 IP address pairs (endpoints of IP connections) that I want to join to network device logs. I would like to find the earliest match in the device logs. The join needs to happen across a matrix of fields to capture all events.

IP address pairs:

Time,Id,HostA,HostB
12/14/2013 05:01:00,1,1.1.1.1,2.2.2.1
12/14/2013 06:02:00,2,1.1.1.2,2.2.2.2
12/14/2013 07:03:00,3,2.2.2.3,1.1.1.3
12/14/2013 08:03:00,4,1.1.1.4,2.2.2.4
...

Fields from the network device:

TimeSeen,LocalIP,RemoteIP,OtherFields
12/14/2013 05:01:11,1.1.1.1,2.2.2.1,foo
12/14/2013 05:02:22,2.2.2.2,1.1.1.2,bar
12/14/2013 05:03:33,1.1.1.3,2.2.2.3,foobar
12/14/2013 05:01:05,2.2.2.1,1.1.1.1,bar
...

How could I join both data sources across the fields with IP data? The logic would need to compare two different field sets, i.e. (HostA=LocalIP AND HostB=RemoteIP) OR (HostA=RemoteIP AND HostB=LocalIP).

And output a single event for each of the IP address pairs with the earliest event found in the network device logs, with fields from both sources? E.g.:

Time,Id,HostA,HostB,TimeSeen,LocalIP,RemoteIP,OtherFields
"12/14/2013 05:01:00",1,1.1.1.1,2.2.2.1,"12/14/2013 05:01:05",2.2.2.1,1.1.1.1,bar

Maybe I need to make a multi-value field out of each IP pair and join on that.

I am also unclear on how to find the earliest event based on 3 fields (TimeSeen, LocalIP, RemoteIP) and then output all fields of that event; stats earliest() only seems to accept one field.
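
A sketch of the normalization idea, assuming the device logs live under the placeholder index/sourcetype below and that _time reflects TimeSeen: build an order-independent key from the two IPs, sort ascending by time, and let dedup keep the whole earliest event per key, which sidesteps the one-field limit of stats earliest(). The 3000 pairs could get the same eval (e.g. after an inputlookup) and be joined on pairkey:

index=network_devices sourcetype=device_logs
| eval pairkey=mvjoin(mvsort(mvappend(LocalIP, RemoteIP)), ",")
| sort 0 +_time
| dedup pairkey
| table TimeSeen, LocalIP, RemoteIP, OtherFields, pairkey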

Any help is greatly appreciated.

Thanks, Joe

Is it possible to sort or reorder a multivalue field?


Anyone have any thoughts as to how to reorder a multi-valued field? Ideally I'd like to be able to do a "sort" or in my specific use case, a "reverse" would be perfect.

Say you have the following search:

my search | stats list(myfield) as myfields by id

The list() stats operator preserves all values of "myfield" in the events and preserves order, which is what I want. However, I'd really like to see the values of "myfield" in time order (not reverse time order.) I know I can stick a | reverse in there, but I was trying to figure out if there was a better approach that only modifies the "myfields" field, and doesn't require screwing with event order.

(In my non-trivial version of this search, I'm using a transaction command as well, and it has issues when you start messing with time-order. That's just one example of why re-ordering the events is not ideal.)
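
For reference, a sketch that touches only the multivalue field, assuming lexicographic order is acceptable (mvsort sorts values as strings, so strict time order may still require reordering the events, as with reverse):

my search
| stats list(myfield) as myfields by id
| eval myfields=mvsort(myfields)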

Can I have the TA for Windows auto-install when pushing the Windows universal forwarder?


When you manually run the Windows universal forwarder .msi installer on a Windows workstation, part of the setup process asks you to install the Technology Add-on for Windows (built into the forwarder installer) at the same time. After the manual install completes, the forwarder starts sending Windows event log data to the indexer specified during the install. When I push the installer using command-line options, the TA does not get installed, and I can't find any documented option to make it install. The forwarder checks in with the indexer specified on the command line, but it never starts sending Windows event log data, even though that was also specified on the install command line.

Is there a way to have the universal forwarder install also install the TA like it does when you manually run the installer, or do I have to push the TA as a separate process?
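
For reference, a sketch of a silent install with the Windows event log inputs enabled via MSI flags; the flag names follow the universal forwarder install docs but should be verified for your installer version, and the file and indexer names are placeholders:

msiexec.exe /i splunkuniversalforwarder-6.0-x64-release.msi RECEIVING_INDEXER="indexer.example.com:9997" WINEVENTLOG_APP_ENABLE=1 WINEVENTLOG_SEC_ENABLE=1 WINEVENTLOG_SYS_ENABLE=1 AGREETOLICENSE=Yes /quiet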

split function in calculated fields


When I try to save calculated fields that contain the split function in Splunk Web, I get the message "Encountered the following error while trying to save: In handler 'props-eval': Bad function". Why can't I use this function in calculated fields? There is no mention of this limitation in the Splunk documentation. Examples of eval expressions that do not work:

split(anyfield,";")

or

split("x:x",":")

But in conjunction with eval in search, these work fine.

Splunk Version: 6.0
Splunk Build: 182037
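
For reference, a hedged workaround sketch that produces the multivalue split with a search-time report extraction instead of a calculated field; the sourcetype and field names are placeholders:

# transforms.conf
[split_anyfield]
SOURCE_KEY = anyfield
REGEX = ([^;]+)
FORMAT = anyfield_mv::$1
MV_ADD = true

# props.conf
[your_sourcetype]
REPORT-split_anyfield = split_anyfield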

How do I find the number of elements in a comma-delimited list?


Given the following log entry, how would I find the number of host entries and assign it to a field?

Thanks!

FINEST|1137/0|Service KOALA-MANGOES|13-12-14 00:13:35|INFO: Available nodes: [host :htti://10.0.46.107:5555 time out : 30000, host :htti://10.0.46.103:5555 time out : 30000, host :htti://10.0.46.106:5555 time out : 30000, host :htti://10.0.49.52:5555 time out : 30000, host :htti://10.0.49.176:5555 time out : 30000, host :htti://10.0.49.53:5555 time out : 30000, host :htti://10.0.39.21:5555 time out : 30000, host :htti://10.0.39.17:5555 time out : 30000, host :htti://10.0.39.19:5555 time out : 30000, host :htti://10.0.49.51:5555 time out : 30000, host :htti://10.0.39.20:5555 time out : 30000, host :htti://10.0.33.62:5555 time out : 30000, host :htti://10.0.39.18:5555 time out : 30000, host :htti://10.0.46.105:5555 time out : 30000, host :htti://10.0.50.102:5555 time out : 30000, host :htti://10.0.46.104:5555 time out : 30000, host :htti://10.0.49.54:5555 time out : 30000]
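
A sketch of one way to count them, assuming each node is introduced by the literal "host :" as in the sample (which contains 17 entries): extract all matches into a multivalue field, then count it with mvcount:

... | rex max_match=0 "host :(?<node>\S+)" | eval host_count=mvcount(node)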

MaxMind geo database with Splunk 6


Would it be possible to use the MaxMind IPv4 database as a substitute for Splunk 6's IPv4 database for the maps function?
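
For reference, a sketch of pointing iplocation at a custom database file via limits.conf; whether this setting exists in 6.0 and accepts MaxMind's format is an assumption to verify:

# limits.conf -- assumed knob; verify against the limits.conf spec for your version
[iplocation]
db_path = /opt/splunk/share/GeoLite2-City.mmdb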


Can I create a saved search with the PHP SDK like with the JavaScript SDK?


Hi, can I create saved searches with the PHP SDK like with the JavaScript SDK?
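
For reference, a sketch following the collection pattern in the splunk-sdk-php examples; treat the class and method names as assumptions to check against the SDK you have installed, and the connection details as placeholders:

<?php
require_once 'Splunk.php';

// Connection details are placeholders
$service = new Splunk_Service(array(
    'host' => 'localhost',
    'port' => 8089,
    'username' => 'admin',
    'password' => 'changeme',
));
$service->login();

// Assumed accessor, mirroring the JS SDK's service.savedSearches().create(...)
$service->getSavedSearches()->create('my_saved_search', array(
    'search' => 'index=_internal | head 10',
));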

Windows FileTime timestamp to human-readable


I have tried a lot to convert a Windows FILETIME timestamp (support.microsoft.com/kb/188768) to human-readable form using TIME_FORMAT, but was not able to. One sample timestamp is 130308696850032106; this is supposed to be Saturday, December 7, 2013 1:01:25 AM. I get this when choosing 'filetime' as the input format at silisoftware.com/tools/date.php.

Can anyone give me any hints/pointers as to what TIME_FORMAT should be set to? I tried %s%9N, but it renders as something else. I have also tried convert mstime and ctime, but that doesn't help.
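
For reference, a sketch of the underlying arithmetic: FILETIME counts 100-nanosecond intervals since 1601-01-01 UTC, so dividing by 10^7 and subtracting 11644473600 (the seconds between 1601 and 1970) gives Unix epoch time; the sample above works out to epoch 1386396085, i.e. 2013-12-07 06:01:25 UTC (1:01:25 AM US Eastern). TIME_FORMAT alone cannot express that arithmetic, but a search-time eval can (the filetime field name is a placeholder):

... | eval epoch = filetime/10000000 - 11644473600 | eval readable = strftime(epoch, "%A, %B %d, %Y %I:%M:%S %p")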

CSV imports, headers as fields?


All,

I have been following this documentation: http://docs.splunk.com/Documentation/Splunk/6.0/Data/Extractfieldsfromfileheadersatindextime

No combination of props.conf settings appears to be working. Here is the data template of the file I am attempting to bring in:

Header1|Header2|Header3|Header4    
DataA|DataB|DataC|DataD

My assumed props.conf:

[SourceTypeName]
FIELD_DELIMITER=|
NO_BINARY_CHECK=1
SHOULD_LINEMERGE=false

Any ideas? I can edit the props.conf during the data add "wizard" and it updates the local conf file appropriately; however, each event is still shown as a single line without field information:

Can't post images so replace the 'xx' with 'tt'; hxxp://i.imgur.com/vAKOJFY.png
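
For reference, the documentation page linked above keys header extraction off INDEXED_EXTRACTIONS, which the assumed stanza is missing; a sketch for pipe-separated values (note this stanza needs to live where the parsing happens, e.g. on the forwarder in a universal forwarder setup):

[SourceTypeName]
INDEXED_EXTRACTIONS = psv
FIELD_DELIMITER = |
NO_BINARY_CHECK = 1
SHOULD_LINEMERGE = false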

Where to find the Splunk Data Visualizations Manual for Splunk 5.0.3


Where can I find the Splunk Data Visualizations Manual for Splunk 5.0.3?

Thanks,

time.sleep not working in modular input?


I modified the helloworld example from the Python modular input examples to poll a website and calculate the latency.

I don't understand why it is not working when I add a time.sleep; without it, it works!

import time
import urllib2

# get_input_config() and print_xml_single_instance_mode() come from the
# modular input example scaffolding
def do_run():
    config = get_input_config()
    # TODO: poll for data and print output to STDOUT
    while True:
        start_timer = time.time()
        resp = urllib2.urlopen('http://www.google.com')
        content = resp.read()
        latency = time.time() - start_timer
        print_xml_single_instance_mode("time=" + str(start_timer) + " latency=" + str(latency))
        #assert (resp.code == 200), 'Bad HTTP Response'
        #assert ('Example Web Page' in content), 'Failed Content Verification'
        time.sleep(float(60))
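
One hedged guess at the cause: without the sleep, the loop emits output fast enough to fill and flush the stdout buffer, but with the sleep each small event just sits in the block-buffered pipe, so splunkd sees nothing for a long time. A sketch of flushing after each emit:

import sys

# ... inside the polling loop, right after printing the event:
print_xml_single_instance_mode("time=" + str(start_timer) + " latency=" + str(latency))
sys.stdout.flush()  # push buffered output to splunkd before sleeping
time.sleep(float(60))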

