sumologic / sumologic-aws-lambda

A collection of Lambda functions to collect data from CloudWatch, Kinesis, VPC Flow Logs, S3, Security Hub, and AWS Inspector

License: Other

JavaScript 22.40% Python 71.62% Shell 5.98%
kinesis kinesis-firehose cloudwatch-logs s3 vpc-flow-logs aws-lambda cloudformation security-hub

sumologic-aws-lambda's Issues

Using _sumo_metadata with non-JSON logs

Hello, we'd like to use _sumo_metadata, but we currently send our logs as text only. Would it be possible to send a message like this:

{
  _sumo_metadata: {...},
  message: 'stuff'
}

and have the log appear in Sumo as just 'stuff', as plain text, instead of {message: 'stuff'}? If not, would you be open to adding a flag and some code to facilitate it?
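A minimal sketch of what such an unwrapping step could look like (this is a hypothetical helper, not existing code in this repo; the field names mirror the envelope above):

```javascript
// Hypothetical: if the payload is a JSON envelope carrying _sumo_metadata,
// split it into metadata and the raw text body; otherwise pass text through.
function unwrapEnvelope(raw) {
  let parsed;
  try {
    parsed = JSON.parse(raw);
  } catch (e) {
    return { metadata: null, body: raw }; // plain text, pass through as-is
  }
  if (parsed && parsed._sumo_metadata !== undefined &&
      typeof parsed.message === 'string') {
    return { metadata: parsed._sumo_metadata, body: parsed.message };
  }
  return { metadata: null, body: raw };
}
```

The function sends only `body` to the collector, so 'stuff' would arrive as bare text while the metadata is used for source fields.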

Thanks!

Estimate for when the new release can be expected

Hi guys,

Any estimate of when the new release can be expected? We are automating our build process to use your CloudWatch Logs Lambda function and would like to rely on a tag number rather than just copying from master.

Thanks

AWS Lambda: Node.js v4.x is now EOL

AWS is removing support for Node.js v4.x.

New Lambda functions using Node.js v4.x can no longer be created as of July 31st, 2018.
Existing Lambda functions will no longer accept code updates as of October 31st, 2018.

Requesting project to be updated to a newer Node.js version, preferably v8.

Will the cloudwatchlogs_lambda.js lambda support TLS 1.2

Hi,

Sumo Logic is moving its HTTP endpoints to TLS 1.2, so I wanted to ask whether any change in the cloudwatchlogs_lambda.js code is needed to accommodate that. Sorry, I am not a Node.js developer, but I am maintaining a project that uses your code. I have an idea of what change might be required to explicitly set the version, but I cannot find what the default behaviour is.

Sumo Logic will stop supporting older versions on the 20th of June this year.
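For what it's worth, one way to pin the TLS floor in Node's https client looks like this (a sketch, not the library's current code; the endpoint URL and option placement are illustrative):

```javascript
// Sketch: build https.request options that refuse anything below TLS 1.2.
// 'minVersion' requires Node >= 11.4; older runtimes would instead need
// secureProtocol: 'TLSv1_2_method'.
function buildRequestOptions(urlString) {
  const u = new URL(urlString);
  return {
    hostname: u.hostname,
    path: u.pathname + u.search,
    method: 'POST',
    minVersion: 'TLSv1.2'   // TLS option passed through to tls.connect
  };
}
// usage: https.request(buildRequestOptions(process.env.SUMO_ENDPOINT), cb)
```

Absent an explicit option, the default minimum is whatever the runtime's OpenSSL build allows, which is why pinning it explicitly is the safer route.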

Thanks!

kinesis firehose processor

I have installed the Kinesis Firehose processor but am seeing weird characters in Sumo Logic.

I followed the README at https://github.com/SumoLogic/sumologic-aws-lambda/tree/master/kinesisfirehose-processor to implement the Kinesis Firehose processor.

When collecting the data from S3 in Sumo Logic I see the following result. Please help figure out what is going on...

�5�[��0��
��+v|ޢ�ݶ���D�@q5e"��fQ����ۇy9gt��w#�O��7x� _����sY��˒�I����d���a܈<����2��!9^Cv��1�MW�H�Xӡ#����m��.c�w�&����)ߘߠ�l����GrG�m�������l��*�eY��9�P�S��BsTRjt���"�Ņ��S�ss�d���O���C"�G���wq�o�9�J!5�\HõB��R0�Ų4`�)%� �R(51�M"�MPxRr�2���?�_Vź����KZ�\̸���Բ<��:O������0ۤ�/��'�������q$��

Any way to honor the JSON Object in val.Message and not stringify it?

Confusing title, but our Lambda functions are logging valid JSON objects, not plain strings, like so:

{
    "fields": {
        "MediaPath": {
            "Path": "1466672936684.jpg"
        },
        "RequestID": "a7188dc9-6968-11a6-be87-3326b0d7d49d",
        "app": "purge"
    },
    "level": "info",
    "timestamp": "2016-08-23T19:36:39.382610474Z",
    "message": "success"
}

When it's passed over to cloudwatchlogs_lambda.js Sumo then sees this:

{
   "id":"32826273204296444651588478853452971566044640956554084352",
   "timestamp":1471980999382,
   "message":"{\"fields\":{\"MediaPath\":{\"Path\":\"1466672936684.jpg\"},\"RequestID\":\"a7188dc9-6968-11a6-be87-3326b0d7d49d\",\"app\":\"purge\"},\"level\":\"info\",\"timestamp\":\"2016-08-23T19:36:39.382610474Z\",\"message\":\"success\"}
",
  "requestID":null,
  "logStream":"2016/08/23/[$LATEST]8e1c1bde6beb44df800fb15f6444263a",
  "logGroup":"/aws/lambda/purge"
}

This makes it less than ideal to set up search queries in Sumo. Is there any way to persist the JSON object through and have "message" remain JSON?
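One approach the forwarder could take (a sketch, not current behaviour of cloudwatchlogs_lambda.js) is to try parsing the message field and forward the object when it is valid JSON:

```javascript
// Sketch: if a CloudWatch log event's message parses as JSON, replace the
// escaped string with the parsed object so Sumo sees structured fields.
function normalizeMessage(logEvent) {
  try {
    const parsed = JSON.parse(logEvent.message);
    if (parsed && typeof parsed === 'object') {
      return Object.assign({}, logEvent, { message: parsed });
    }
  } catch (e) {
    // not JSON, leave the message as plain text
  }
  return logEvent;
}
```

Non-JSON messages pass through untouched, so the change would be backward compatible for plain-text logs.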

Kinesis Function produces duplicate messages

messageList needs to be empty before each run through the record loop.

https://github.com/SumoLogic/sumologic-aws-lambda/blob/master/kinesis/node.js/k2sl_lambda.js#L139

2017-10-16T23:32:45.844Z	d3a969ca-3cdd-4639-bcc6-7f603a0affcd	Kinesis Records: 3
2017-10-16T23:32:45.890Z	d3a969ca-3cdd-4639-bcc6-7f603a0affcd	Log events: 997
2017-10-16T23:32:45.928Z	d3a969ca-3cdd-4639-bcc6-7f603a0affcd	Log events: 997
2017-10-16T23:32:45.968Z	d3a969ca-3cdd-4639-bcc6-7f603a0affcd	Log events: 997
2017-10-16T23:32:46.447Z	d3a969ca-3cdd-4639-bcc6-7f603a0affcd	messagesSent: 1 messagesErrors: 0
2017-10-16T23:32:46.803Z	d3a969ca-3cdd-4639-bcc6-7f603a0affcd	messagesSent: 1 messagesErrors: 0

To test this I've created a paragraph-sized message which gets spit out 10000 times. If the logs are smaller you don't see this error. In my test, referencing the log above, I get 2x messages after message 997 in sumo and 3x messages after 1994.

The fix is simple: declare messageList inside the loop.
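The scoping fix described above can be sketched like this (a simplified stand-in for the real k2sl_lambda.js loop, which does more per record):

```javascript
// Sketch of the fix: re-initialise the accumulator inside the per-record
// loop so messages from one record are not re-sent with the next record.
function batchRecords(records) {
  const batches = [];
  for (const record of records) {
    const messageList = [];        // declared per iteration, not outside
    for (const event of record.events) {
      messageList.push(event);
    }
    batches.push(messageList);
  }
  return batches;
}
```

With the accumulator hoisted outside the loop, batch 2 would contain batch 1's events again, which matches the duplicated counts in the log output above.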

Support for Unicode characters in logging

The data extracted from the stream to be sent to Sumo is decoded as 'ascii' rather than 'utf-8', so the Lambda fails when a log statement contains Unicode characters (see the offending code below):

var awslogsData = JSON.parse(buffer.toString('ascii'));
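The one-line fix would presumably be to decode as UTF-8 instead, which preserves multi-byte characters:

```javascript
// Suggested change: decode the payload as UTF-8, not ASCII, so multi-byte
// characters survive the round trip into JSON.parse.
function parsePayload(buffer) {
  return JSON.parse(buffer.toString('utf-8'));
}
// i.e. var awslogsData = JSON.parse(buffer.toString('utf-8'));
```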

cc @jefftrudeau

"not defined" warning

I'm getting a "final_event is not defined" warning in Lambda for the cloudwatchevents.js code.

Support filter pattern in loggroup lambda connector

Currently the filter pattern defaults to an empty string ('') and is not configurable.

async function createSubscriptionFilter(lambdaLogGroupName, destinationArn, roleArn) {
    if (destinationArn.startsWith("arn:aws:lambda")){
        var params = {
            destinationArn: destinationArn,
            filterName: 'SumoLGLBDFilter',
            filterPattern: '',
            logGroupName: lambdaLogGroupName
        };
    } else {
        var params = {
            destinationArn: destinationArn,
            filterName: 'SumoLGLBDFilter',
            filterPattern: '',
            logGroupName: lambdaLogGroupName,
            roleArn: roleArn
        };
    }

Can we add support for defining our own filter pattern so that we can selectively ingest logs?
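One way this could look (a hypothetical sketch; FILTER_PATTERN is an assumed environment-variable name, not an existing option of the connector):

```javascript
// Hypothetical: read the subscription filter pattern from the environment,
// falling back to '' (match everything) to preserve current behaviour.
// Also folds the two duplicated branches above into one params object.
function buildFilterParams(logGroupName, destinationArn, roleArn) {
  const params = {
    destinationArn: destinationArn,
    filterName: 'SumoLGLBDFilter',
    filterPattern: process.env.FILTER_PATTERN || '',
    logGroupName: logGroupName
  };
  if (!destinationArn.startsWith('arn:aws:lambda')) {
    params.roleArn = roleArn;  // only non-Lambda destinations need a role
  }
  return params;
}
```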

Thanks

First event into new logGroup won't invoke the subscribing function.

If creation of a new log group coincides with its first event, for example on the first run of a new Lambda function, the event rule that attaches the log group to the target Lambda function is only invoked after the log group has been created.
So the first event will not be captured by the target Lambda function; only subsequent events will be.

The containing application will need be updated before it can be deployed

Hi,

I'm getting the following error when trying to deploy the AWS Lambda function for "sumologic-guardduty-benchmark — version 1.0.5"

The following nested application arn:aws:serverlessrepo:us-east-1:956882708938:applications/sumologic-app-utils and semantic version 1.0.5 was recreated after the containing application arn:aws:serverlessrepo:us-east-1:956882708938:applications/sumologic-guardduty-benchmark was published. The containing application will need be updated before it can be deployed

Please advise/fix.

Regards,
Suhail.

CloudFormation stack fails

The CF stack launched by this application fails to complete, rolling back with an error that the IAM role aws-serverless-repository-CloudWatchEventFunctionR-1IATRDMOBCJVM doesn't exist

Feature Request: Retries

@g-wattz We use cloudwatchlogs_lambda.js to send CloudWatch Logs to Sumo Logic. We found that this Lambda fails intermittently with the response {"errorMessage":"errors: HTTP Return code 504"}. It looks like Sumo Logic occasionally has trouble consuming the log message.

Is there a plan to implement retries based on this error code?
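A retry wrapper with exponential backoff could look roughly like this (a sketch under stated assumptions: sendOnce stands in for the existing request logic, and the retryable flag is an illustrative way of marking 5xx/504-style failures):

```javascript
// Sketch: retry a failed send with exponential backoff, giving up after
// maxAttempts. Only errors marked retryable (e.g. 5xx) are retried.
async function sendWithRetries(sendOnce, maxAttempts = 3) {
  let lastErr;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await sendOnce();
    } catch (err) {
      lastErr = err;
      if (!err.retryable) throw err;          // 4xx etc.: fail fast
      await new Promise(r => setTimeout(r, 100 * 2 ** attempt));
    }
  }
  throw lastErr;
}
```

A jittered backoff would be kinder to the endpoint under sustained pressure, but even the fixed doubling above would absorb transient 504s.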

Thanks.

Slow Log Processing with DLQ Lambda

I'm seeing very slow processing of the CloudWatch dead letter queue by the SumoCWProcessDLQLambda.

The logs repeatedly show errors similar to this one:

2019-05-03T16:05:44.119Z bfc53687-23d5-4155-87f8-1804134eabfa TypeError: Cannot read property 'data' of undefined
    at /var/task/DLQProcessor.js:19:73
    at Response.<anonymous> (/var/task/DLQProcessor.js:41:19)
    at Request.<anonymous> (/var/task/node_modules/aws-sdk/lib/request.js:364:18)
    at Request.callListeners (/var/task/node_modules/aws-sdk/lib/sequential_executor.js:109:20)
    at Request.emit (/var/task/node_modules/aws-sdk/lib/sequential_executor.js:81:10)
    at Request.emit (/var/task/node_modules/aws-sdk/lib/request.js:683:14)
    at Request.transition (/var/task/node_modules/aws-sdk/lib/request.js:22:10)
    at AcceptorStateMachine.runTo (/var/task/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at /var/task/node_modules/aws-sdk/lib/state_machine.js:26:10
    at Request.<anonymous> (/var/task/node_modules/aws-sdk/lib/request.js:38:9)

It appears there is some data in the queue that is causing the lambda to get hung up. Previously, the queue could be emptied very quickly.

Lambda connector - create subscription fails due to throttling

When there are a large number of existing log groups and USE_EXISTING_LOG_GROUPS=True, the putSubscriptionFilter AWS API call sometimes fails due to hitting throttling limits. We see errors similar to the one below:

2020-01-21T12:23:02.436Z	a30ab1f3-8cfe-4f5c-84e4-0d404da581cc	INFO	Error in subscribing /aws/lambda/dev-somefunction { LimitExceededException: Resource limit exceeded.
    at Request.extractError (/var/runtime/node_modules/aws-sdk/lib/protocol/json.js:51:27)
    at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
    at Request.emit (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
    at Request.emit (/var/runtime/node_modules/aws-sdk/lib/request.js:683:14)
    at Request.transition (/var/runtime/node_modules/aws-sdk/lib/request.js:22:10)
    at AcceptorStateMachine.runTo (/var/runtime/node_modules/aws-sdk/lib/state_machine.js:14:12)
    at /var/runtime/node_modules/aws-sdk/lib/state_machine.js:26:10
    at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:38:9)
    at Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:685:12)
    at Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:116:18)
  message: 'Resource limit exceeded.',
  code: 'LimitExceededException',
  time: 2020-01-21T12:23:02.377Z,
  requestId: 'a2066014-38a4-43b3-934e-0309185b910f',
  statusCode: 400,
  retryable: false,
  retryDelay: 38.911558105243586 }

Retrying the invocation usually hits the throttle limits in the same places, so it doesn't help resolve the situation.
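One mitigation the connector could adopt (a hypothetical sketch; subscribeAll and the injected subscribe callback are illustrative names, not existing code) is pacing the calls and collecting throttled groups for a follow-up pass:

```javascript
// Sketch: subscribe log groups sequentially with a small delay between
// PutSubscriptionFilter calls, and return the groups that were throttled
// so a second pass (or manual retry) can handle just those.
async function subscribeAll(logGroups, subscribe, delayMs = 200) {
  const throttled = [];
  for (const group of logGroups) {
    try {
      await subscribe(group);
    } catch (err) {
      if (err.code === 'LimitExceededException') throttled.push(group);
      else throw err;                       // unrelated failures bubble up
    }
    await new Promise(r => setTimeout(r, delayMs));
  }
  return throttled;
}
```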

Security Hub Collector and Forwarder are both missing from the Serverless Application Repository

I'm attempting to set up the collector and forwarder for AWS Security Hub. The documentation (https://help.sumologic.com/07Sumo-Logic-Apps/01Amazon_and_AWS/AWS_Security_Hub/1-Ingest-findings-into-AWS-Security_Hub and https://help.sumologic.com/07Sumo-Logic-Apps/01Amazon_and_AWS/AWS_Security_Hub/2-Collect-findings-for-the-AWS-Security-Hub-App) sends you to the Serverless Application Repository for both of those apps, but neither shows up when I search for "sumo". I checked multiple regions to make sure it wasn't a region issue, but got the same results.


cloudwatch_lambda.js uses old, deprecated `context` object

The Problem

In the current cloudwatch_lambda.js (pinned version link), the Lambda is marked as complete by calling context.succeed.

However, this is very old syntax; it was deprecated with the 0.11.x runtime and backfilled (poorly) into the newer runtimes.

Since the docs now recommend running Node 8.10, this function should be rewritten using the more understandable async/await syntax. Here are the docs stating that it's available in the 8.10 runtime.

Rewriting the function with async/await lets the runtime consider the invocation complete when the promise returned by the exports.handler method resolves, instead of relying on context.succeed.

How I Discovered This

Because we're defining this lambda in our Infrastructure-as-Code repository (using Terraform), we encrypt the SUMO_ENDPOINT URL since we don't want anyone other than the lambda to be able to see it (a malicious actor could POST logs to the endpoint and run up our bill).

To do this, I made exports.handler asynchronous and decrypted the endpoint with KMS before parsing the message and sending it to Sumo.

const AWS = require('aws-sdk');
async function getSumoUrl() {
  var kms = new AWS.KMS();
  var kmsDecryptParams = {
    CiphertextBlob: Buffer.from(process.env.ENCRYPTED_SUMO_ENDPOINT, 'base64')
  };
  var data = await kms.decrypt(kmsDecryptParams).promise();
  var decryptedSumoEndpoint = data.Plaintext.toString('utf-8');
 
  return decryptedSumoEndpoint;
}

// existing code unchanged

// NOTE that this is async now
exports.handler = async function (event, context) {
  SumoURL = await getSumoUrl();

  // existing code unchanged
}

However, this did not work because of the reliance on the old, unsupported context.succeed call. It would have been a whole lot easier if the rest of the function had used the newer Node 8 async/await syntax to start with.

Conclusion / Suggestions

It seems this Lambda was written a long time ago, based on the APIs it uses. Node has come a long way, and rewriting it to Node 8 standards would make it more readable and extensible for users.

loggroup-lambda-connector: connect existing log groups?

Hello!

I was wondering if there's a good way to subscribe existing log groups in addition to auto-subscribing new log groups that match the regex?

Maybe even just an example of how to invoke the lambda manually to subscribe if that's possible?
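One possible shape for a one-off backfill script (a hedged sketch, not part of the connector: the client is injected so the helper can be exercised without AWS, and the filter name mirrors the one the connector uses):

```javascript
// Sketch: page through existing log groups and subscribe those whose
// names match a regex. 'cwl' is an AWS.CloudWatchLogs client (or a stub).
async function subscribeExisting(cwl, pattern, destinationArn) {
  const subscribed = [];
  let nextToken;
  do {
    const page = await cwl.describeLogGroups({ nextToken }).promise();
    for (const group of page.logGroups) {
      if (pattern.test(group.logGroupName)) {
        await cwl.putSubscriptionFilter({
          destinationArn: destinationArn,
          filterName: 'SumoLGLBDFilter',
          filterPattern: '',
          logGroupName: group.logGroupName
        }).promise();
        subscribed.push(group.logGroupName);
      }
    }
    nextToken = page.nextToken;   // keep paging until exhausted
  } while (nextToken);
  return subscribed;
}
```

In real use this would need the same throttling care as the connector itself, since PutSubscriptionFilter is rate limited.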

nodejs12 support?

Hello,

Are there plans to upgrade the nodejs runtime from 10 -> 12?

Update description of logStream prefix

"LogStreamPrefix": {
    "Type": "String",
    "Description": "(Optional) Enter comma separated list of logStream name prefixes to filter by logStream. Please note this is seperate from a logGroup. This is used to only send certain logStreams within a cloudwatch logGroup(s). LogGroups still need to be subscribed to the created Lambda funciton, regardless of what is input for this value.",
    "Default": ""
}

How to Set Up Sumo Logic App for Amazon VPC Flow Logs

Hi, for lack of a better place to ask this, I came across this GitHub repo while googling. How in the world do you set up the "Sumo Logic App for Amazon VPC Flow Logs" application, referenced in the Sumo Logic docs here: https://service.sumologic.com/help/Default.htm#Amazon_VPC_Flow_Logs_App.htm%3FTocPath%3DApps%7CSumo%2520Logic%2520App%2520for%2520Amazon%2520VPC%2520Flow%2520Logs%7C_____0

Is that what the Lambda streaming setup described in this repo is for? Any help or documentation would be much appreciated!

Regex is not working

This doesn't seem to be working:

// Regex used to detect logs coming from lambda functions.
// The regex will parse out the requestID and strip the timestamp
// Example: 2016-11-10T23:11:54.523Z	108af3bb-a79b-11e6-8bd7-91c363cc05d9    some message
var consoleFormatRegex = /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}.\d{3}Z\t(\w+?-\w+?-\w+?-\w+?-\w+)\t/;

A simple message like this: "START RequestId: be746bac-efd6-11e7-9740-0f2037ef62c0 Version: $LATEST\n"; is returning null.
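For what it's worth, a quick check suggests the null result for that line is expected rather than a regex bug: the pattern only matches lines that begin with an ISO-8601 timestamp followed by a tab, and runtime lines such as "START RequestId: ..." carry no such prefix (the example lines below are illustrative):

```javascript
// Same pattern as above (with the '.' before the milliseconds escaped):
// it requires a leading timestamp + tab before the request ID.
const consoleFormatRegex =
  /^\d{4}-\d{2}-\d{2}T\d{2}:\d{2}:\d{2}\.\d{3}Z\t(\w+?-\w+?-\w+?-\w+?-\w+)\t/;

// A console.log line, as CloudWatch records it: matches.
const consoleLine =
  '2016-11-10T23:11:54.523Z\t108af3bb-a79b-11e6-8bd7-91c363cc05d9\tsome message';

// A runtime line: no timestamp prefix, so no match.
const startLine =
  'START RequestId: be746bac-efd6-11e7-9740-0f2037ef62c0 Version: $LATEST\n';
```

So handling START/END/REPORT lines would need a separate pattern rather than a change to this one.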

No data displaying in SumoLogic

Running "cloudwatchlogs_lambda.js" as-is wasn't displaying any logs when searching Sumo Logic. I added "toString" to the response handler on line 32, e.g.:
res.on('data', function(chunk) { body += chunk.toString(); });
and now see logs. I'm not super familiar with Node.js, but this seems to have worked for me.

Tests do not seem to be working.

Running the tests seems to be problematic when following the readme.

First, I needed to specify an environment variable export SumoEndPointURL=https://endpoint2.collection.us2.sumologic.com/receiver/v1/http/Zxxxxxxxxx...

Even with that in place, the tests appear to be hardcoded to upload content to the bucket 'appdevstore-us-east-2'.

Potential SAR Vulnerability

Hi @SumoSourabh

The Readme.md for the CloudWatchEvents deployments needs an update, as it contains a recently discovered AWS SAR security vulnerability. We wrote a detailed explanation of the vulnerability.

It is important to tie the bucket policy to the source account that deploys to the bucket by adding an additional condition:

            Condition:
              StringEquals:
                "aws:SourceAccount":  <AWS::AccountId>

@SumoSourabh I am tagging you as I have seen that you recently modified the file.
