
Logstash Output Plugin

This plugin is now in maintenance mode. We will supply bug fixes and security patches for v7.2.X; older versions are no longer supported. This change is because the OpenSearch Project created a new Logstash output plugin, logstash-output-opensearch, which ships events from Logstash to OpenSearch 1.x and Elasticsearch 7.x clusters and also supports SigV4 signing. Maintaining two plugins with similar functionality is redundant, so we plan to eventually replace this logstash-output-amazon_es plugin with the logstash-output-opensearch plugin.

To help you migrate to the logstash-output-opensearch plugin, a brief migration guide follows.

Migrating to logstash-output-opensearch plugin

This guide provides instructions for existing users of the logstash-output-amazon_es plugin to migrate to the logstash-output-opensearch plugin.

Configuration Changes

  • The plugin name changes from amazon_es to opensearch.
  • If you use HTTPS, it must be configured explicitly, because the opensearch plugin does not default to it the way amazon_es does:
    • The protocol must be included in hosts as https (or the ssl option must be added with the value true).
    • The port must be explicitly specified as 443.
  • A new parameter, auth_type, is added to the configuration to support SigV4 signing.
  • The region parameter moves under auth_type.
  • The credential parameters aws_access_key_id and aws_secret_access_key move under auth_type.
  • For SigV4 signing, the type value inside auth_type is set to aws_iam.

For the Logstash configuration provided in Configuration for Amazon Elasticsearch Service Output Plugin, here's a mapped example configuration for the logstash-output-opensearch plugin:

output {
    opensearch {
        hosts => ["https://hostname:port"]
        auth_type => {
            type => 'aws_iam'
            aws_access_key_id => 'ACCESS_KEY'
            aws_secret_access_key => 'SECRET_KEY'
            region => 'us-west-2'
        }
        index => "logstash-logs-%{+YYYY.MM.dd}"
    }
}

Installation of logstash-output-opensearch plugin

The Installation Guide has instructions on installing the logstash-output-opensearch plugin in two ways: on Linux (ARM64/x64) or with Docker (ARM64/x64).

To install the latest version of logstash-output-opensearch, use the normal Logstash plugin installation command:

bin/logstash-plugin install logstash-output-opensearch
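For Docker-based installs, here is a minimal Dockerfile sketch that bakes the plugin into an image. The base image and tag are assumptions; pin the Logstash version you actually run:

# Extend an official Logstash image and install the plugin at build time.
# The base image tag below is illustrative, not a recommendation.
FROM docker.elastic.co/logstash/logstash:7.16.2
RUN bin/logstash-plugin install logstash-output-opensearch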

Using the logstash-output-amazon_es plugin

The remainder of this document covers using and developing the logstash-output-amazon_es plugin.

Overview

This is a plugin for Logstash which outputs to Amazon OpenSearch Service (successor to Amazon Elasticsearch Service) using SigV4 signing.

License

This library is licensed under Apache License 2.0.

Compatibility

The following table shows which Logstash versions each logstash-output-amazon_es release was built against.

logstash-output-amazon_es    Logstash
6.0.0                        < 6.0.0
6.4.2                        >= 6.0.0
7.0.1                        >= 7.0.0
7.1.0                        >= 7.0.0
8.0.0                        >= 7.0.0

Also, logstash-output-amazon_es plugin versions 6.4.0 and newer have been tested for compatibility with Elasticsearch 6.5 and greater.

logstash-output-amazon_es    Elasticsearch
6.4.0+                       6.5+

Installation

To install the latest version, use the normal Logstash plugin script.

bin/logstash-plugin install logstash-output-amazon_es

If you want to use an older version of logstash-output-amazon_es, you can use the --version flag to specify it. For example:

bin/logstash-plugin install --version 6.4.2 logstash-output-amazon_es
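To confirm which version was installed, you can list the plugin with its version — a quick check, assuming the standard Logstash layout:

bin/logstash-plugin list --verbose logstash-output-amazon_es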

Starting with 8.0.0, the AWS SDK dependency is bumped to v3. For all AWS plugins to work together, remove the pre-installed AWS plugins and install the logstash-integration-aws plugin as follows (see also logstash-plugins/logstash-mixin-aws#38):

# Remove existing logstash aws plugins and install logstash-integration-aws to keep sdk dependency the same
# https://github.com/logstash-plugins/logstash-mixin-aws/issues/38
/usr/share/logstash/bin/logstash-plugin remove logstash-input-s3
/usr/share/logstash/bin/logstash-plugin remove logstash-input-sqs
/usr/share/logstash/bin/logstash-plugin remove logstash-output-s3
/usr/share/logstash/bin/logstash-plugin remove logstash-output-sns
/usr/share/logstash/bin/logstash-plugin remove logstash-output-sqs
/usr/share/logstash/bin/logstash-plugin remove logstash-output-cloudwatch

/usr/share/logstash/bin/logstash-plugin install --version 0.1.0.pre logstash-integration-aws
bin/logstash-plugin install --version 8.0.0 logstash-output-amazon_es

Configuration for Amazon Elasticsearch Service Output Plugin

To run the Amazon Elasticsearch Service output plugin, add a configuration following the documentation below.

An example configuration:

output {
    amazon_es {
        hosts => ["foo.us-east-1.es.amazonaws.com"]
        region => "us-east-1"
        # aws_access_key_id and aws_secret_access_key are optional if instance profile is configured
        aws_access_key_id => 'ACCESS_KEY'
        aws_secret_access_key => 'SECRET_KEY'
        index => "production-logs-%{+YYYY.MM.dd}"
    }
}

Required Parameters

  • hosts (array of string) - the Amazon Elasticsearch Service domain endpoint (e.g. ["foo.us-east-1.es.amazonaws.com"])
  • region (string, :default => "us-east-1") - region where the domain is located

Optional Parameters

  • Credential parameters:

    • aws_access_key_id, :validate => :string - optional AWS access key
    • aws_secret_access_key, :validate => :string - optional AWS secret key

    The credential resolution order is as follows (see the example after this section):

    • User passed aws_access_key_id and aws_secret_access_key in amazon_es configuration
    • Environment variables - AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (RECOMMENDED since they are recognized by all the AWS SDKs and CLI except for .NET), or AWS_ACCESS_KEY and AWS_SECRET_KEY (only recognized by Java SDK)
    • Credential profiles file at the default location (~/.aws/credentials) shared by all AWS SDKs and the AWS CLI
    • Instance profile credentials delivered through the Amazon EC2 metadata service
  • template (path) - The path to your own template, if you wish to use one. If not set, the included template will be used.

  • template_name (string, default => "logstash") - defines how the template is named inside Elasticsearch

  • port (string, default 443) - Amazon Elasticsearch Service listens on port 443 for HTTPS (default) and port 80 for HTTP. Tweak this value for a custom proxy.

  • protocol (string, default https) - The protocol used to connect to the Amazon Elasticsearch Service

  • max_bulk_bytes - The max size for a bulk request in bytes. Default is 20MB. It is recommended not to change this value unless needed. For guidance on changing this value, please consult the table for network limits for your instance type: https://docs.aws.amazon.com/elasticsearch-service/latest/developerguide/aes-limits.html#network-limits

As of 6.4.0, batch size can no longer be set in this output plugin's configuration. However, you can still set it in the logstash.yml file via the standard pipeline.batch.size setting.
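For illustration, here is a variant of the earlier example that relies on the credential resolution chain above (no keys in the config, so environment variables, a credentials file, or an instance profile supply them) and sets the optional protocol and port explicitly. The endpoint and index name are placeholders:

output {
    amazon_es {
        hosts => ["my-domain.us-east-1.es.amazonaws.com"]  # placeholder endpoint
        region => "us-east-1"
        # No aws_access_key_id / aws_secret_access_key here;
        # the credential resolution chain above supplies them.
        protocol => "https"
        port => "443"
        index => "production-logs-%{+YYYY.MM.dd}"
    }
}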

Advanced Optional Parameters

Starting with logstash-output-amazon_es v7.1.0, the following optional parameters were introduced to address specific use cases:

  • service_name (string, default => "es") - the service name to which the plugin sends SigV4 signed requests
  • skip_healthcheck (boolean, default => false) - skip the healthcheck API call and assume the major ES version is 7
  • skip_template_installation (boolean, default => false) - skip installing templates, for use cases that don't require them
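As an illustration, here is a sketch combining these parameters; the endpoint is a placeholder and the flag values are examples rather than recommendations:

output {
    amazon_es {
        hosts => ["my-domain.us-east-1.es.amazonaws.com"]  # placeholder endpoint
        region => "us-east-1"
        service_name => "es"
        skip_healthcheck => true             # assume major ES version 7, skip the healthcheck call
        skip_template_installation => true   # this pipeline does not manage templates
        index => "production-logs-%{+YYYY.MM.dd}"
    }
}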

Developing

1. Prerequisites

To get started, install JRuby with the Bundler gem using RVM:

rvm install jruby-9.2.5.0

2. Plugin Development and Testing

Code

  1. Verify JRuby is already installed

    jruby -v
  2. Install dependencies:

    bundle install

Test

  1. Update your dependencies:

    bundle install
  2. Run unit tests:

    bundle exec rspec

3. Running your unpublished plugin in Logstash

3.1 Run in a local Logstash clone

  1. Edit Logstash Gemfile and add the local plugin path, for example:

    gem "logstash-output-amazon_es", :path => "/your/local/logstash-output-amazon_es"
  2. Install the plugin:

    # Logstash 2.3 and higher
    bin/logstash-plugin install --no-verify
    
    # Prior to Logstash 2.3
    bin/plugin install --no-verify
  3. Run Logstash with your plugin:

    bin/logstash -e 'output {amazon_es {}}'

At this point any modifications to the plugin code will be applied to this local Logstash setup. After modifying the plugin, simply re-run Logstash.

3.2 Run in an installed Logstash

Before building the gem, make sure you are using JRuby. To list your locally installed Ruby versions:

rvm list

Make sure you are currently using JRuby. To switch to JRuby:

rvm jruby-9.2.5.0

You can use the same method as in 3.1 to run your plugin in an installed Logstash by editing its Gemfile and pointing :path to your local plugin development directory. You can also build the gem and install it:

  1. Build your plugin gem:

    gem build logstash-output-amazon_es.gemspec
  2. Install the plugin from the Logstash home. Please be sure to check the version number against the actual Gem file. Run:

    bin/logstash-plugin install /your/local/logstash-output-amazon_es/logstash-output-amazon_es-7.0.1-java.gem
  3. Start Logstash and test the plugin, for example as shown below.
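For a quick smoke test, a one-liner pipeline in the style of section 3.1 can be run from the Logstash home. The endpoint is a placeholder, and working credentials or an instance profile are assumed:

bin/logstash -e 'input { stdin {} } output { amazon_es { hosts => ["my-domain.us-east-1.es.amazonaws.com"] region => "us-east-1" } }'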

Contributing

All contributions are welcome: ideas, patches, documentation, bug reports, and complaints.


logstash-output-amazon_es's Issues

Requests fail silently when templates contain invalid field names on ES 2.3

One of the changes between Elasticsearch 1.x and 2.x is that in 2.x, dots are no longer allowed in field names. I made the mistake of creating a template that included dotted fields, and configuring an amazon_es output to use this template. As a result, no documents were created, but even with debug logs there was no sign that anything was failing.

I then tried two things:

  • Set up an ES 1.5 domain with the same configuration. These documents were added without any problems.
  • With the 2.3 domain, use the regular elasticsearch plugin instead of amazon_es. In this case I saw mapping_parser_exception warnings in the logs about dotted fields, which allowed me to find the bad template and make documents show up again.

Ideally, amazon_es should detect these errors.

Timeout Error using Amazon AWS Linux

I've got a set of machines that were running this plugin successfully until about ten days ago. The entire cluster now tells me it's timing out attempting to connect to the ES cluster.

Here's what I've got so far:

  • Opened up the access policy on the ES cluster to anyone (no auth required)
  • Created a new ES cluster
  • Verified I can curl the endpoint, including creating indexes and documents.
  • Tried logstash 1.4 through 2.2
  • Tried the logstash-output-amazon_es plugin in various versions (0.2, 0.3)
  • jruby 1.7.23 (1.9.3p551) 2015-11-24 f496dd5 on OpenJDK 64-Bit Server VM 1.7.0_91-mockbuild_2015_10_27_19_01-b00 +jit [linux-amd64]
  • logstash 2.2.2

Here's the error that I'm getting, it appears very straight forward except for the fact that I cannot reproduce it outside of logstash:

Attempted to send a bulk request to Elasticsearch configured at '["https://search-fudge-lvxtdxfbo2acev3jktqqidmmxa.us-west-2.es.amazonaws.com:443"]', but an error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided? {:client_config=>{:hosts=>["https://search-fudge-lvxtdxfbo2acev3jktqqidmmxa.us-west-2.es.amazonaws.com:443"], :region=>"us-west-2", :aws_access_key_id=>nil, :aws_secret_access_key=>nil, :transport_options=>{:request=>{:open_timeout=>20, :timeout=>60}, :proxy=>nil}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::AWS, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false}, :error_message=>"Read timed out", :error_class=>"Faraday::TimeoutError", :backtrace=>nil, :level=>:error, :file=>"logstash/outputs/amazon_es.rb", :line=>"372", :method=>"flush"}

Any thoughts on how to begin troubleshooting this?

Broken pipe after hours of successful operation

Hi,
I have had Logstash 2.4.1 running, outputting logs to hosted Amazon ES, for several months without any issues. Last week I updated the amazon_es plugin, and since then I'm getting connection issues after several hours of successful operation. Restarting Logstash fixes the problem temporarily.

The error I get in logs is:
{:timestamp=>"2017-03-20T06:25:02.788000+0000", :message=>"Attempted to send a bulk request to Elasticsearch configured at '[\"https://search-XXX.eu-west-1.es.amazonaws.com:443\"]', but an error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided?", :client_config=>{:hosts=>["https://search-XXX.eu-west-1.es.amazonaws.com:443"], :region=>"eu-west-1", :aws_access_key_id=>nil, :aws_secret_access_key=>nil, :transport_options=>{:request=>{:open_timeout=>0, :timeout=>60}, :proxy=>nil}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::AWS, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false, :http=>{:scheme=>"https", :user=>nil, :password=>nil, :port=>443}}, :error_message=>"Broken pipe", :error_class=>"Faraday::ConnectionFailed", :backtrace=>nil, :level=>:error}

The output config is like this:

        {
            hosts => ["search-XXX.eu-west-1.es.amazonaws.com"]
            region => "eu-west-1"
            index => "logstash-%{+YYYY.MM.dd}"
        }

I have configured the ACL to allow the IAM role that is running the EC2 instance (which works, since the plugin successfully sends events for several hours before the problem starts).
I tried to downgrade to the 1.0 version of the plugin yesterday, but I got the same issue again this morning.

Any ideas?

BR / Jonas

Logstash instances all crash at the same time with the same error

Hi all,

We're using logstash as a GELF shipper to AWS ES.
Each of our instances receives its own traffic and thus generates its own logs.

All the Logstash processes, on every instance, crash at the very same time with the following error:

[2017-10-13T08:24:58,525][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<LogStash::Error: timestamp field is missing>, :backtrace=>["org/logstash/ext/JrubyEventExtLibrary.java:202:in sprintf'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-2.0.1-java/lib/logstash/outputs/amazon_es.rb:302:in receive'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:92:in multi_receive'", "org/jruby/RubyArray.java:1613:in each'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:92:in multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/legacy.rb:22:in multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:47:in multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:420:in output_batch'", "org/jruby/RubyHash.java:1342:in each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:419:in output_batch'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:365:in worker_loop'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:330:in start_workers'"]}

Any idea or hint of what could be causing the problem?

Thank you very much for your support,
Regards.

Not compatible with logstash 2.1.1

During startup of the logstash process there are error messages and the bulk processing fails when messages are received.

The error message is

{:timestamp=>"2015-12-17T08:09:23.785000+0000", :message=>"Failed to install template: undefined method `credentials' for nil:NilClass", :level=>:error}

The full log.

{:timestamp=>"2015-12-17T08:09:20.187000+0000", :message=>"Using version 0.1.x output plugin 'amazon_es'. This plugin isn't well supported by the community and likely has no maintainer.", :level=>:info}
{:timestamp=>"2015-12-17T08:09:20.441000+0000", :message=>"Registering file input", :path=>["/app01/log/log.out"], :level=>:info}
{:timestamp=>"2015-12-17T08:09:20.449000+0000", :message=>"No sincedb_path set, generating one based on the file path", :sincedb_path=>"/app01/logstash-config/.sincedb_88c97fc8a6eef0029496c2abc666ae3c", :path=>["/app01/log/log.out"], :level=>:info}
{:timestamp=>"2015-12-17T08:09:23.556000+0000", :message=>"Automatic template management enabled", :manage_template=>"true", :level=>:info}
{:timestamp=>"2015-12-17T08:09:23.754000+0000", :message=>"Using mapping template", :template=>{"template"=>"logstash-*", "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"_all"=>{"enabled"=>true, "omit_norms"=>true}, "dynamic_templates"=>[{"message_field"=>{"match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"string", "index"=>"analyzed", "omit_norms"=>true}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"string", "index"=>"analyzed", "omit_norms"=>true, "fields"=>{"raw"=>{"type"=>"string", "index"=>"not_analyzed", "ignore_above"=>256}}}}}], "properties"=>{"@version"=>{"type"=>"string", "index"=>"not_analyzed"}, "geoip"=>{"type"=>"object", "dynamic"=>true, "properties"=>{"location"=>{"type"=>"geo_point"}}}}}}}, :level=>:info}
{:timestamp=>"2015-12-17T08:09:23.785000+0000", :message=>"Failed to install template: undefined method `credentials' for nil:NilClass", :level=>:error}
{:timestamp=>"2015-12-17T08:09:23.786000+0000", :message=>"New Elasticsearch output", :hosts=>["my-elk-AWS-servicename.eu-west-1.es.amazonaws.com"], :port=>443, :level=>:info}
{:timestamp=>"2015-12-17T08:09:23.792000+0000", :message=>"Pipeline started", :level=>:info}

I'm running this in a Docker image based on phusion/baseimage with OpenJDK 8.

If support for AWS is not included in the standard plugin, then this will be a slight obstacle.

I notice that it's warning about v0.1.x, but the source has upped the version to 0.2.0. Is there a way to get 0.2.0 installed without building from source?

Plugin incompatible with new Logstash version 2.0

New Logstash version has been released yesterday and it looks like this plugin is incompatible with it.
When trying to install with the usual method of 'bin/plugin install...', I get the following multiple errors:

Validating logstash-output-amazon_es
Installing logstash-output-amazon_es
Plugin version conflict, aborting
ERROR: Installation Aborted, message: Bundler could not find compatible versions for gem "logstash-core":
  In snapshot (Gemfile.lock):
    logstash-core (= 2.0.0)

  In Gemfile:
    logstash-output-http (>= 0) java depends on
      logstash-mixin-http_client (< 3.0.0, >= 2.0.2) java depends on
        logstash-core (< 3.0.0, >= 2.0.0.beta2) java

    logstash-output-http (>= 0) java depends on
      logstash-mixin-http_client (< 3.0.0, >= 2.0.2) java depends on
        logstash-core (< 3.0.0, >= 2.0.0.beta2) java

Result is not consistent

This plugin's results are not consistent. For example, if 100 documents are pushed, sometimes 90 documents are written to the cluster, sometimes 85. Why such behaviour? Is it anything I need to take care of? @malpani @msfroh @joshuaspence

Task based Roles are not supported

I'm working on deploying a cluster with ECS. As part of this, I wanted to have the Logstash shipping agent run once per instance. The ECS Documentation had a useful article on Starting a Task at Container Instance Launch Time. Unfortunately, it appears that the amazon_es output does not support task-based role permissions: when I start the container, it complains that the machine role does not have permission to perform ESHTTP* on the Elasticsearch Service domain.

Error installing on ubuntu 14

Hi, I'm using Logstash 2.3 and Ubuntu 14, and I'm facing an error while trying to install:

ubuntu@ip-172-31-26-183:/logstash-output-amazon_es$ gem build logstash-output-amazon_es.gemspec
WARNING: license value 'apache-2.0' is invalid. Use a license identifier from
http://spdx.org/licenses or 'Nonstandard' for a nonstandard license.
WARNING: open-ended dependency on concurrent-ruby (>= 0) is not recommended
if concurrent-ruby is semantically versioned, use:
add_runtime_dependency 'concurrent-ruby', '~> 0'
WARNING: open-ended dependency on elasticsearch (>= 1.0.10) is not recommended
if elasticsearch is semantically versioned, use:
add_runtime_dependency 'elasticsearch', '~> 1.0', '>= 1.0.10'
WARNING: open-ended dependency on logstash-input-generator (>= 0, development) is not recommended
if logstash-input-generator is semantically versioned, use:
add_development_dependency 'logstash-input-generator', '~> 0'
WARNING: open-ended dependency on logstash-devutils (>= 0, development) is not recommended
if logstash-devutils is semantically versioned, use:
add_development_dependency 'logstash-devutils', '~> 0'
WARNING: open-ended dependency on longshoreman (>= 0, development) is not recommended
if longshoreman is semantically versioned, use:
add_development_dependency 'longshoreman', '~> 0'
WARNING: See http://guides.rubygems.org/specification-reference/ for help
ERROR: While executing gem ... (Errno::EACCES)
Permission denied @ rb_sysopen - logstash-output-amazon_es-1.0.gem

Would somebody show me the right way?

thank you

Plugin installation failure

Description

The plugin installation fails with:

Bundler could not find compatible versions for gem "aws-sdk"

Error message

---- /opt/logstash [15:49:48]
$ sudo bin/plugin install logstash-output-amazon_es
Validating logstash-output-amazon_es
Installing logstash-output-amazon_es
Plugin version conflict, aborting
ERROR: Installation Aborted, message: Bundler could not find compatible versions for gem "aws-sdk":
  In snapshot (Gemfile.lock):
    aws-sdk (= 2.1.2)

  In Gemfile:
    logstash-input-sqs (>= 0) java depends on
      aws-sdk (>= 0) java

    logstash-input-sqs (>= 0) java depends on
      aws-sdk (>= 0) java

    logstash-input-sqs (>= 0) java depends on
      aws-sdk (>= 0) java

    logstash-input-s3 (>= 0) java depends on
      logstash-mixin-aws (>= 0) java depends on
        aws-sdk (~> 2.1.0) java

    logstash-output-amazon_es (>= 0) java depends on
      aws-sdk (>= 2.1.14, ~> 2.1) java

Running `bundle update` will rebuild your snapshot from scratch, using only
the gems in your Gemfile, which may resolve the conflict.

Following the good advice in the error message, I have installed bundler:

$ sudo gem install bundler
Fetching: bundler-1.11.2.gem (100%)
Successfully installed bundler-1.11.2
1 gem installed
Installing ri documentation for bundler-1.11.2...
Installing RDoc documentation for bundler-1.11.2...

And then tried to run bundle update, with no luck:

---- /opt/logstash [15:55:07]
$ bundle update
Fetching gem metadata from https://rubygems.org/........
Fetching version metadata from https://rubygems.org/...
Fetching dependency metadata from https://rubygems.org/..
Could not find gem 'logstash-core (= 1.5.2.2)' in any of the gem sources listed in your Gemfile or available on this machine.

Environment

I am working on an EC2 Ubuntu 14.04. Logstash is installed in /opt/logstash.

What have I tried

  • Running bundle update

Possible to run local

We are using Logstash to add content to ES from DynamoDB using the Logstash plugin logstash-input-dynamodb, and it works great both locally and in production, but we now want to sign our requests to ES using this plugin. However, we can't get it to work locally. First off, we do not provide any credentials locally to the logstash-input-dynamodb plugin.

The error message: Failed to install template: undefined method 'credentials'.

Is it possible to run this locally without actual credentials?

Allow support for Logstash v5

Logstash is now 5.0.0-rc1. Would be nice to start testing this out and be ready for the 5.0.0 release.

Plugin version conflict, aborting
ERROR: Installation Aborted, message: Bundler could not find compatible versions for gem "logstash-core":
  In snapshot (Gemfile.lock):
    logstash-core (= 5.0.0.pre.rc1)

  In Gemfile:
    logstash-core-plugin-api (>= 0) java depends on
      logstash-core (= 5.0.0.pre.rc1) java

    logstash-output-amazon_es (= 1.0) java depends on
      logstash-core (< 3.0.0, >= 1.4.0) java

    logstash-core (>= 0) java

Unable to install plugin: "Can only install contrib at this time... Exiting"

Unable to install on clean distribution of Debian Linux, following all the instructions:

  1. Install the official Logstash package following the instructions at https://www.elastic.co/guide/en/logstash/current/package-repositories.html
  2. cd to logstash directory: cd /opt/logstash
  3. Run command as instructed: bin/plugin install logstash-output-amazon_es

Output:
Can only install contrib at this time... Exiting.

How do I get this installed?

Incompatible with Logstash >= 5.2.1

Yay, dependency management!

This plugin has a dependency on an old version of elasticsearch. The current version is 5.2.1.

The logstash-input-elasticsearch plugin (a default plugin that ships with Logstash) depends on a more recent version. That change was made 7 days ago. Logstash 5.2.1 was released today.

Validating logstash-output-amazon_es
Installing logstash-output-amazon_es
Plugin version conflict, aborting
ERROR: Installation Aborted, message: Bundler could not find compatible versions for gem "elasticsearch":
  In snapshot (Gemfile.lock):
    elasticsearch (= 5.0.3)

  In Gemfile:
    logstash-input-elasticsearch (>= 0) java depends on
      elasticsearch (< 6.0.0, >= 5.0.3) java

    logstash-output-amazon_es (>= 0) java depends on
      elasticsearch (>= 1.0.10, ~> 1.0) java

Running `bundle update` will rebuild your snapshot from scratch, using only
the gems in your Gemfile, which may resolve the conflict.
Bundler could not find compatible versions for gem "logstash-core":
  In snapshot (Gemfile.lock):
    logstash-core (= 5.2.1)

  In Gemfile:
    logstash-core-plugin-api (>= 0) java depends on
      logstash-core (= 5.2.1) java

    logstash-output-amazon_es (>= 0) java depends on
      logstash-core (< 2.0.0, >= 1.4.0) java

    logstash-core (>= 0) java

Running `bundle update` will rebuild your snapshot from scratch, using only
the gems in your Gemfile, which may resolve the conflict.

Logstash Input

Hi,

Any plans for a logstash input from amazon es? It would be helpful for re-indexing documents as well replaying back logs.

Thanks!

Token timeout when using profile to authenticate

I use the following Logstash config on an EC2 instance:

input {
  syslog {
    type => "syslog"
    port => 5514
  }
}

output {
  amazon_es {
      hosts => ["..."]
      region => "eu-central-1"
  }
}

There are no AWS credentials provided on the EC2 instance, i.e. the machine's profile is used to authenticate/sign the requests to the AWS Elasticsearch instance. This works fine - for some hours. After some time, the log file gets flooded with such messages:

{
:timestamp=>"2015-10-15T08:49:17.699000+0200",
:message=>"Attempted to send a bulk request to Elasticsearch configured at '[\"http://...:80\"]', but an error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided?",
:client_config=>{:hosts=>["http://...:80"], :region=>"eu-central-1", :aws_access_key_id=>nil, :aws_secret_access_key=>nil, :transport_options=>{:request=>{:open_timeout=>0, :timeout=>60}, :proxy=>nil}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::AWS, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false},
:error_message=>"[403] {\"message\":\"The security token included in the request is expired\"}",
:error_class=>"Elasticsearch::Transport::Transport::Errors::Forbidden",
:backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.12/lib/elasticsearch/transport/transport/base.rb:135:in `__raise_transport_error'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.12/lib/elasticsearch/transport/transport/base.rb:227:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-0.1.0-java/lib/logstash/outputs/amazon_es/aws_transport.rb:45:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.12/lib/elasticsearch/transport/client.rb:119:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.12/lib/elasticsearch/api/actions/bulk.rb:80:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-0.1.0-java/lib/logstash/outputs/amazon_es/http_client.rb:53:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-0.1.0-java/lib/logstash/outputs/amazon_es.rb:319:in `submit'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-0.1.0-java/lib/logstash/outputs/amazon_es.rb:316:in `submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-0.1.0-java/lib/logstash/outputs/amazon_es.rb:349:in `flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.21/lib/stud/buffer.rb:219:in `buffer_flush'", "org/jruby/RubyHash.java:1341:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.21/lib/stud/buffer.rb:216:in `buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.21/lib/stud/buffer.rb:193:in `buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.21/lib/stud/buffer.rb:159:in `buffer_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-0.1.0-java/lib/logstash/outputs/amazon_es.rb:309:in `receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/outputs/base.rb:88:in `handle'", "(eval):27:in `output_func'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/pipeline.rb:244:in `outputworker'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/pipeline.rb:166:in `start_outputs'"],
:level=>:error
}

The error message "The security token included in the request is expired" looks a lot like the security token not getting refreshed properly.

I'd be glad to provide a pull request to fix this issue, however, I'm not really a Ruby guy. So if someone could give me hint where to start looking for the problem, any help is appreciated!

Installation Failure on Logstash-2.3.2

I am new to this software and could very well be doing something wrong, but I could not get the plugin to install on my instance of logstash-2.3.2

Followed these steps to build the gem and upload it via scp (taken from innovia's comment in issue #27):

  1. git clone https://github.com/awslabs/logstash-output-amazon_es.git
  2. cd logstash-output-amazon_es
  3. gem build logstash-output-amazon_es.gemspec
  4. upload the file (scp) to the logstash instance

Plugin does not install successfully on logstash-2.3.2:
C:\logstash-2.3.2\bin>plugin install logstash-output-amazon_es-0.3.gem
"The use of bin/plugin is deprecated and will be removed in a feature release. Please use bin/logstash-plugin."
io/console not supported; tty will not be manipulated
Validating logstash-output-amazon_es-master\logstash-output-amazon_es-0.3.gem
Installing logstash-output-amazon_es
Error Bundler::HTTPError, retrying 1/10
Could not fetch specs from https://rubygems.org/
Error Bundler::HTTPError, retrying 2/10
Could not fetch specs from https://rubygems.org/
Error Bundler::HTTPError, retrying 3/10
Could not fetch specs from https://rubygems.org/
Error Bundler::HTTPError, retrying 4/10
Could not fetch specs from https://rubygems.org/
Error Bundler::HTTPError, retrying 5/10
Could not fetch specs from https://rubygems.org/
Error Bundler::HTTPError, retrying 6/10
Could not fetch specs from https://rubygems.org/
Error Bundler::HTTPError, retrying 7/10
Could not fetch specs from https://rubygems.org/
Error Bundler::HTTPError, retrying 8/10
Could not fetch specs from https://rubygems.org/
Error Bundler::HTTPError, retrying 9/10
Could not fetch specs from https://rubygems.org/
Error Bundler::HTTPError, retrying 10/10
Could not fetch specs from https://rubygems.org/
Too many retries, aborting, caused by Bundler::HTTPError
ERROR: Installation Aborted, message: Could not fetch specs from https://rubygems.org

Output using "--local" and "--no-verify":
C:\logstash-2.3.2\bin>plugin install --local --no-verify logstash-output-amazon_es-0.3.gem
"The use of bin/plugin is deprecated and will be removed in a feature release. Please use bin/logstash-plugin."
io/console not supported; tty will not be manipulated
Installing logstash-output-amazon_es
Plugin version conflict, aborting
ERROR: Installation Aborted, message: Bundler could not find compatible versions
for gem "faraday_middleware":
In Gemfile:
logstash-output-amazon_es (= 0.3) java depends on
faraday_middleware (~> 0.10.0) java
Could not find gem 'faraday_middleware (~> 0.10.0) java', which is required by gem 'logstash-output-amazon_es (= 0.3) java', in any of the sources.

Java version:
C:\logstash-2.3.2\bin>java -version
java version "1.8.0_91"
Java(TM) SE Runtime Environment (build 1.8.0_91-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.91-b15, mixed mode)

Plugin does however install successfully on logstash-2.3.1:
C:\logstash-2.3.1\bin>plugin install logstash-output-amazon_es-0.3.gem
"The use of bin/plugin is deprecated and will be removed in a feature release. Please use bin/logstash-plugin."
io/console not supported; tty will not be manipulated
Validating logstash-output-amazon_es-master\logstash-output-amazon_es-0.3.gem
Installing logstash-output-amazon_es
Installation successful

Support instance profiles?

I see in the README this is expected in the next release. Any idea when this release is expected? I haven't seen any activity here in a month or so... Thanks!

Merge with upstream?

I'd like to propose this repository be abandoned in favor of merging efforts with the Logstash Elasticsearch output as well as the elasticsearch-ruby library.

The benefit to authors is a reduction in overall effort, and the benefits to users are a less confusing interface, less feature chasing, and higher stability.

Need for AWS Access key even when the Access policy of ES is Open

Logstash config requires AWS Access and secret keys even when the Access policy of ES is Open

_Access policy:_

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:#######:domain/###/*"
    }
  ]
}

_Logstash config:_

output {
    amazon_es {
        hosts => ["search-######.us-east-1.es.amazonaws.com"]
        region => "us-east-1"
        aws_access_key_id => '#####'
        aws_secret_access_key => '#####'
        index => "sample"
    }
}

AWS Security Groups are open for the required ports

Issue due to manticore version

Trying to install on latest logstash I get:

ERROR: Installation Aborted, message: Bundler could not find compatible versions for gem "manticore":
In snapshot (Gemfile.lock):
manticore (= 0.5.2)

In Gemfile:
logstash-output-http (>= 0) java depends on
logstash-mixin-http_client (< 3.0.0, >= 2.2.0) java depends on
manticore (< 1.0.0, >= 0.5.2) java

logstash-output-http (>= 0) java depends on
logstash-mixin-http_client (< 3.0.0, >= 2.2.0) java depends on
manticore (< 1.0.0, >= 0.5.2) java

logstash-output-amazon_es (>= 0) java depends on
manticore (~> 0.4.2) java

Maybe it's worth raising the manticore dependency version?

Heartbeats not supported

Maybe that's not even an issue of this output plugin, but let me explain the problem:

I'm using this plugin to write messages into our ElasticSearch Service (ESS) on AWS. I also configured the heartbeat input plugin in Logstash, following this guide. To quote that guide:

If any one of your outputs backs up, the entire pipeline stops flowing. The heartbeat plugin takes advantage of this to help you know when the flow of events slows, or stops altogether.

However, I encountered a situation where the ESS cluster filled up (no free storage space left) and this plugin did not signal the problem; the heartbeat messages kept flowing.

So the question is: Does this plugin miss the functionality to signal errors writing to ESS or should it be already contained / taken care by the Logstash server itself?

Retry policy?

Does" amazon_es" follow the same Retry policy as standard "elasticsearch" plugin.
I.e. "All other response codes are retried indefinitely", except 400 and 404 which are sent to the dead letter queue (if enabled) and 409 which are dropped?
Basically I'm wondering if it uses the same, or an equivalent, dead letter process when messages could not be sent.

cannot use a dns cname for aws elastic search endpoints

Hi,

I attempted to use a DNS CNAME for our AWS Elasticsearch endpoint in our logstash-output-amazon_es configuration, but logstash-output-amazon_es fails with an SSL cert error.

I cannot find a config attribute to ignore certificate checks for logstash-output-amazon_es outputs that push data into AWS Elasticsearch endpoints, which would make configuration much easier.

Add 403 to RETRYABLE_CODES

We allowed the space available in our AWS ES domain to fall below 10% (due to bad alerting and aging policies). This caused the domain to stop accepting new data. However, it appears the plugin just failed silently.

The HTTP return code we received from AWS ES during this time was 403 with the message: "ClusterBlockException[blocked by: [FORBIDDEN/8/index write (api)];]". While this looks like a simple permissions issue, as soon as disk space was freed, writes were able to continue after some time.

The big problem we had is that we actually "lost" data because we use Kafka as the first layer input (we can recover from this by manual offset setting). It looks like the silent failure of this plugin allowed our offsets to increment even though writes were not successful.

support to parent-child relationship

Can we have support for linking parent-child relationships, just as in logstash-plugins/logstash-output-elasticsearch#307? It is a very simple addition of the parent field, as you can see there.
logstash-plugins/logstash-output-elasticsearch#307? It is a very simple addition of the parent field, as you can see there.

The parent setting is described here:
https://www.elastic.co/guide/en/logstash/current/plugins-outputs-elasticsearch.html#plugins-outputs-elasticsearch-parent

So using elasticsearch plugin we can do:

elasticsearch {
  index => "alarms"
  document_type => "alarm"
  parent => "%{source_id}"
}

Plugin version conflict While installing Plugin as part of Docker build

Here's the relevant portion of my docker build output

Validating logstash-output-amazon_es
Installing logstash-output-amazon_es
Plugin version conflict, aborting
ERROR: Installation Aborted, message: Bundler could not find compatible versions for gem "elasticsearch":
In snapshot (Gemfile.lock):
elasticsearch (= 5.0.3)

In Gemfile:
logstash-input-elasticsearch (>= 0) java depends on
elasticsearch (< 6.0.0, >= 5.0.3) java

logstash-output-amazon_es (>= 0) java depends on
  elasticsearch (>= 1.0.10, ~> 1.0) java

Running bundle update will rebuild your snapshot from scratch, using only
the gems in your Gemfile, which may resolve the conflict.
Bundler could not find compatible versions for gem "logstash-core":
In snapshot (Gemfile.lock):
logstash-core (= 5.2.1)

In Gemfile:
logstash-core-plugin-api (>= 0) java depends on
logstash-core (= 5.2.1) java

logstash-output-amazon_es (>= 0) java depends on
  logstash-core (< 2.0.0, >= 1.4.0) java

logstash-core (>= 0) java

Running bundle update will rebuild your snapshot from scratch, using only
the gems in your Gemfile, which may resolve the conflict.
The command '/bin/sh -c logstash-plugin install logstash-output-amazon_es' returned a non-zero code: 1
Steps take to resolve the issue

Installed bundler and ran bundle update (after installing JRuby) and ran a build as suggested in the documentation. When I tried to install the resulting gem through the Dockerfile build command

RUN logstash-plugin install /home/ubuntu//projects/elkstack/docker/second_test/logstash-output-amazon_es-1.1.0-java.gem

Got this output

Step 3/5 : RUN logstash-plugin install /home/ubuntu//projects/elkstack/docker/second_test/logstash-output-amazon_es-1.1.0-java.gem
 ---> Running in df5205c15666
Validating /home/ubuntu//projects/elkstack/docker/second_test/logstash-output-amazon_es-1.1.0-java.gem
Plugin /home/ubuntu//projects/elkstack/docker/second_test/logstash-output-amazon_es-1.1.0-java.gem does not exist
ERROR: Installation aborted, verification failed for /home/ubuntu//projects/elkstack/docker/second_test/logstash-output-amazon_es-1.1.0-java.gem
The command '/bin/sh -c logstash-plugin install /home/ubuntu//projects/elkstack/docker/second_test/logstash-output-amazon_es-1.1.0-java.gem' returned a non-zero code: 1

Any suggestions would be welcome

Failed to flush outgoing items.

Can't write to my AWS ES. Below is my config and the error output; unsure how to resolve. The access policy for AWS ES is IP based. I have done it through curl on my EC2 no problem. When I run the below config and then echo some lines into that txt file, it repeatedly errors with the below. I've tried without the access keys too.

logstash version 5.2.0

Config
input {
  file {
    path => "/tmp/logstash.txt"
  }
}

output {
  amazon_es {
    hosts => ["real_host"]
    region => "us-east-1"
    # aws_access_key_id, aws_secret_access_key optional if instance profile is configured
    aws_access_key_id => 'xxxxxxxxxxxxxx'
    aws_secret_access_key => 'xxxxxxxxxx'
    index => "production-logs-%{+YYYY.MM.dd}"
  }
}

Error
[2017-07-19T20:00:43,766][WARN ][logstash.outputs.amazones] Failed to flush outgoing items {:outgoing_count=>1, :exception=>"Manticore::ResolutionFailure", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/faraday/adapter/manticore.rb:93:in call'", "org/jruby/RubyProc.java:281:in call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/manticore/response.rb:79:in call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.6.0-java/lib/faraday/adapter/manticore.rb:97:in call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-2.0.1-java/lib/logstash/outputs/amazon_es/aws_v4_signer_impl.rb:49:in call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/faraday-0.9.2/lib/faraday/rack_builder.rb:139:in build_response'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/faraday-0.9.2/lib/faraday/connection.rb:377:in run_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-2.0.1-java/lib/logstash/outputs/amazon_es/aws_transport.rb:52:in perform_request'", "org/jruby/RubyProc.java:281:in call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.1.0/lib/elasticsearch/transport/transport/base.rb:257:in perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-2.0.1-java/lib/logstash/outputs/amazon_es/aws_transport.rb:48:in perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.1.0/lib/elasticsearch/transport/client.rb:128:in perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.1.0/lib/elasticsearch/api/actions/bulk.rb:93:in bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-2.0.1-java/lib/logstash/outputs/amazon_es/http_client.rb:53:in bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-2.0.1-java/lib/logstash/outputs/amazon_es.rb:321:in submit'", "org/jruby/ext/thread/Mutex.java:149:in synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-2.0.1-java/lib/logstash/outputs/amazon_es.rb:318:in submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-2.0.1-java/lib/logstash/outputs/amazon_es.rb:351:in flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:219:in buffer_flush'", "org/jruby/RubyHash.java:1342:in each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:216:in buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:159:in buffer_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-2.0.1-java/lib/logstash/outputs/amazon_es.rb:311:in receive'", "/opt/logstash/logstash-core/lib/logstash/outputs/base.rb:92:in multi_receive'", "org/jruby/RubyArray.java:1613:in each'", "/opt/logstash/logstash-core/lib/logstash/outputs/base.rb:92:in multi_receive'", "/opt/logstash/logstash-core/lib/logstash/output_delegator_strategies/legacy.rb:19:in multi_receive'", "/opt/logstash/logstash-core/lib/logstash/output_delegator.rb:43:in multi_receive'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:336:in output_batch'", "org/jruby/RubyHash.java:1342:in each'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:335:in output_batch'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:293:in worker_loop'", "/opt/logstash/logstash-core/lib/logstash/pipeline.rb:263:in start_workers'"]}

Logstash logs a message about version 0.1.x when 0.2.0 is installed

I'm running logstash version 2.1.1 on Windows Server 2012 with version 0.2.0 of this plugin, and get this message in the logstash log file (when --verbose option is used):

Using version 0.1.x output plugin 'amazon_es'. This plugin isn't well supported by the community and likely has no maintainer.

My output config looks like this:

output {
    amazon_es {
        hosts => ["myurl.es.amazonaws.com"]
        region => "us-west-1"
        aws_access_key_id => '[MYID]'
        aws_secret_access_key => '[MYKEY]'
        index => "logstash-%{+YYYY.MM.dd}"
    }
}

When running Logstash as a service, it fails to send data to AWS Elasticsearch

Hello

When I use the following command to start logstash and send data to AWS elasticsearch it works:

sudo /opt/logstash/bin/logstash -f /etc/logstash/conf.d/01-logstash.conf

however when I use the following command:

sudo service logstash start

I got some errors like:

{:timestamp=>"2016-06-14T14:17:35.040000+0000", :message=>"Attempted to send a bulk request to Elasticsearch configured at '["https://mhxqojiy.us-west.es.amazonaws.com:443"]', but an error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided?", :client_config=>{:hosts=>["https://jiy.us-west.es.amazonaws.com:443"], :region=>"us-west-1", :aws_access_key_id=>nil, :aws_secret_access_key=>nil, :transport_options=>{:request=>{:open_timeout=>0, :timeout=>60}, :proxy=>nil}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::AWS, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false, :http=>{:scheme=>"https", :user=>nil, :password=>nil, :port=>443}}, :error_message=>"undefined method credentials' for nil:NilClass", :error_class=>"NoMethodError", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/aws-sdk-core-2.1.36/lib/aws-sdk-core/signers/v4.rb:24:ininitialize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-0.3-java/lib/logstash/outputs/amazon_es/aws_v4_signer_impl.rb:36:in signer'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-0.3-java/lib/logstash/outputs/amazon_es/aws_v4_signer_impl.rb:48:incall'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/faraday-0.9.2/lib/faraday/rack_builder.rb:139:in build_response'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/faraday-0.9.2/lib/faraday/connection.rb:377:inrun_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-0.3-java/lib/logstash/outputs/amazon_es/aws_transport.rb:49:in perform_request'", "org/jruby/RubyProc.java:281:incall'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.17/lib/elasticsearch/transport/transport/base.rb:257:in perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-0.3-java/lib/logstash/outputs/amazon_es/aws_transport.rb:45:inperform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.17/lib/elasticsearch/transport/client.rb:128:in perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.17/lib/elasticsearch/api/actions/bulk.rb:88:inbulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-0.3-java/lib/logstash/outputs/amazon_es/http_client.rb:53:in bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-0.3-java/lib/logstash/outputs/amazon_es.rb:322:insubmit'", "org/jruby/ext/thread/Mutex.java:149:in synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-0.3-java/lib/logstash/outputs/amazon_es.rb:319:insubmit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-0.3-java/lib/logstash/outputs/amazon_es.rb:352:in flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:219:inbuffer_flush'", "org/jruby/RubyHash.java:1342:in each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:216:inbuffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:159:in buffer_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-0.3-java/lib/logstash/outputs/amazon_es.rb:312:inreceive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/outputs/base.rb:83:in multi_receive'", "org/jruby/RubyArray.java:1613:ineach'", 
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/outputs/base.rb:83:in multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/output_delegator.rb:130:inworker_multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/output_delegator.rb:114:in multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/pipeline.rb:301:inoutput_batch'", "org/jruby/RubyHash.java:1342:in each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/pipeline.rb:301:inoutput_batch'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/pipeline.rb:232:in worker_loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.3.2-java/lib/logstash/pipeline.rb:201:instart_workers'"], :level=>:error}

Version: logstash 2.3.2

Have you guys faced this?
thanks!

How to Install on Ubuntu / Debian / RPM Setup

I have Logstash 5.6.0 running on an AWS EC2 instance using Ubuntu 16.04.3 LTS. Logstash was installed using Debian/RPM. What's the easiest way to install this plugin on that setup? The instructions in the README.md file don't seem to work.

Couldn't find any output plugin named amazon_es

I have just installed the plugin and it shows up in logstash-plugin list.

However when I restart logstash to pick up new configuration:

output {
  amazon_es {
    hosts => ["vpc-endpoint.eu-west-2.es.amazonaws.com"]
    region => "eu-west-2"
    index => "development-logs-%{+dd.MM.YYYY}"
  }
}

I get the following error

[2017-12-11T21:38:17,325][ERROR][logstash.agent ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::PluginLoadingError", :message=>"Couldn't find any output plugin named 'amazon_es'. Are you sure this is correct? Trying to load the amazon_es output plugin resulted in this error: Problems loading the requested plugin named amazon_es of type output. Error: NameError NameError", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/plugins/registry.rb:185:in lookup_pipeline_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/plugin.rb:140:in lookup'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:143:in plugin'", "(eval):135:in <eval>'", "org/jruby/RubyKernel.java:994:in eval'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:82:in initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:215:in initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:35:in execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:335:in block in converge_state'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:141:in with_pipelines'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:332:in block in converge_state'", "org/jruby/RubyArray.java:1734:in each'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:319:in converge_state'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:166:in block in converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:141:in with_pipelines'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:164:in converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:90:in execute'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:362:in block in execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:24:in block in initialize'"]} `

Any help would be greatly appreciated.

Capitalized index name throws error

After trying out this plugin and repeatedly getting the same error, I managed to narrow the problem down to one thing: a capitalised index name.

In my initial setup i had set

index => "USA-redirect-logs-%{+YYYY.MM.dd}"

This unfortunately throws a nasty error in the logstash logs which doesn't give any useful information:

{:timestamp=>"2015-10-06T12:43:56.195000+0000", :message=>"Attempted to send a bulk request to Elasticsearch configured at '[\"http://search-spike-elk-xxxxxxxxxxxxxxxx.eu-west-1.es.amazonaws.com:80\"]', but an error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided?", :client_config=>{:hosts=>["http://search-spike-elk-xxxxxxxxxxxxxxxxxx.eu-west-1.es.amazonaws.com:80"], :region=>"eu-west-1", :aws_access_key_id=>"xxxxxxxxxxxxxxxxxx", :aws_secret_access_key=>"xxxxxxxxxxxxxxxxxxxxx", :transport_options=>{:request=>{:open_timeout=>0, :timeout=>60}, :proxy=>nil}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::AWS, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false}, :error_message=>"undefined method `each_with_index' for nil:NilClass", :error_class=>"NoMethodError", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-0.1.0-java/lib/logstash/outputs/amazon_es.rb:324:in `submit'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-0.1.0-java/lib/logstash/outputs/amazon_es.rb:316:in `submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-0.1.0-java/lib/logstash/outputs/amazon_es.rb:349:in `flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.21/lib/stud/buffer.rb:219:in `buffer_flush'", "org/jruby/RubyHash.java:1341:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.21/lib/stud/buffer.rb:216:in `buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.21/lib/stud/buffer.rb:159:in `buffer_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-0.1.0-java/lib/logstash/outputs/amazon_es.rb:309:in `receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/outputs/base.rb:88:in `handle'", "(eval):131:in `output_func'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/pipeline.rb:244:in `outputworker'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/pipeline.rb:166:in `start_outputs'"], :level=>:error}
{:timestamp=>"2015-10-06T12:43:56.208000+0000", :message=>"Failed to flush outgoing items", :outgoing_count=>25, :exception=>"NoMethodError", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-0.1.0-java/lib/logstash/outputs/amazon_es.rb:324:in `submit'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-0.1.0-java/lib/logstash/outputs/amazon_es.rb:316:in `submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-0.1.0-java/lib/logstash/outputs/amazon_es.rb:349:in `flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.21/lib/stud/buffer.rb:219:in `buffer_flush'", "org/jruby/RubyHash.java:1341:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.21/lib/stud/buffer.rb:216:in `buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.21/lib/stud/buffer.rb:159:in `buffer_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-0.1.0-java/lib/logstash/outputs/amazon_es.rb:309:in `receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/outputs/base.rb:88:in `handle'", "(eval):131:in `output_func'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/pipeline.rb:244:in `outputworker'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.4-java/lib/logstash/pipeline.rb:166:in `start_outputs'"], :level=>:warn}

When I changed the index name to the following, everything started working fine.

index => "usa-redirect-logs-%{+YYYY.MM.dd}"

After a bit of googling I found out that this is not entirely the plugin's fault; it's actually a limitation of Elasticsearch, which does not allow capital letters in index names.
Maybe this should produce a more meaningful error in the logs, or at least be documented somewhere? It took me a while to figure out the cause of the problem.
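As a practical workaround when part of the index name comes from event data, the value can be lowercased before it reaches the output; a minimal sketch, assuming a hypothetical country field feeds the index name:

filter {
  mutate {
    # Elasticsearch rejects index names containing uppercase letters,
    # so lowercase any field interpolated into the index setting.
    lowercase => [ "country" ]
  }
}

output {
  amazon_es {
    hosts => ["search-spike-elk-xxxxxxxx.eu-west-1.es.amazonaws.com"]
    region => "eu-west-1"
    # Literal parts of the index name must be lowercase too.
    index => "%{country}-redirect-logs-%{+YYYY.MM.dd}"
  }
}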

ElasticsearchIllegalArgumentException[explicit index in bulk is not allowed]

I'm getting an error while trying to send data from Logstash. I'm probably doing something silly.

I created an AWS ES cluster with

"rest.action.multi.allow_explicit_index" = false

Logstash version : 2.3.2

Logstash config

input {
  stdin {}
}

output {
  stdout {
    codec => rubydebug
  }

  amazon_es {
    hosts => ["foo.us-east-1.es.amazonaws.com:"]
    region => "us-east-1"
    aws_access_key_id => "ACCESS_KEY"
    aws_secret_access_key => "SECRET_KEY"
    index => "logs"
  }
}

Output error
https://gist.githubusercontent.com/snandam/8c935a678f7366aac2e76f12323ec4eb/raw/63d3dd542763d81e53990ad79ab78d3deb900b74/gistfile1.txt

Any help is appreciated.
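For context, this failure is expected with that advanced option: the plugin writes the target index into the metadata line of every bulk action, which is exactly what rest.action.multi.allow_explicit_index = false forbids. A sketch of one action/source pair as it would appear in the _bulk body (the document content here is hypothetical):

{ "index" : { "_index" : "logs", "_type" : "logs" } }
{ "message" : "hello from stdin" }

Leaving that option at its default of true allows these requests through.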

Delete action throws an error

On Logstash 2.3.4, targeting an Elasticsearch 2.3 cluster, the delete action throws an error, for example with this Logstash configuration:

input { stdin { } }

output {
    amazon_es {
            hosts => ["valid-domain.es.amazonaws.com"]
            action => "delete"
            document_id => "1"
            document_type => "tweet"
            index => "tweets"
    }
}

Logstash returns the following error:

Attempted to send a bulk request to Elasticsearch configured at '["https://valid-domain.es.amazonaws.com:443"]', but an error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided? {:client_config=>{:hosts=>["https://valid-domain.es.amazonaws.com:443"], :region=>"us-east-1", :aws_access_key_id=>nil, :aws_secret_access_key=>nil, :transport_options=>{:request=>{:open_timeout=>0, :timeout=>60}, :proxy=>nil}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::AWS, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false, :http=>{:scheme=>"https", :user=>nil, :password=>nil, :port=>443}}, :error_message=>"[400] {\"error\":{\"root_cause\":[{\"type\":\"illegal_argument_exception\",\"reason\":\"Malformed action/metadata line [2], expected START_OBJECT or END_OBJECT but found [VALUE_STRING]\"}],\"type\":\"illegal_argument_exception\",\"reason\":\"Malformed action/metadata line [2], expected START_OBJECT or END_OBJECT but found [VALUE_STRING]\"},\"status\":400}", :error_class=>"Elasticsearch::Transport::Transport::Errors::BadRequest", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/base.rb:201:in `__raise_transport_error'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/transport/base.rb:312:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-1.0-java/lib/logstash/outputs/amazon_es/aws_transport.rb:45:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.18/lib/elasticsearch/transport/client.rb:128:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.18/lib/elasticsearch/api/actions/bulk.rb:90:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-1.0-java/lib/logstash/outputs/amazon_es/http_client.rb:53:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-1.0-java/lib/logstash/outputs/amazon_es.rb:321:in `submit'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-1.0-java/lib/logstash/outputs/amazon_es.rb:318:in `submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-1.0-java/lib/logstash/outputs/amazon_es.rb:351:in `flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:219:in `buffer_flush'", "org/jruby/RubyHash.java:1342:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:216:in `buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:112:in `buffer_initialize'", "org/jruby/RubyKernel.java:1479:in `loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:110:in `buffer_initialize'"], :level=>:error}

Changing the action from delete to index works and the document shows up on the Elasticsearch cluster.

This might be related to the issue described here: logstash-plugins/logstash-output-elasticsearch#195
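The error message points at the shape of the bulk payload: a delete action is metadata-only, with no source line after it, while an index action is followed by the document body, so a plausible cause is the plugin emitting a source line after the delete action. Both shapes for comparison, using values from the config above (the document body is hypothetical):

{ "index" : { "_index" : "tweets", "_type" : "tweet", "_id" : "1" } }
{ "text" : "document body goes here" }

{ "delete" : { "_index" : "tweets", "_type" : "tweet", "_id" : "1" } }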

Bulk insert errors are not reported in any way

It took switching to HTTP on port 80 and tracing the traffic to finally see why my inserts were not working: the plugin was getting an error response from the _bulk endpoint and not reporting it in the logs at all:

{"took":2,"errors":true,"items":[{"create":{"_index":"logstash-2016.11.19","_type":"json","_id":"AVh-s-0uBVMjC8QgcXDb","status":400,"error":{"type":"mapper_parsing_exception","reason":"failed to parse [facility]","caused_by":{"type":"number_format_exception","reason":"For input string: \"user\""}}}}]}

POST /_bulk HTTP/1.1
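Separately from the reporting gap, the per-item error above is a mapping conflict: facility is already mapped as a number in the index, but this event carries the string "user". A hedged workaround sketch that moves the textual value aside before output (the rename target facility_label is hypothetical):

filter {
  mutate {
    # "facility" is mapped numerically in the existing index; keeping the
    # syslog facility name in a separate field avoids the rejected bulk item.
    rename => { "facility" => "facility_label" }
  }
}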

Error trying to send logs to AWS Elastic

I'm trying to send logs to Elasticsearch running in AWS. When Logstash spins up, it throws this error:

{:timestamp=>"2016-07-19T13:08:47.469000+0000", :message=>"Failed to install template: https: unknown error", :level=>:error}

Any ideas? Some searching suggests changing the protocol to HTTP, which I can't (or don't want to) do in AWS.
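The stray https: in the message suggests the host string is being split on the colon, so the scheme itself ends up treated as a hostname. A minimal sketch of the commonly reported fix, which is to pass the bare endpoint with no scheme prefix (domain name hypothetical):

output {
  amazon_es {
    # No "https://" prefix: plugin versions of this era split hosts on ":"
    # and would otherwise treat the scheme as the hostname.
    hosts => ["search-mydomain.us-east-1.es.amazonaws.com"]
    region => "us-east-1"
  }
}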

signature does not match

I keep getting this error (I've extracted what I think are the relevant parts):

Attempted to send a bulk request to Elasticsearch ... but an error occurred and it failed!
403, The request signature we calculated does not match the signature you provided. Check your AWS Secret Access Key and signing method.

The strange thing is that it works sometimes, but not others (with the same configuration). I can't be certain, but it seems to be more likely to work when there are fewer logs being sent. But once I get this error, nothing seems to go through to Elasticsearch.

HTTPS support missing

It appears that this plugin does not support HTTPS at all?

While I like the features that this offers (AWS Signature), the lack of HTTPS support makes it largely unusable.

[logstash.outputs.amazones] Failed to flush outgoing items

Hello, I'm using:

AWS Elasticsearch 5.1
Logstash 5.2.2-2

Everything works if I fully open the permissions policy on ES. I then made the EC2 instance assume a role that is "fully" allowed to talk to ES, but that was not working. I found out I needed to send the session token with the request, which is when I found this plugin, so I installed it:

/usr/share/logstash/bin/logstash-plugin list | grep amazon_es
logstash-output-amazon_es

But I keep getting this in the logs:

[2017-04-04T17:48:53,709][WARN ][logstash.outputs.amazones] Failed to flush outgoing items {:outgoing_count=>1, :exception=>"Elasticsearch::Transport::Transport::Errors::BadRequest", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-5.0.3/lib/elasticsearch/transport/transport/base.rb:201:in `__raise_transport_error'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-5.0.3/lib/elasticsearch/transport/transport/base.rb:318:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-2.0.1-java/lib/logstash/outputs/amazon_es/aws_transport.rb:48:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-5.0.3/lib/elasticsearch/transport/client.rb:131:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-5.0.3/lib/elasticsearch/api/actions/bulk.rb:95:in `bulk'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-2.0.1-java/lib/logstash/outputs/amazon_es/http_client.rb:53:in `bulk'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-2.0.1-java/lib/logstash/outputs/amazon_es.rb:321:in `submit'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-2.0.1-java/lib/logstash/outputs/amazon_es.rb:318:in `submit'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-2.0.1-java/lib/logstash/outputs/amazon_es.rb:351:in `flush'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:219:in `buffer_flush'", "org/jruby/RubyHash.java:1342:in `each'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:216:in `buffer_flush'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.22/lib/stud/buffer.rb:159:in `buffer_receive'", "/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-2.0.1-java/lib/logstash/outputs/amazon_es.rb:311:in `receive'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:92:in `multi_receive'", "org/jruby/RubyArray.java:1613:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:92:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/legacy.rb:19:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:43:in `multi_receive'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:414:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:413:in `output_batch'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:371:in `worker_loop'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:331:in `start_workers'"]}

My config:

input {
  file { 
    path => ["/var/log/suricata/eve.json"]
    sincedb_path => ["/var/lib/logstash/since.db"]
    codec =>   json 
    type => "SuricataIDPS" 
  }

}

filter {
  if [type] == "SuricataIDPS" {
    date {
      match => [ "timestamp", "ISO8601" ]
    }
    #ruby {
    #  code => "if ['event_type'] == 'fileinfo'; event.set('[fileinfo][type]', event.set('[fileinfo][magic].to_s.split(',')[0]; end;')"
    #}
  }

  if [src_ip]  {
    geoip {
      source => "src_ip" 
      target => "geoip" 
      #database => "/etc/logstash/GeoLiteCity.dat" 
      #add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
      #add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
    }
    mutate {
      convert => [ "[geoip][coordinates]", "float" ]
    }
    if ![geoip][ip] {
      if [dest_ip]  {
        geoip {
          source => "dest_ip" 
          target => "geoip" 
          #database => "/etc/logstash/GeoLiteCity.dat" 
          #add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
          #add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}"  ]
        }
        mutate {
          convert => [ "[geoip][coordinates]", "float" ]
        }
      }
    }
  }
}

output { 
  amazon_es {
    hosts => ["search-HERE_INSTANCE_NAME_ETC.us-east-1.es.amazonaws.com"]
    region => "us-east-1"
    index => "logstash-suricata-%{+YYYY.MM.dd}"
    template => "/etc/logstash/templates/suricata-template.json"
    template_overwrite => true
  }
}

I'm stuck on this. I do find similar issues, but they are from 2-3 years ago, with different versions or different configurations. Any help will be appreciated.
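One way to narrow down a bulk 400 like this is to tee the same events to stdout so the document that triggers the rejection can be identified and replayed by hand; a sketch reusing the output block above:

output {
  # Print each event as it is sent, so the one that triggers the 400
  # can be inspected and tested against the cluster directly.
  stdout { codec => rubydebug }

  amazon_es {
    hosts => ["search-HERE_INSTANCE_NAME_ETC.us-east-1.es.amazonaws.com"]
    region => "us-east-1"
    index => "logstash-suricata-%{+YYYY.MM.dd}"
    template => "/etc/logstash/templates/suricata-template.json"
    template_overwrite => true
  }
}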

Template not installed by logstash-output-amazon_es

I am using logstash-output-elasticsearch to ingest data into my local Elasticsearch instance. I now have to do a similar setup on Amazon ES, so I am using the logstash-output-amazon_es plugin. I have extended my former logstash-output-elasticsearch config with AWS access and secret keys; the other config params remain the same:

logstash.conf

input {
    jdbc {
        jdbc_driver_library => '/opt/postgresql-9.4.1208.jar'
        jdbc_driver_class => 'org.postgresql.jdbc.driver'
        jdbc_connection_string => 'jdbc:postgresql://db.region.rds.amazonaws.com:5432/udb'
        jdbc_user => 'xxx'
        jdbc_password => 'xxx'
        statement_filepath => './sql.txt'
    }
}

output {
    amazon_es {
        hosts => ['xxx.region.es.amazonaws.com']
        region => 'region'
        aws_access_key_id => 'xxx'
        aws_secret_access_key => 'xxx'
        index => 'profile'
        document_type => 'profile'
        manage_template => true
        template => './template.json'
        template_name => 'profile'
        template_overwrite => true
        document_id => '%{id}'
    }
}

template.json

{
    "template": "profile*",
    "settings" : {
        "index.mapper.dynamic": false,
        "index" : { ... },
        "analysis": {
            "char_filter": { ... },
            "tokenizer": { ... },
            "filter": { ... },
            "analyzer": { ... }
        }
    },
    "mappings": {
        "profile": {
            "_all": { "enabled": false },
            "properties": {
                ...
            }
        }
    }
}

The data is indexed fine, but without my custom mappings from template.json when using amazon_es as the output (the template is installed correctly with elasticsearch as the output). On further investigation, I see that both plugins install the template from their source code, here and here.

Their respective log messages are:

amazon_es:

Automatic template management enabled {:manage_template=>"true", :level=>:info}
Using mapping template {:template=>{"template"=>"profile*" ...

elasticsearch:

Using mapping template from {:path=>"./template.json", :level=>:info}
Attempting to install template {:manage_template=>{"template"=>"profile*", ...

My Logstash version is 2.3.4 and my Elasticsearch version is 2.3.2, so there should be no compatibility issues.

I need help figuring out what might be amiss.
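As a stopgap while the plugin's template handling is sorted out, the template can be installed against the domain directly; a sketch assuming the domain's access policy permits the unsigned request (an IAM-restricted domain would need a SigV4-signing proxy instead):

curl -XPUT "https://xxx.region.es.amazonaws.com/_template/profile" -d @template.json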

Logstash randomly stops flushing outgoing logs

I have the latest Logstash installed on Debian Jessie (64-bit):

logstash:
  Installed: 1:1.5.4-1
  Candidate: 1:1.5.4-1
  Version table:
 *** 1:1.5.4-1 0
        500 http://packages.elastic.co/logstash/1.5/debian/ stable/main amd64 Packages
        100 /var/lib/dpkg/status

The plugin was installed with the bin/plugin install logstash-output-amazon_es command. The configuration is as follows:

output {
    amazon_es {
        hosts => ["search-logs-.......us-west-1.es.amazonaws.com"]
        region => "us-west-1"
        aws_access_key_id => 'ACCESS_KEY'
        aws_secret_access_key => 'SECRET_KEY' 
        index => "logstash-%{+YYYY.MM.dd}"
    }
}

It points to a newly created AWS Elasticsearch cluster of 3 m3.medium nodes, each with 50 GB of EBS storage, and default settings. This configuration worked for some hours; then Logstash began logging the following exception:

{:timestamp=>"2015-10-10T09:10:14.785000+0000", :message=>"Failed to flush outgoing items", :outgoing_count=>1, :exception=>"NoMethodError", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-0.1.0-java/lib/logstash/outputs/amazon_es.rb:324:in `submit'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-0.1.0-java/lib/logstash/outputs/amazon_es.rb:316:in `submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-amazon_es-0.1.0-java/lib/logstash/outputs/amazon_es.rb:349:in `flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.21/lib/stud/buffer.rb:219:in `buffer_flush'", "org/jruby/RubyHash.java:1341:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.21/lib/stud/buffer.rb:216:in `buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.21/lib/stud/buffer.rb:193:in `buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.21/lib/stud/buffer.rb:112:in `buffer_initialize'", "org/jruby/RubyKernel.java:1511:in `loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.21/lib/stud/buffer.rb:110:in `buffer_initialize'"], :level=>:warn}

Restarting Logstash has no impact on this. The backtrace is identical to the NoMethodError in the capitalized-index report above (amazon_es.rb:324 in plugin 0.1.0), so it may be the same failure path when a bulk response can't be handled.

Release the transport implementation as a rubygem

Do you have a plan to release the transport layer as a library?
I'm a maintainer of Fluentd, and I receive many questions asking when Fluentd will support Amazon ES.
If you release the transport layer as a library, we could easily support Amazon ES with the existing Elasticsearch plugin.

If there is another, better way, please let me know. Thanks.
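For reference, the signed transport already appears as a pluggable class in the traces above (Elasticsearch::Transport::Transport::HTTP::AWS), so packaging it separately would let any elasticsearch-ruby client opt in the way that library selects any other transport. A hypothetical sketch of what that could look like, not a current API:

require 'elasticsearch'

# Hypothetical: if the signing transport were shipped as its own gem, a
# plain elasticsearch-ruby client could select it via :transport_class.
client = Elasticsearch::Client.new(
  hosts: ['search-mydomain.us-west-2.es.amazonaws.com'],
  transport_class: Elasticsearch::Transport::Transport::HTTP::AWS
)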
