
kibana-docker's Introduction

This repository is no longer used to generate the official Kibana Docker image from Elastic.

To build Kibana Docker images for pre-6.6 releases, switch branches in this repo to the matching release.

kibana-docker's People

Contributors

andrewmclagan, azasypkin, benqua, conky5, dliappis, giannello, jakommo, jarpy, jbudz, jonahbull, mcascallares, mgreau, mistic, remik, sorenlouv, wallies, watson, weltenwort, ycombinator


kibana-docker's Issues

Provide Alpine base image

Since this repo now houses the official Dockerfile, could you please provide one based on the much smaller Alpine image instead of the Ubuntu distro?

Problem pulling kibana 5.4.0 from docker for Mac

Mac OS Sierra 10.12.4
Docker Version 17.03.1-ce-mac5 (16048)
Channel: stable
b18e2a50cc
docker-compose version 1.11.2, build dfed245

stack-docker $ docker-compose up
Creating network "stackdocker_stack" with the default driver
Pulling kibana (docker.elastic.co/kibana/kibana:5.4.0)...
5.4.0: Pulling from kibana/kibana
59e69571f6c7: Already exists
1d183283a292: Pull complete
cb96f1762d96: Downloading [========================>                        ] 34.06 MB/52.9 MB   <- hangs here for a long time
443c0035bf3c: Download complete
44521dcfe57b: Download complete
b5ed3e53dd29: Download complete
7e6ba66696b9: Download complete
64f9b10fdaa1: Download complete
ERROR: for create_logstash_index_pattern Container "8987df0a59c5" is unhealthy.
ERROR: for import_dashboards Container "8987df0a59c5" is unhealthy.
ERROR: for set_default_index_pattern Container "8987df0a59c5" is unhealthy.
ERROR: Encountered errors while bringing up the project.
$ docker ps
8987df0a59c5 docker.elastic.co/kibana/kibana:5.4.0 "/bin/sh -c /usr/l..." 18 minutes ago Up 18 minutes (unhealthy) 127.0.0.1:5601->5601/tcp stackdocker_kibana_1

Kubernetes with docker tag 5.6.2

I'm having an issue running the official 5.6.2 image in a pod.

I'm using helm to install the package.

And how should the liveness and readiness probes be set up? Currently I have this:

livenessProbe:
  httpGet:
    path: /
    port: http
  initialDelaySeconds: 30
readinessProbe:
  httpGet:
    path: /
    port: http
  initialDelaySeconds: 10

But instead I get this error:

Readiness probe failed: Get http://10.0.1.42:5601/: dial tcp 10.0.1.42:5601: getsockopt: connection refused

And when it does run, I get this issue (screenshot omitted).
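A hedged sketch of probe settings that sidesteps the startup race: Kibana only starts listening on 5601 after it finishes initializing, so generous delays and thresholds help (the /api/status path and the timing values here are assumptions, not verified against 5.6.2):

livenessProbe:
  httpGet:
    path: /api/status
    port: 5601
  initialDelaySeconds: 60
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /api/status
    port: 5601
  initialDelaySeconds: 30
  periodSeconds: 10
  failureThreshold: 10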

Error: EACCES: permission denied

Hello,

Is anyone really using this image?
The x-pack plugin is installed as root, then the user context switches to kibana, but some files in /usr/share/kibana/optimize/ remain owned by root, which causes service startup to fail with:

Error: EACCES: permission denied, open '/usr/share/kibana/optimize/bundles/graph.entry.js'

Tried 5.0.1 and 5.1.1

Note: it seems to be happening only if SERVER_BASEPATH is being specified.

Thank you

Issues with kibana5.5.1 image

Hi team,

I have pulled the Kibana 5.5.1 image, docker.elastic.co/kibana/kibana:5.5.1.

When I start the container locally I don't get the Kibana dashboard web UI (screenshot omitted). Please help.

This issue occurs with Kibana 5.5.1 only; the latest version works fine.
But we need to use Kibana 5.5.1. Please help.

Tribe Support

Add tribe node support, e.g.:
elasticsearch.tribe.url: localhost:9200

Test for missing Environment Variables

Is there some way we can fail a test when a Kibana yml setting is missing as an environment variable? For example, SERVER_DEFAULTROUTE (server.defaultRoute) does not appear to be supported.
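One hedged way to script such a check: the image's entrypoint translates whitelisted environment variables into Kibana settings, so a test could grep the entrypoint for the expected key (the /usr/local/bin/kibana-docker path is taken from other reports in this repo; treat the whole approach as an assumption about how the whitelist works):

docker run --rm docker.elastic.co/kibana/kibana:5.6.3 \
  grep -q 'server.defaultRoute' /usr/local/bin/kibana-docker \
  || echo "FAIL: server.defaultRoute is not whitelisted in the entrypoint"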

Ensure East Asian languages render correctly.

Kibana requires correct fonts to be installed for rendering reports in various languages. Ensure that they are present and that reports render correctly for (at least) the major East Asian languages.

Compose 2.1 variable substitution for XPACK_SECURITY_SECURECOOKIES in kibana docker?

I have a Compose 2.1 file and I'm trying to set XPACK_SECURITY_SECURECOOKIES as a variable, because I need it to be false when run on my laptop locally but true when run behind an ELB that handles the SSL termination.

So I tried XPACK_SECURITY_SECURECOOKIES: "$${XPACK_SECURITY_SECURECOOKIES:-false}" under environment. I also tried it without quotes: XPACK_SECURITY_SECURECOOKIES: $${XPACK_SECURITY_SECURECOOKIES:-false}

But I get this stack trace either way:

sa-demo-cyclops-kibana | {"type":"log","@timestamp":"2017-10-15T07:30:53Z","tags":["fatal"],"pid":1,"level":"fatal","message":"child "xpack" fails because [child "security" fails because [child "secureCookies" fails because ["secureCookies" must be a boolean]]]","error":{"message":"child "xpack" fails because [child "security" fails because [child "secureCookies" fails because ["secureCookies" must be a boolean]]]","name":"ValidationError","stack":"ValidationError: child "xpack" fails because [child "security" fails because [child "secureCookies" fails because ["secureCookies" must be a boolean]]]\n at Object.exports.process (/usr/share/kibana/node_modules/joi/lib/errors.js:181:19)\n at _validateWithOptions (/usr/share/kibana/node_modules/joi/lib/any.js:651:31)\n at root.validate (/usr/share/kibana/node_modules/joi/lib/index.js:121:23)\n at Config._commit (/usr/share/kibana/src/server/config/config.js:114:35)\n at Config.set (/usr/share/kibana/src/server/config/config.js:84:10)\n at Config.extendSchema (/usr/share/kibana/src/server/config/config.js:57:10)\n at /usr/share/kibana/src/server/plugins/plugin_collection.js:19:12\n at next (native)\n at step (/usr/share/kibana/src/server/plugins/plugin_collection.js:49:191)\n at /usr/share/kibana/src/server/plugins/plugin_collection.js:49:361"}}
sa-demo-cyclops-kibana | FATAL { ValidationError: child "xpack" fails because [child "security" fails because [child "secureCookies" fails because ["secureCookies" must be a boolean]]]
sa-demo-cyclops-kibana | at Object.exports.process (/usr/share/kibana/node_modules/joi/lib/errors.js:181:19)
sa-demo-cyclops-kibana | at _validateWithOptions (/usr/share/kibana/node_modules/joi/lib/any.js:651:31)
sa-demo-cyclops-kibana | at root.validate (/usr/share/kibana/node_modules/joi/lib/index.js:121:23)
sa-demo-cyclops-kibana | at Config._commit (/usr/share/kibana/src/server/config/config.js:114:35)
sa-demo-cyclops-kibana | at Config.set (/usr/share/kibana/src/server/config/config.js:84:10)
sa-demo-cyclops-kibana | at Config.extendSchema (/usr/share/kibana/src/server/config/config.js:57:10)
sa-demo-cyclops-kibana | at /usr/share/kibana/src/server/plugins/plugin_collection.js:19:12
sa-demo-cyclops-kibana | at next (native)
sa-demo-cyclops-kibana | at step (/usr/share/kibana/src/server/plugins/plugin_collection.js:49:191)
sa-demo-cyclops-kibana | at /usr/share/kibana/src/server/plugins/plugin_collection.js:49:361
sa-demo-cyclops-kibana | isJoi: true,
sa-demo-cyclops-kibana | name: 'ValidationError',
sa-demo-cyclops-kibana | details:
sa-demo-cyclops-kibana | [ { message: '"secureCookies" must be a boolean',
sa-demo-cyclops-kibana | path: 'xpack.security.secureCookies',
sa-demo-cyclops-kibana | type: 'boolean.base',
sa-demo-cyclops-kibana | context: [Object] } ],
sa-demo-cyclops-kibana | _object:
sa-demo-cyclops-kibana | { pkg:
sa-demo-cyclops-kibana | { version: '5.5.1',
sa-demo-cyclops-kibana | branch: '5.x',
sa-demo-cyclops-kibana | buildNum: 15405,
sa-demo-cyclops-kibana | buildSha: '32ff80dbccdff23911015425bfaf4ae32ea0c0c1' },
sa-demo-cyclops-kibana | dev: { basePathProxyTarget: 5603 },
sa-demo-cyclops-kibana | pid: { exclusive: false },
sa-demo-cyclops-kibana | cpu: { cgroup: [Object] },
sa-demo-cyclops-kibana | cpuacct: { cgroup: [Object] },
sa-demo-cyclops-kibana | server:
sa-demo-cyclops-kibana | { name: 'kibana',
sa-demo-cyclops-kibana | host: '0.0.0.0',
sa-demo-cyclops-kibana | port: 5601,
sa-demo-cyclops-kibana | maxPayloadBytes: 1048576,
sa-demo-cyclops-kibana | autoListen: true,
sa-demo-cyclops-kibana | defaultRoute: '/app/kibana',
sa-demo-cyclops-kibana | basePath: '',
sa-demo-cyclops-kibana | ssl: [Object],
sa-demo-cyclops-kibana | cors: false,
sa-demo-cyclops-kibana | xsrf: [Object] },
sa-demo-cyclops-kibana | logging:
sa-demo-cyclops-kibana | { silent: false,
sa-demo-cyclops-kibana | quiet: false,
sa-demo-cyclops-kibana | verbose: false,
sa-demo-cyclops-kibana | events: {},
sa-demo-cyclops-kibana | dest: 'stdout',
sa-demo-cyclops-kibana | filter: {},
sa-demo-cyclops-kibana | json: true },
sa-demo-cyclops-kibana | ops: { interval: 5000 },
sa-demo-cyclops-kibana | plugins: { scanDirs: [Object], paths: [], initialize: true },
sa-demo-cyclops-kibana | path: { data: '/usr/share/kibana/data' },
sa-demo-cyclops-kibana | optimize:
sa-demo-cyclops-kibana | { enabled: true,
sa-demo-cyclops-kibana | bundleFilter: '!tests',
sa-demo-cyclops-kibana | bundleDir: '/usr/share/kibana/optimize/bundles',
sa-demo-cyclops-kibana | viewCaching: true,
sa-demo-cyclops-kibana | lazy: false,
sa-demo-cyclops-kibana | lazyPort: 5602,
sa-demo-cyclops-kibana | lazyHost: 'localhost',
sa-demo-cyclops-kibana | lazyPrebuild: false,
sa-demo-cyclops-kibana | lazyProxyTimeout: 300000,
sa-demo-cyclops-kibana | useBundleCache: true,
sa-demo-cyclops-kibana | profile: false },
sa-demo-cyclops-kibana | status: { allowAnonymous: false, v6ApiFormat: false },
sa-demo-cyclops-kibana | map: { manifestServiceUrl: 'https://catalogue.maps.elastic.co/v1/manifest' },
sa-demo-cyclops-kibana | tilemap: { options: [Object] },
sa-demo-cyclops-kibana | regionmap: {},
sa-demo-cyclops-kibana | uiSettings: { enabled: true },
sa-demo-cyclops-kibana | i18n: { defaultLocale: 'en' },
sa-demo-cyclops-kibana | xpack:
sa-demo-cyclops-kibana | { xpack_main: [Object],
sa-demo-cyclops-kibana | graph: [Object],
sa-demo-cyclops-kibana | monitoring: [Object],
sa-demo-cyclops-kibana | reporting: [Object],
sa-demo-cyclops-kibana | security: [Object] } },
sa-demo-cyclops-kibana | annotate: [Function] }
sa-demo-cyclops-kibana exited with code 1

Any ideas? This is with Kibana 5.6.3.
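A hedged reading of what's happening: in Compose, "$$" is an escape that yields a literal dollar sign, so both attempts pass the literal string ${XPACK_SECURITY_SECURECOOKIES:-false} into the container, which Kibana's schema then rejects as a non-boolean. A single "$" should get the intended host-side substitution:

environment:
  # a single "$" makes Compose substitute from the shell environment at `up` time;
  # "$$" escapes the dollar sign and passes the literal string through
  XPACK_SECURITY_SECURECOOKIES: "${XPACK_SECURITY_SECURECOOKIES:-false}"

(As an aside, the dump shows pkg.version 5.5.1 even though the report says 5.6.3, so the tag actually in use may be worth double-checking.)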

run docker.elastic.co/kibana/kibana:6.2.2 image error

It seems there is an error in the image:
$ docker run -d -p 5601:5601 docker.elastic.co/kibana/kibana:6.2.2

$ docker logs 948cfd241698
module.js:96
    throw e;
    ^

SyntaxError: Error parsing /usr/share/kibana/node_modules/babel-register/package.json: Unexpected token

Unable to start Kibana 5.3.0 without X-Pack

Our Elasticsearch 5.3.0 cluster does not have X-Pack installed, and I have no plans to install it.

However, when running the Kibana 5.3.0 Docker image, it consistently switches to Red status because the cluster does not support X-Pack. How do you disable X-Pack completely?

I am running Kibana locally as below...

docker run -e ELASTICSEARCH_URL=http://10.2.8.243:9200 -e KIBANA_INDEX=.kibana -e XPACK_SECURITY_ENABLED=false -e XPACK_MONITORING_ENABLED=false -e SERVER_NAME=localhost -p 5601:5601 docker.elastic.co/kibana/kibana:5.3.0
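A hedged sketch of disabling it entirely by removing the plugin at container start; this mirrors the kibana-plugin/kibana-docker invocation that appears in other reports in this repo, so the entrypoint path is an assumption:

docker run -e ELASTICSEARCH_URL=http://10.2.8.243:9200 -p 5601:5601 \
  docker.elastic.co/kibana/kibana:5.3.0 \
  /bin/bash -c "bin/kibana-plugin remove x-pack; /usr/local/bin/kibana-docker"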

Docker container exiting with code 0 & no logs

I have Elasticsearch up and running in a Docker container, accessible from other containers at http://elasticsearch. I'm trying to connect up Kibana, and no matter what I try, the container simply exits with error code 0 and no logs whatsoever (i.e. when running docker logs [id]).

This is the command I'm using:

docker run --network private \
  -e ELASTICSEARCH_URL=http://elasticsearch:9200 \
  -e XPACK_MONITORING_ENABLED=false \
  -p 5601:5601 \
  docker.elastic.co/kibana/kibana:6.0.0

I've tried both the kibana and kibana-oss versions. I've tried disabling x-pack, and I've tried turning on verbose logging. Always with the same results.

I've also tried getting a /bin/bash shell within the container and running the kibana start command directly. Again, nothing.

Interestingly, I've also tried this:

$ docker run -it --network private \
>   node:6.9.1 \
>   /bin/bash -c "ping elasticsearch"
PING elasticsearch (10.0.1.11): 56 data bytes
64 bytes from 10.0.1.11: icmp_seq=0 ttl=64 time=0.045 ms
64 bytes from 10.0.1.11: icmp_seq=1 ttl=64 time=0.058 ms
64 bytes from 10.0.1.11: icmp_seq=2 ttl=64 time=0.071 ms
^C--- elasticsearch ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max/stddev = 0.045/0.064/0.079/0.000 ms

and this:

$ docker run -it --network private \
>   docker.elastic.co/kibana/kibana:6.0.0 \
>   /bin/bash -c "ping elasticsearch"
ping: elasticsearch: Name or service not known
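For what it's worth, a curl-based check may be more telling than ping here (assuming curl is present in the image): it exercises the same resolver, so if it also fails, the problem is DNS on that network rather than anything Kibana-specific:

$ docker run -it --network private \
>   docker.elastic.co/kibana/kibana:6.0.0 \
>   /bin/bash -c "curl -s http://elasticsearch:9200"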

Either I've missed something massively obvious or I'm going insane... I've searched this forum, GitHub issues, and Google/StackOverflow, and found no similar scenarios.

I have also raised it here: https://discuss.elastic.co/t/docker-container-exiting-with-code-0-no-logs/109390

Please help! Thanks.

Troubles using the docker image > SERVER_HOST environment var

I had a lot of trouble using the official Docker image. Something seems off with the SERVER_HOST env config: using localhost, or not setting it at all, prevented me from accessing the Kibana server (either not reachable, or it crashes on running the container). I'm using Docker 17.03.1-ce-mac12 (17661)

Here's my working Dockerfile in case anyone else has trouble getting this to work:

# this should match your ES version, in my case 5.1.1
FROM docker.elastic.co/kibana/kibana:5.1.1

# since I don't have X-Pack on my cluster, remove it from the image; the grep
# pre-runs Kibana's optimize step during the build so it isn't redone on start
RUN bin/kibana-plugin remove x-pack && \
    kibana 2>&1 | grep -m 1 "Optimization of .* complete"

# url to the cluster, I specified protocol and port here as well (using bonsai)
ENV ELASTICSEARCH_URL=https://yoururltoescluster:443

# 0.0.0.0 binds Kibana to all interfaces; "localhost" would only listen inside
# the container, so Docker's published ports couldn't reach it
ENV SERVER_HOST=0.0.0.0

# only set these if you need basic auth
ENV ELASTICSEARCH_USERNAME=YOURUSERNAME
ENV ELASTICSEARCH_PASSWORD=YOURPASS

EXPOSE 5601

Array Type Environment Variables Treated as Strings

Initially I posted this issue against Kibana, as I was passing my command-line args incorrectly in my testing: elastic/kibana#19773.

I had valued the environment variable in my docker-compose.yml as follows:
ELASTICSEARCH_REQUESTHEADERSWHITELIST=[mycustomheader,referer]

I expected both referer and mycustomheader to be passed to Elasticsearch on requests from Kibana. They were not, because the configuration was set to an array with a single element: [mycustomheader,referer].

If I set it as a single value, ELASTICSEARCH_REQUESTHEADERSWHITELIST=mycustomheader, the header is passed, but unfortunately I need to include more than one header.

My workaround right now is to set the configuration within kibana.yml, but it would be helpful if array-type environment variables could be parsed properly.
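For reference, the kibana.yml workaround looks like this, assuming the env var maps to elasticsearch.requestHeadersWhitelist; YAML parses the list natively, so no string-splitting is involved:

elasticsearch.requestHeadersWhitelist: [ mycustomheader, referer ]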

Kibana docker image doesn't connect to elasticsearch docker image

There are no specific instructions on how to set up Kibana to connect based on the elasticsearch Docker image's default settings.
ES ran fine with the default docker run command:
docker run -p 9200:9200 -e "http.host=0.0.0.0" -e "transport.host=127.0.0.1" docker.elastic.co/elasticsearch/elasticsearch:5.2.2

kibana fails with this command:
docker run -e "elasticsearch.url=http://localhost:9200" -e "elasticsearch.password=changeme" -e "elasticsearch.username=elastic" docker.elastic.co/kibana/kibana:5.2.2

trying with this command (the image presumably intended here is the Kibana one):
docker run -e "ELASTICSEARCH_URL=http://localhost:9200" -e "ELASTICSEARCH_PASSWORD=changeme" -e "ELASTICSEARCH_USERNAME=elastic" docker.elastic.co/kibana/kibana:5.2.2

Failed to obtain artifacts required to build

$ ELASTIC_VERSION=6.2.5 make

Renders:

<...>

Step 6/15 : RUN curl -Ls https://artifacts.elastic.co/downloads/kibana/kibana-6.2.5-linux-x86_64.tar.gz | tar --strip-components=1 -zxf - &&     ln -s /usr/share/kibana /opt/kibana
 ---> Running in beaaff71ad43

gzip: stdin: not in gzip format
tar: Child returned status 1
tar: Error is not recoverable: exiting now
The command '/bin/sh -c curl -Ls https://artifacts.elastic.co/downloads/kibana/kibana-6.2.5-linux-x86_64.tar.gz | tar --strip-components=1 -zxf - &&     ln -s /usr/share/kibana /opt/kibana' returned a non-zero code: 2
Error response from daemon: No such image: docker.elastic.co/kibana/kibana-x-pack:6.2.5
Makefile:40: recipe for target 'build' failed
make: *** [build] Error 1
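A hedged pre-flight check: if the tarball for the requested version hasn't been published yet, curl pipes the server's error page straight into tar, which produces exactly this "not in gzip format" failure. Checking the headers first makes the cause obvious:

curl -sI https://artifacts.elastic.co/downloads/kibana/kibana-6.2.5-linux-x86_64.tar.gz | head -n 1
# anything other than a 200 response means there is no artifact to download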

Does not build on Python > 3.5

Because python3.5 is hardcoded in the venv command, modern macOS machines (I'm currently on the High Sierra beta) don't run make correctly, although the build works fine if it's changed to python3.
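A hedged sketch of the change, assuming the Makefile's venv step boils down to an interpreter invocation like this (the .venv path is illustrative):

python3 -m venv .venv    # instead of: python3.5 -m venv .venv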

Support for Plugins

I have the plugins folder outside the container and my volume is mapped correctly. However, how do I install plugins when using this new Docker container? The one I am looking at specifically is https://github.com/nreese/kibana-time-plugin.
I did /bin/bash into the container, but the OS environment is completely different than the previous official/kibana Docker container on Docker Hub.
To install this plugin I need git and bower installed in the container.
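A hedged sketch of one way to layer a plugin into a custom image, following the Dockerfile pattern shown elsewhere in these reports; the plugin URL is hypothetical, and whether git/bower can simply be yum-installed for this plugin's build is an assumption:

FROM docker.elastic.co/kibana/kibana:5.6.3
USER root
RUN yum install -y git && yum clean all   # only if the plugin's build step needs it
USER kibana
RUN bin/kibana-plugin install https://example.com/kibana-time-plugin.zip   # hypothetical URL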

Unable to open Log file for Write

I'm encountering this issue.

I'm trying to write the logs out to a file on the host, shared with the Docker image. I'd like to have access to Kibana's logs instead of having to print them to the screen/stdout.

Here is the error during docker-compose up

kibana_1    |       throw er; // Unhandled 'error' event
kibana_1    |       ^
kibana_1    | 
kibana_1    | Error: EACCES: permission denied, open '/var/log/kibana/kibana.log'
kibana_1    |     at Error (native)
panopticondockerkibana_kibana_1 exited with code 1

One thing to note is that the elasticsearch Docker containers write out fine, no problem: they create the file structure for the log directory and begin writing. Kibana and Logstash don't seem to have the same luxury.

Here is my docker-compose:

elastic01:
 image: elasticsearch:latest
 ports:
   - "127.0.0.1:9200:9200"
 expose:
   - "9200"
   - "9300"
 command: elasticsearch -Ecluster.name="Elastic" -Ehttp.host="0.0.0.0" -Enetwork.host="0.0.0.0"
 volumes:
   - "./store/elasticsearch/data/e01:/usr/share/elasticsearch/data"
   - "./build/elasticsearch/log4j2.properties:/usr/share/elasticsearch/config/log4j2.properties"
   - "./logs/elasticsearch:/usr/share/elasticsearch/logs"

elastic02:
 image: elasticsearch:latest
 links:
   - elastic01:elastic01
 command: elasticsearch -Ediscovery.zen.ping.unicast.hosts=elastic01 -Ecluster.name="Elastic" -Ehttp.host="0.0.0.0" -Enetwork.host="0.0.0.0"
 expose:
   - "9200"
   - "9300"
 volumes:
   - "./store/elasticsearch/data/e02:/usr/share/elasticsearch/data"
   - "./build/elasticsearch/log4j2.properties:/usr/share/elasticsearch/config/log4j2.properties"
   - "./logs/elasticsearch:/usr/share/elasticsearch/logs"

elastic03:
 image: elasticsearch:latest
 links:
   - elastic01:elastic01
 command: elasticsearch -Ediscovery.zen.ping.unicast.hosts=elastic01 -Ecluster.name="Elastic" -Ehttp.host="0.0.0.0" -Enetwork.host="0.0.0.0"
 expose:
   - "9200"
   - "9300"
 volumes:
   - "./store/elasticsearch/data/e03:/usr/share/elasticsearch/data"
   - "./build/elasticsearch/log4j2.properties:/usr/share/elasticsearch/config/log4j2.properties"
   - "./logs/elasticsearch:/usr/share/elasticsearch/logs"

logstash:
 image: logstash:latest
 ports:
   - "5044:5044"
   - "127.0.0.1:9600:9600"
 links:
   - elastic01:elastic01
 volumes:
   - "./build/logstash:/etc/logstash"
   - "./build/certs/cert.crt:/etc/pki/tls/certs/cert.crt"
   - "./build/certs/key.key:/etc/pki/tls/private/key.key"
   - "./build/logstash/log4js.properties:/etc/logstash/log4js.properties"
   - "./logs/logstash:/var/log/logstash"

kibana:
 image: kibana:latest
 ports:
   - "5601:5601"
 links:
   - elastic01:elastic01
 volumes:
   - "./build/kibana:/etc/kibana"
   - "./logs/kibana:/var/log/kibana"

Here is my kibana.yml

elasticsearch.url: "http://elastic01:9200"
logging.dest: "/var/log/kibana/kibana.log"
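A hedged workaround: the Kibana process runs as a non-root user inside the container (uid 1000 in the official image, per other reports in this repo; verify for the Docker Hub image used here), so the bind-mounted host directory needs to be writable by that uid before the container starts:

mkdir -p ./logs/kibana
sudo chown -R 1000:1000 ./logs/kibana   # or, more permissively: chmod -R a+w ./logs/kibana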

Docker Version / Ubuntu

Linux ubuntu 4.4.0-62-generic #83-Ubuntu SMP Wed Jan 18 14:10:15 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
Distributor ID:	Ubuntu
Description:	Ubuntu 16.04.2 LTS
Release:	16.04
Codename:	xenial

Install the SENTINL plug-in

Starting from docker.elastic.co/kibana/kibana:5.6.3, I am installing the SENTINL plugin, but the installation cannot continue: the root password is required.

root@ubuntu:~# docker exec -it kibana /bin/sh
sh-4.2$ yum install fontconfig freetype                                                                                                                   
Loaded plugins: fastestmirror, ovl
ovl: Error while doing RPMdb copy-up:
[Errno 13] Permission denied: '/var/lib/rpm/__db.001'
You need to be root to perform this command.
sh-4.2$ su - root
Password: 

So, what is the root password?
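Most likely there is no root password to type in; images like this usually ship with the root account locked. A hedged alternative is to start the exec session as root directly:

docker exec -it -u root kibana /bin/sh -c "yum install -y fontconfig freetype"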

Kibana unable to connect to Elasticsearch even when ssl keys/certs passed in as env variables

Kibana version: 5.5.2

Elasticsearch version: 2.4.1

Server OS version: Red Hat Enterprise Linux Server release 7.4 (Maipo)

Browser version: n/a

Browser OS version: n/a

Original install method (e.g. download page, yum, from source, etc.): docker pull kibana:latest

Description of the problem including expected versus actual behavior: Unable to connect to Elasticsearch at https://<ip_address>:8743
I understand that ES is now proxied by an nginx instance running on the master node alongside ES, on port 8743. This nginx instance has keys/certs that are now needed for Kibana to connect successfully. I was able to get them from the nginx instance in order to pass them in as environment variables to the docker run command. But this does not seem to be working; Kibana still cannot connect to ES.

Steps to reproduce:

  1. docker pull kibana:latest
  2. Get the docker container id (screenshot omitted)
  3. Grab the keys and certs from the docker container, placing them in /tmp:
    docker cp $id:/etc/cfc/conf /tmp/kibana
  4. Docker run Kibana, passing in the keys/certs as env variables:
    docker run -d --name kibana -p 5601:5601 -e ELASTICSEARCH_URL="https://<ip_address>:8743" -e ELASTICSEARCH_SSL_CERT="/tmp/kibana/conf/es/server.pem" -e ELASTICSEARCH_SSL_KEY="/tmp/kibana/conf/es/server-key.pem" -e ELASTICSEARCH_SSL_VERIFY="false" -e ELASTICSEARCH_SSL_CA="/tmp/kibana/conf/es/ca.pem" kibana:latest
  5. Log into the Kibana dashboard - Unable to connect to Elasticsearch at https://<ip_address>:8743

Errors in browser console (if relevant): Unable to connect to Elasticsearch at https://<ip_address>:8743

Provide logs and/or server output (if relevant):
kibana_latest_logs.txt
(I have replaced the IP address of the server with <ip_address>)
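A hedged observation: the ELASTICSEARCH_SSL_* paths are resolved inside the container, so the files copied to /tmp/kibana on the host also have to be mounted into the container at those same paths, roughly:

docker run -d --name kibana -p 5601:5601 \
  -v /tmp/kibana/conf/es:/tmp/kibana/conf/es:ro \
  -e ELASTICSEARCH_URL="https://<ip_address>:8743" \
  -e ELASTICSEARCH_SSL_CERT="/tmp/kibana/conf/es/server.pem" \
  -e ELASTICSEARCH_SSL_KEY="/tmp/kibana/conf/es/server-key.pem" \
  -e ELASTICSEARCH_SSL_VERIFY="false" \
  -e ELASTICSEARCH_SSL_CA="/tmp/kibana/conf/es/ca.pem" \
  kibana:latest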

Support Arbitrary User ID by changing group of /usr/share/kibana to root

Some cloud platforms built on Kubernetes, like OpenShift, require containers to run with an arbitrary user ID.

With Kibana this is not possible, because Kibana tries to write /usr/share/kibana/optimize/.babelcache.json even if path.data is set to /tmp, and /usr/share/kibana/optimize/.babelcache.json is owned by kibana:kibana.
It is hence impossible to run Kibana with a user ID other than the one (1000) defined in the Dockerfile.

This would be easily fixed by changing the group of /usr/share/kibana from kibana to root:

RUN chgrp -R 0 /usr/share/kibana && \
    chmod -R g=u /usr/share/kibana

See https://docs.openshift.com/container-platform/3.3/creating_images/guidelines.html#openshift-container-platform-specific-guidelines
(I don't think the part about uid_entrypoint would be necessary in the case of kibana).

kibana-docker user and group ids clash with elasticsearch-docker

The Kibana documentation recommends running Kibana instances on the same hosts that run your Elasticsearch Coordinating Nodes. We run the official Docker images for both Elasticsearch and Kibana.

When you have Kibana configured to write log statements to disk instead of STDOUT (logging.dest), and you have that destination directory mounted as a volume on the docker host so that the logs survive the container's lifetime, this is where you run into a problem. The elasticsearch-docker Dockerfile runs the Elasticsearch process as user/group "elasticsearch" with uid/gid 1000. The kibana-docker Dockerfile runs the Kibana process as user/group "kibana" with the same uid/gid of 1000. Thus we can't create the same "shadow" accounts on the docker host, since the uid/gids clash.

I can obviously work around this issue by creating a custom image, but it would be nice if this just worked out of the box.
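A hedged sketch of that custom image, remapping the kibana account to an unused uid/gid (whether usermod/groupmod are present in the base image is an assumption):

FROM docker.elastic.co/kibana/kibana:5.6.3
USER root
RUN usermod -u 1001 kibana && groupmod -g 1001 kibana && \
    chown -R kibana:kibana /usr/share/kibana
USER kibana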

Support for Multiple Kibana Services in Compose

I have Docker running in AWS and had been using the official image from Docker Hub until version 5.6.1, which I am in the middle of updating to. I need to run multiple Kibanas to deal with instance-specific .kibana indices. I know X-Pack was added to this Kibana image, and I am running these against my Elastic Cloud cluster, which does have X-Pack (cluster ID "b7be70").

version: "3"

services:
  kibanatest1:
    image: docker.elastic.co/kibana/kibana:5.6.1
    container_name: test
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
    volumes:
      - ./plugins/kibana-time-plugin:/usr/share/kibana/plugins/kibana-time-plugin:rw
    environment:
      ELASTICSEARCH_URL: https://......
      ELASTICSEARCH_USERNAME: ......
      ELASTICSEARCH_PASSWORD: .......
      KIBANA_INDEX: .kibana-1
      SERVER_DEFAULTROUTE: /app/kibana#/dashboard/7bdc16b0-702d-11e7-88dc-0d54ae3f1f7e?embed=true
    ports:
      - "9999:5601"
  kibanatest2:
    image: docker.elastic.co/kibana/kibana:5.6.1
    container_name: hvms-bi
    deploy:
      restart_policy:
        condition: on-failure
        delay: 5s
        max_attempts: 3
        window: 120s
    volumes:
      - ./plugins/kibana-time-plugin:/usr/share/kibana/plugins/kibana-time-plugin:rw
    environment:
      ELASTICSEARCH_URL: https://......
      ELASTICSEARCH_USERNAME: ......
      ELASTICSEARCH_PASSWORD: .......
      KIBANA_INDEX: .kibana-2
      SERVER_DEFAULTROUTE: /app/kibana#/dashboard/7bdc16b0-702d-11e7-88dc-0d54ae3f1f7e?embed=true
    ports:
      - "10001:5601"

For some reason X-Pack will only load successfully on the first service. On kibanatest2, X-Pack will not load, and the login screen shows the error "Login is currently disabled because the license could not be determined. Please check that Elasticsearch has the X-Pack plugin installed and is reachable, then refresh this page."
Probably due to the errors below.
{"type":"log","@timestamp":"2017-09-26T03:16:28Z","tags":["info","optimize"],"pid":1,"message":"Optimization of bundles for graph, monitoring, ml, kibana, stateSessionStorageRedirect, timelion, login, logout and status_page complete in 206.84 seconds"} {"type":"log","@timestamp":"2017-09-26T03:16:29Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-09-26T03:16:29Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-09-26T03:16:29Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-09-26T03:16:29Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-09-26T03:16:29Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-09-26T03:16:29Z","tags":["reporting","warning"],"pid":1,"message":"Generating a random key for xpack.reporting.encryptionKey. To prevent pending reports from failing on restart, please set xpack.reporting.encryptionKey in kibana.yml"} {"type":"log","@timestamp":"2017-09-26T03:16:29Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"yellow","message":"Status changed from uninitialized to yellow - Waiting for Elasticsearch","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-09-26T03:16:30Z","tags":["info","elasticsearch"],"pid":1,"typeName":"graph-workspace","typeMapping":{"properties":{"description":{"type":"text"},"kibanaSavedObjectMeta":{"properties":{"searchSourceJSON":{"type":"text"}}},"numLinks":{"type":"integer"},"numVertices":{"type":"integer"},"title":{"type":"text"},"version":{"type":"integer"},"wsState":{"type":"text"}}},"message":"Adding mappings to kibana index for SavedObject type \"graph-workspace\""} {"type":"log","@timestamp":"2017-09-26T03:16:30Z","tags":["status","plugin:[email protected]","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - [remote_transport_exception] [tiebreaker-0000000014][172.17.0.9:19672][indices:admin/mapping/put]","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"} {"type":"log","@timestamp":"2017-09-26T03:16:30Z","tags":["status","plugin:[email protected]","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - [remote_transport_exception] [tiebreaker-0000000014][172.17.0.9:19672][indices:admin/mapping/put]","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"} {"type":"log","@timestamp":"2017-09-26T03:16:30Z","tags":["status","plugin:[email protected]","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - [remote_transport_exception] [tiebreaker-0000000014][172.17.0.9:19672][indices:admin/mapping/put]","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"} 
{"type":"log","@timestamp":"2017-09-26T03:16:30Z","tags":["status","plugin:[email protected]","error"],"pid":1,"state":"red","message":"Status changed from yellow to red - [remote_transport_exception] [tiebreaker-0000000014][172.17.0.9:19672][indices:admin/mapping/put]","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"} {"type":"log","@timestamp":"2017-09-26T03:16:33Z","tags":["info","elasticsearch"],"pid":1,"typeName":"graph-workspace","typeMapping":{"properties":{"description":{"type":"text"},"kibanaSavedObjectMeta":{"properties":{"searchSourceJSON":{"type":"text"}}},"numLinks":{"type":"integer"},"numVertices":{"type":"integer"},"title":{"type":"text"},"version":{"type":"integer"},"wsState":{"type":"text"}}},"message":"Adding mappings to kibana index for SavedObject type \"graph-workspace\""} {"type":"log","@timestamp":"2017-09-26T03:16:37Z","tags":["info","elasticsearch"],"pid":1,"typeName":"graph-workspace","typeMapping":{"properties":{"description":{"type":"text"},"kibanaSavedObjectMeta":{"properties":{"searchSourceJSON":{"type":"text"}}},"numLinks":{"type":"integer"},"numVertices":{"type":"integer"},"title":{"type":"text"},"version":{"type":"integer"},"wsState":{"type":"text"}}},"message":"Adding mappings to kibana index for SavedObject type \"graph-workspace\""} {"type":"log","@timestamp":"2017-09-26T03:16:40Z","tags":["info","elasticsearch"],"pid":1,"typeName":"graph-workspace","typeMapping":{"properties":{"description":{"type":"text"},"kibanaSavedObjectMeta":{"properties":{"searchSourceJSON":{"type":"text"}}},"numLinks":{"type":"integer"},"numVertices":{"type":"integer"},"title":{"type":"text"},"version":{"type":"integer"},"wsState":{"type":"text"}}},"message":"Adding mappings to kibana index for SavedObject type \"graph-workspace\""} {"type":"log","@timestamp":"2017-09-26T03:16:42Z","tags":["status","plugin:[email protected]","error"],"pid":1,"state":"red","message":"Status changed from uninitialized to red - [remote_transport_exception] [tiebreaker-0000000014][172.17.0.9:19672][indices:admin/mapping/put]","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-09-26T03:16:42Z","tags":["security","warning"],"pid":1,"message":"Generating a random key for xpack.security.encryptionKey. To prevent sessions from being invalidated on restart, please set xpack.security.encryptionKey in kibana.yml"} {"type":"log","@timestamp":"2017-09-26T03:16:42Z","tags":["security","warning"],"pid":1,"message":"Session cookies will be transmitted over insecure connections. 
This is not recommended."} {"type":"log","@timestamp":"2017-09-26T03:16:42Z","tags":["status","plugin:[email protected]","error"],"pid":1,"state":"red","message":"Status changed from uninitialized to red - [remote_transport_exception] [tiebreaker-0000000014][172.17.0.9:19672][indices:admin/mapping/put]","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-09-26T03:16:42Z","tags":["status","plugin:[email protected]","error"],"pid":1,"state":"red","message":"Status changed from uninitialized to red - [remote_transport_exception] [tiebreaker-0000000014][172.17.0.9:19672][indices:admin/mapping/put]","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-09-26T03:16:43Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"yellow","message":"Status changed from red to yellow - Waiting for Elasticsearch","prevState":"red","prevMsg":"[remote_transport_exception] [tiebreaker-0000000014][172.17.0.9:19672][indices:admin/mapping/put]"} {"type":"log","@timestamp":"2017-09-26T03:16:43Z","tags":["status","plugin:[email protected]","error"],"pid":1,"state":"red","message":"Status changed from uninitialized to red - [remote_transport_exception] [tiebreaker-0000000014][172.17.0.9:19672][indices:admin/mapping/put]","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-09-26T03:16:43Z","tags":["status","plugin:[email protected]","error"],"pid":1,"state":"red","message":"Status changed from uninitialized to red - [remote_transport_exception] [tiebreaker-0000000014][172.17.0.9:19672][indices:admin/mapping/put]","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-09-26T03:16:43Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-09-26T03:16:43Z","tags":["status","plugin:[email protected]","error"],"pid":1,"state":"red","message":"Status changed from uninitialized to red - [remote_transport_exception] [tiebreaker-0000000014][172.17.0.9:19672][indices:admin/mapping/put]","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-09-26T03:16:43Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"green","message":"Status changed from yellow to green - Ready","prevState":"yellow","prevMsg":"Waiting for Elasticsearch"} {"type":"log","@timestamp":"2017-09-26T03:16:43Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-09-26T03:16:43Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-09-26T03:16:43Z","tags":["status","plugin:[email protected]","info"],"pid":1,"state":"green","message":"Status changed from uninitialized to green - Ready","prevState":"uninitialized","prevMsg":"uninitialized"} {"type":"log","@timestamp":"2017-09-26T03:16:43Z","tags":["listening","info"],"pid":1,"message":"Server running at http://0:5601"} {"type":"log","@timestamp":"2017-09-26T03:16:43Z","tags":["status","ui settings","error"],"pid":1,"state":"red","message":"Status changed from uninitialized to red - Elasticsearch 
plugin is red","prevState":"uninitialized","prevMsg":"uninitialized"}

On kibanatest1 I do not receive these errors; X-Pack loads fine and I can log in without issues.

I've even tried removing X-Pack with
command: [ "/bin/bash", "-c", "/usr/share/kibana/bin/kibana-plugin remove x-pack; /usr/local/bin/kibana-docker" ]
but I get an error:
plugin:[email protected] | [remote_transport_exception] [tiebreaker-0000000014][111.111.111.111:19672][indices:admin/mapping/put]

EACCES: permission denied in the log

Since I upgraded Kibana to 5.4 (Docker image docker.elastic.co/kibana/kibana:5.4.0) I see a lot of messages like this:

kibana_1 | {"type":"log","@timestamp":"2017-05-17T10:34:33Z","tags":["error","metrics","cgroup"],"pid":1,"level":"error","message":"EACCES: permission denied, open '/sys/fs/cgroup/cpuacct/cpuacct.usage'","error":{"message":"EACCES: permission denied, open '/sys/fs/cgroup/cpuacct/cpuacct.usage'","name":"Error","stack":"Error: EACCES: permission denied, open '/sys/fs/cgroup/cpuacct/cpuacct.usage'\n at Error (native)","code":"EACCES"}}

Indeed, /sys/fs/cgroup/cpuacct/ is owned by root.
I saw the comment here, but I'm not sure how to resolve it.

Docker version 1.13.1, build 092cba3
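A hedged workaround that has been suggested for containerized Kibana 5.x is overriding the cgroup paths so the collector reads the container's own cgroup; whether the image's entrypoint maps these environment variables is an assumption worth verifying:

environment:
  CPU_CGROUP_PATH_OVERRIDE: /
  CPUACCT_CGROUP_PATH_OVERRIDE: /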

Kibana instances increasing on Docker

I am trying to run Kibana 5.1 and Elasticsearch 5.1 on Docker using docker-compose.

  1. Using volumes for Elasticsearch data
  2. Using a volume for the Kibana config file

The challenge I am facing: when I use monitoring on the Kibana dashboard, the number of instances increases each time I bounce docker-compose.

• When I clear the Elasticsearch data volumes, the first time the number of instances shows 1
• The second time it increases to 2, and so on
• If we clear the Elasticsearch volume data and bounce again, the count returns to 1, and each subsequent bounce increases the number of Kibana instances again

Any help is really appreciated.

Thanks

docker-compose:

version: '2'
services:
  elasticsearch1:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.1.1
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    volumes:
      - /c/Users/dockeroffice/elknode1/data:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - elk_docker
  kibana:
    image: docker.elastic.co/kibana/kibana:5.1.1
    ports:
      - 5601:5601
    networks:
      - elk_docker
    volumes:
      - /c/Users/dockeroffice/kibana/config/kibana.yml:/usr/share/kibana/config/kibana.yml
    depends_on:
      - elasticsearch1
  
 
networks:
  elk_docker:
    driver: bridge
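A hedged guess at the cause: each freshly created container generates a new Kibana UUID, and Monitoring counts every UUID it has seen as a separate instance. Persisting Kibana's data path across bounces (the /usr/share/kibana/data location appears in other logs in this repo) should keep the UUID stable:

  kibana:
    volumes:
      - /c/Users/dockeroffice/kibana/data:/usr/share/kibana/data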


Alpine version

An Alpine version would be great for getting a smaller image, particularly since the new Kibana image is much bigger than the deprecated one from Docker Hub:

docker.elastic.co/kibana/kibana   5.2.2                 632182dfaaf7        674 MB
kibana                            5.0                   e670e64ab06e        321 MB

Reporting not working, missing fontconfig and freetype

After running a fresh kibana:5.4.0, reporting complains about missing fontconfig and freetype (screenshots omitted).

Running it from docker-compose as follows, together with elasticsearch-docker:

kibana:
  depends_on:
    - elasticsearch1
  image: docker.elastic.co/kibana/kibana:5.4.0
  environment:
    ELASTICSEARCH_URL: http://elasticsearch1:9200
  ports:
    - 5601:5601
  networks:
    - esnet
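A hedged sketch of a custom image that layers in the missing libraries; the yum invocation mirrors the one attempted in the SENTINL report above:

FROM docker.elastic.co/kibana/kibana:5.4.0
USER root
RUN yum install -y fontconfig freetype && yum clean all
USER kibana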

Logging to STDOUT is very verbose

The process dumps a lot of output to STDOUT (and thus docker logs).

Is it too much? Is it more than we see with the non-container versions of Kibana?
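If it does turn out to be too much, a hedged knob to try is logging.quiet, e.g. as an environment variable (assuming the entrypoint translates it like the other settings):

environment:
  LOGGING_QUIET: "true"   # maps to logging.quiet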

CTRL-c not effective.

When running Kibana with an attached terminal like:

docker run --rm -it docker.elastic.co/kibana/kibana:6.3.1

the process does not respond to CTRL-c.

Nor does it respond to SIGINT or SIGTERM sent via the kill command.
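A hedged workaround while signal handling is broken: have Docker run a minimal init as PID 1 so signals are forwarded and reaped:

docker run --rm -it --init docker.elastic.co/kibana/kibana:6.3.1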

cc @tbragin

no x-pack?

Is this available without X-Pack integration, or are there options to not expect it?
The docs suggest it is X-Pack only.

Problem with rc2 image

I didn't have this issue at first, but I removed all my images and re-pulled everything. Is it possible that something was re-built and re-pushed with a typo in the kibana.yml or something?

docker logs -f sa-demo-apachelogs-kibana-build
{"type":"log","@timestamp":"2017-11-04T04:23:43Z","tags":["fatal"],"pid":1,"message":""xpack.monitoring.ui.container.elasticsearch.enabled" setting was not applied. Check for spelling errors and ensure that expected plugins are installed and enabled."}

I don't even set XPACK_MONITORING_UI_CONTAINER_ELASTICSEARCH_ENABLED in my docker-compose file; the kibana portion looks like this:

  sa-demo-apachelogs-kibana-build:
    image: docker.elastic.co/kibana/kibana:6.0.0-rc2
    depends_on:
      - sa-demo-apachelogs-es-build
    container_name: sa-demo-apachelogs-kibana-build
    environment:
      XPACK_MONITORING_ENABLED: "false"
      SERVER_HOST: "0.0.0.0"
      ELASTICSEARCH_URL: http://es:9200
      ELASTICSEARCH_USERNAME: elastic
      ELASTICSEARCH_PASSWORD: changeme
      XPACK_SECURITY_SECURECOOKIES: "true"
    ports:
      - 5601:5601
    networks:
      sa-demo-apachelogs-build-net:
        aliases:
          - kibana

I've recreated it with a minimal docker-compose.yml here:

$ cat docker-compose.yml
version: '2'
services:
  kibana:
    image: docker.elastic.co/kibana/kibana:6.0.0-rc2
    container_name: kibana
    environment:
      XPACK_MONITORING_ENABLED: "false"
      SERVER_HOST: "0.0.0.0"
      ELASTICSEARCH_URL: http://es:9200
      ELASTICSEARCH_USERNAME: elastic
      ELASTICSEARCH_PASSWORD: changeme
      XPACK_SECURITY_SECURECOOKIES: "true"
    ports:
      - 5601:5601
$ docker-compose up
Creating network "tmp_default" with the default driver
Creating kibana
Attaching to kibana
kibana    | {"type":"log","@timestamp":"2017-11-04T04:52:17Z","tags":["fatal"],"pid":1,"message":"\"xpack.monitoring.ui.container.elasticsearch.enabled\" setting was not applied. Check for spelling errors and ensure that expected plugins are installed and enabled."}
kibana exited with code 64

Switch Kibana Docker image to use alpine linux

The official Elasticsearch Docker image uses Alpine Linux and its size is 164 MB; Kibana uses Ubuntu with a size of 678 MB.

At least use -slim Ubuntu; even the compressed size for that is half what the normal version has.

Should we recommend limiting memory if it affects the VFS cache?

In our documentation, we suggest setting a memory limit for Elasticsearch containers. However, Docker uses the memory.limit_in_bytes property of the kernel Memory Resource Controller to achieve this, which also appears to limit the amount of file cache that the process has access to.

File cache can be beneficial to Elasticsearch performance, so perhaps we should adjust our recommendation. We could either include a note about the limiting effect on file cache or remove the recommendation altogether.

Issues building on macOS

Not sure if changes could be committed to make building on macOS easier, but for others who try it down the road, this is what I had to do to get make build-from-local-artifacts working:

  • install a timeout alternative
    • brew install coreutils
    • update reference to timeout in Makefile to gtimeout
  • docker for mac doesn't support --network=host
    • replace --network=host with -p 8000:8000 in docker run command that is executing python3 -m http.server

PS: the python server isn't killed when make exits with an error, so you have to stop it manually each time.
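A hedged cleanup one-liner for that orphaned server:

pkill -f "python3 -m http.server" || true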

Optimize runs on startup, even with default settings.

When building with the latest Kibana snapshots, the image runs the Kibana optimize step when starting.

Repro:

VERSION=7.0.0-alpha1-SNAPSHOT

mkdir -p artifacts/kibana/target/
(cd artifacts/kibana/target && \
  wget https://snapshots.elastic.co/downloads/kibana/kibana-${VERSION}-linux-x86_64.tar.gz)

mkdir -p artifacts/x-pack-kibana/build/distributions
(cd artifacts/x-pack-kibana/build/distributions && \
  wget https://snapshots.elastic.co/downloads/kibana-plugins/x-pack/x-pack-${VERSION}.zip)

ARTIFACTS_DIR=$PWD/artifacts ELASTIC_VERSION=${VERSION} make build-from-local-artifacts

docker run --rm -it docker.elastic.co/kibana/kibana-x-pack:${VERSION}
log   [06:28:46.549] [info][optimize] Optimizing and caching bundles for ml, stateSessionStorageRedirect, status_page, timelion, graph, monitoring, login, logout, dashboardViewer, apm and kibana. This may take a few minutes

Too many redirects with docker.elastic.co/kibana/kibana:5.4.0

I am just testing an ELK setup for our logging system and running into an issue with the officially recommended image docker.elastic.co/kibana/kibana:5.4.0. I have a super simple Kubernetes setup that starts the image behind a load balancer, and when I try to access the service on its IP address, Chrome gives me ERR_TOO_MANY_REDIRECTS. However, if I switch back to the deprecated image hosted on Docker Hub, kibana:5.4.0, everything works fine. So I wonder: what is the difference between the two? And why is the new official image doing these redirects? (I also see it tries to go to the path /login, which the Docker Hub image does not do.)

Unable to configure xpack license setting via ENV vars

Problem:

As suggested here:
https://www.elastic.co/guide/en/elasticsearch/reference/6.2/license-settings.html

the setting xpack.license.self_generated.type does not seem to take effect when set via the XPACK_LICENSE_SELF_GENERATED_TYPE environment variable. Some digging around pointed me at the file below, which doesn't include an xpack.license section:

https://github.com/elastic/kibana-docker/blob/6.3/build/kibana/bin/kibana-docker?ts=4

Expected Behavior

I can set the license type to basic (instead of the default, trial) via the XPACK_LICENSE_SELF_GENERATED_TYPE environment variable.

Actual Behavior

License type remains trial.
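A hedged note: the linked page documents an Elasticsearch setting, so even with an entrypoint mapping, Kibana itself would likely reject it as unknown. If the goal is a basic self-generated license, it presumably belongs on the Elasticsearch container instead, e.g. passed the same dotted-setting way these reports pass http.host:

environment:
  - xpack.license.self_generated.type=basic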
