Comments (12)
Could you download the `logstash-beats.crt` file from https://github.com/spujadas/elk-docker/blob/master/logstash-beats.crt, put it in `/etc/pki/tls/certs/logstash-beats.crt`, and uncomment the `tls` section of your `filebeat.yml`?
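For reference, with the Filebeat 1.x configuration layout used in this thread, the uncommented section would look something like this (a sketch; the `elk` host name is an example, use your ELK container's address):

```yaml
output:
  logstash:
    # Logstash's Beats input in the sebp/elk image listens on port 5044
    hosts: ["elk:5044"]
    tls:
      # Certificate used to authenticate Logstash over TLS
      certificate_authorities: ["/etc/pki/tls/certs/logstash-beats.crt"]
```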
Then restart Filebeat (the `init.d` script isn't very verbose, so don't worry if nothing seems to happen; you can still check that Filebeat is up with `ps aux`).
If it still doesn't work, look at Filebeat's logs (they should be in `/var/log/filebeat`). You might want to increase Filebeat's log level to `info` or `debug` (see https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html#configuration-logging) to get more information on what's going on.
from elk-docker.
I did what you said, but it's the same as before. When I start Filebeat it stays blank, and on a second try it says `/usr/bin/filebeat-god` already running.
But Logstash is still not receiving anything. Do I have to run a command like "push logs"? Do I have to configure Logstash too? Like to receive logs from Filebeat, or tell Logstash how to understand the logs it's receiving?
OK, when you start Filebeat, it's blank, and that's completely normal. Something only gets printed out if Filebeat doesn't start normally.
There's no command to "manually" push logs: Filebeat will automatically parse the log files that are (using your configuration) in `/log/*.log` and send them to Logstash.
However it will only send the logs that it hasn't already indexed, so if it believes it has already processed your log files (from a previous run) then you'll want to reset the index to force it to process your log files from scratch.
To do that you need to stop Filebeat, remove Filebeat's registry file (the location of which can be set using the `registry_file` option, see https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html#_registry_file; by default, as you're using the `init.d` script, this file is located at `/.filebeat`), then start Filebeat again.
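At the shell, the reset sequence might look like this (a sketch assuming the default `init.d` setup and registry location discussed in this thread):

```
sudo /etc/init.d/filebeat stop     # stop Filebeat first
sudo rm -f /.filebeat              # remove the registry (-f: no error if it is absent)
sudo /etc/init.d/filebeat start    # all log files will now be reprocessed from scratch
cat /.filebeat                     # the registry should fill up again as files are read
```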
Now look at the registry file to see if it has filled up (i.e. if Filebeat is actually reading and parsing your log files).
If it still doesn't work, you should stop Filebeat, increase the log level in the `filebeat.yml` configuration file, remove the registry file, start Filebeat, and take a look at the logs.
(By the way, I wrote that they were in `/var/log/filebeat`, but that's only the case if you set `to_files` to `true` and didn't set `files.path` in the `logging` section of the configuration file – see https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html#_to_files; by default Filebeat logs to syslog, so look in `/var/log/syslog`.)
Finally, there's no need to configure Logstash for now whilst you're troubleshooting this issue. Out of the box, the latest `sebp/elk` image is set up to receive events from Filebeat on port 5044, and it will understand the logs that it receives from Filebeat, so your logs will definitely show up at http://192.168.99.100:9200/_search?pretty once Filebeat is properly configured.
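A quick way to run that check from the command line (the URL is the one from this thread; adjust host and port to your setup):

```
# Ask Elasticsearch how many documents it has indexed so far;
# a non-zero count means the Filebeat -> Logstash -> Elasticsearch chain works.
curl -s 'http://192.168.99.100:9200/_count?pretty'
```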
(Having said that, Logstash will store the raw log entries in the `message` field without filtering/parsing them: that's something you can do later on, once Filebeat and Logstash are communicating, by extending the `sebp/elk` image to add a `filter` plugin configuration – see https://www.elastic.co/guide/en/logstash/current/filter-plugins.html – that can parse your Tomcat logs based on the Tomcat log format.)
Hope that helps!
Hi Sébastien,
I'm facing a similar problem.
My `filebeat.log` (debug enabled) shows that my server is sending events to the ELK container:
```
2016-01-18T11:40:01Z INFO Events sent: 12
2016-01-18T11:40:01Z DBG Processing 12 events
2016-01-18T11:40:01Z DBG Write registry file: /.filebeat
2016-01-18T11:40:01Z INFO Registry file updated. 6 states written.
2016-01-18T11:40:09Z DBG Flushing spooler because of timemout. Events flushed: 3
2016-01-18T11:40:09Z DBG send event
2016-01-18T11:40:09Z DBG Start Preprocessing
2016-01-18T11:40:09Z DBG Publish: {
  "@timestamp": "2016-01-18T11:40:01.820Z",
  "beat": {
    "hostname": "wso2-dev-srv-01",
    "name": "wso2-dev-srv-01"
  },
  "count": 1,
  "fields": null,
  "input_type": "log",
  "message": "TID: [-1234] [] [2016-01-18 11:39:59,508] INFO {org.wso2.carbon.registry.core.jdbc.EmbeddedRegistryService} - Configured Registry in 113ms {org.wso2.carbon.registry.core.jdbc.EmbeddedRegistryService}",
  "offset": 43783,
  "source": "/opt/wso2dss01a/repository/logs/wso2carbon.log",
  "type": "log"
}
2016-01-18T11:40:09Z DBG Publish: {
  "@timestamp": "2016-01-18T11:40:01.820Z",
  "beat": {
    "hostname": "wso2-dev-srv-01",
    "name": "wso2-dev-srv-01"
  },
  "count": 1,
  "fields": null,
  "input_type": "log",
  "message": "TID: [-1234] [] [2016-01-18 11:39:59,648] INFO {org.wso2.carbon.registry.core.internal.RegistryCoreServiceComponent} - Registry Mode : READ-WRITE {org.wso2.carbon.registry.core.internal.RegistryCoreServiceComponent}",
  "offset": 43985,
  "source": "/opt/wso2dss01a/repository/logs/wso2carbon.log",
  "type": "log"
}
2016-01-18T11:40:09Z DBG Publish: {
  "@timestamp": "2016-01-18T11:40:01.820Z",
  "beat": {
    "hostname": "wso2-dev-srv-01",
    "name": "wso2-dev-srv-01"
  },
  "count": 1,
  "fields": null,
  "input_type": "log",
  "message": "TID: [-1234] [] [2016-01-18 11:40:01,020] INFO {org.wso2.carbon.user.core.internal.UserStoreMgtDSComponent} - Carbon UserStoreMgtDSComponent activated successfully. {org.wso2.carbon.user.core.internal.UserStoreMgtDSComponent}",
  "offset": 44206,
  "source": "/opt/wso2dss01a/repository/logs/wso2carbon.log",
  "type": "log"
}
2016-01-18T11:40:09Z DBG Forward preprocessed events
2016-01-18T11:40:09Z DBG output worker: publish 3 events
```
But from Kibana I can't see anything.
Do I have to set up Kibana to read the logs/events?
Regards.
I have got it!
In the ELK container > Kibana > Settings, I added a new Index Pattern using the following:
- Index name or pattern: `filebeat-*` (instead of `logstash-*`)
- Time-field name: `@timestamp`

Then click on Create.
After that, go to Kibana > Discover, select the recently created `filebeat-*` Index Pattern, and you will see your logs/events.
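You can also confirm from the command line that the `filebeat-*` indices exist before creating the Kibana pattern (host and port assumed from this thread; adjust to your setup):

```
# List the indices matching filebeat-*; Logstash creates one per day by default.
curl -s 'http://192.168.99.100:9200/_cat/indices/filebeat-*?v'
```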
Regards.
@chilcano Hi Roger, glad to hear you solved this (would have asked you to look at http://localhost:9200/_search?pretty to check if the events were being indexed by Elasticsearch, which should have been the case in your situation, but which I understand is not the case in the original poster's situation).
Anyway, thanks for the tip, I'll update the documentation (should have done it after #13) to provide more guidance, especially to mention that the pattern that should be used in Kibana is now indeed `filebeat-*` when using Filebeat with Logstash… as you found out the hard way 😃
Thank you for the ELK container; in a few minutes you get a running ELK instance ready to play with.
Regards.
@baum1234 Catching up after a few days: any luck getting Filebeat to talk to Logstash? If so, did you figure out what was wrong, and if not, what's working and not working?
Hi @baum1234
It's a very abstract issue name, so I decided to ask here =)
I am trying to deliver logs with Filebeat from a remote host to the host running the Dockerised ELK. All is fine except the certificate =(
If I set `hosts` to IP_ADDRESS:5044, it warns:
INFO Connecting error publishing events (retrying): x509: cannot validate certificate for IP_ADDRESS because it doesn't contain any IP SANs
If I set `hosts` to MY_DOMAIN:5044, it warns:
INFO Connecting error publishing events (retrying): x509: certificate is valid for *, not MY_DOMAIN
Should I just recreate the certificate following your instructions, or should I map my IP to the domain name somewhere?
I solved that by adding an entry in `/etc/hosts` mapping the host name `elk` to the IP address of the ELK host; `elk` is the host name used in `filebeat.yml`.
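Concretely, on the machine running Filebeat (the IP address below is an example, substitute your ELK host's address):

```
# /etc/hosts on the log-sending host: make "elk" resolve to the ELK server
192.168.99.100   elk
```

with `hosts: ["elk:5044"]` in the `output.logstash` section of `filebeat.yml`.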
I hope this helps you.
@Erliz The certificate that's packaged in the image uses `*` as a hostname, which means that only a hostname without dots will work (see elastic/logstash-forwarder#221 for an extensive discussion), so @chilcano's suggestion (which uses `elk`) will work perfectly, but IP addresses won't work, nor will anything like `elk.mydomain.com`.
You could of course recreate a certificate using my instructions (but then you'd have to extend the ELK image, which may or may not be convenient).
Of course, generally speaking (especially for anything other than test purposes), you would most likely want to replace this self-signed certificate with a proper/cleaner certificate anyway. That involves at the very least a separate CA certificate that issues a server certificate to Logstash, where the server certificate has a subject that includes the FQDN of the server in the CN attribute, and possibly other identities (e.g. alias FQDNs, wildcard FQDNs, or IP addresses, although the latter aren't recommended as IP addresses tend to change) as Subject Alternative Names (SANs).
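As an illustration of the SAN idea above, here is a minimal self-signed variant (without the separate CA; a sketch, not the image's packaged instructions; all names and addresses below are placeholders, and the `-addext` option requires OpenSSL 1.1.1 or later):

```shell
# Generate a private key and a self-signed certificate whose subject CN is the
# server's FQDN and whose SANs cover an alias, the bare "elk" name, and an IP.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout logstash-beats.key -out logstash-beats.crt \
  -subj "/CN=elk.mydomain.com" \
  -addext "subjectAltName=DNS:elk.mydomain.com,DNS:elk,IP:192.168.99.100"

# Inspect the certificate to confirm the SANs were recorded.
openssl x509 -in logstash-beats.crt -noout -text | grep -A1 "Subject Alternative Name"
```

With such a certificate, Filebeat can connect using any of the listed identities instead of being restricted to a dotless host name.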
Lastly (I haven't tested this, but it should work), you could configure Filebeat to use the `insecure` TLS option (see https://www.elastic.co/guide/en/beats/filebeat/current/filebeat-configuration-details.html#_insecure) to disable certificate checking.
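In Filebeat 1.x configuration terms, that would look something like the following (a sketch, for test environments only; the `elk` host name is an example):

```yaml
output:
  logstash:
    hosts: ["elk:5044"]
    tls:
      # Skip verification of the server certificate -- test use only
      insecure: true
```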
I'll add some words in the documentation on this subject.
Closing for now, please reopen as needed if the above didn't help solve the initial issue.