Medium: https://medium.com/make-it-heady/what-and-why-ekl-stack-378e6c4765b9
The ELK Stack is popular because it fills a need in the log management and analytics space. Monitoring modern applications and the IT infrastructure they are deployed on requires a log management and analytics solution that lets engineers overcome the challenge of monitoring highly distributed, dynamic and noisy environments.
Logs sent from multiple distributed servers can be stored in a centralized data store that scales as data grows and provides a set of tools to analyze the data.
Kibana: a visualization tool for our Elasticsearch data. It lets you query the data, build graphs and do a lot of other fancy stuff.
Elasticsearch: the database that stores our logs from Logstash.
Logstash: accepts logs from Filebeat, processes/transforms them and feeds the output to Elasticsearch for indexing.
-
Install
- Elasticsearch
brew install elasticsearch
- Logstash
brew install logstash
- Kibana
brew install kibana
-
Start Services
- Elasticsearch
brew services start elasticsearch
- Logstash
brew services start logstash
- Kibana
brew services start kibana
- Configure Kibana to start visualizing logs. Open the Kibana configuration file and uncomment server.port and elasticsearch.hosts so that Kibana starts listening on port 5601.
$ sudo vim /usr/local/etc/kibana/kibana.yml
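After uncommenting, the relevant lines in kibana.yml should look roughly like this (the values shown are the assumed defaults for a local single-node setup):

```yaml
# Port Kibana's web UI listens on (default 5601)
server.port: 5601

# Elasticsearch instance(s) Kibana queries (default local node)
elasticsearch.hosts: ["http://localhost:9200"]
```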
-
Open http://localhost:5601/status. If you have installed the ELK stack successfully, you should see the Kibana status as green.
-
Sending data to view in Kibana
- Use the configuration file to set which log files are sent to Elasticsearch for visualization.
- Copy the file to
/usr/local/Cellar/logstash/7.6.1/libexec/config/syslog.conf
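The contents of syslog.conf are not included in these notes; a minimal sketch of what such a Logstash pipeline might look like (the Beats port, grok pattern and index name are assumptions):

```conf
input {
  # Receive logs shipped by Filebeat on port 5044 (assumed)
  beats {
    port => 5044
  }
}

filter {
  # Parse standard syslog lines (assumed grok pattern)
  grok {
    match => { "message" => "%{SYSLOGLINE}" }
  }
}

output {
  # Index into the local Elasticsearch node (assumed index name)
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "syslog-%{+YYYY.MM.dd}"
  }
}
```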
- Verify your configuration file
/usr/local/Cellar/logstash/7.6.1/bin/logstash --config.test_and_exit -f /usr/local/Cellar/logstash/7.6.1/libexec/config/syslog.conf
- Restart service
- brew services restart logstash, or
- specify config file and start
/usr/local/Cellar/logstash/7.6.1/bin/logstash -f /usr/local/Cellar/logstash/7.6.1/libexec/config/syslog.conf
- Change the Filebeat configuration file in /usr/local/etc/filebeat/filebeat.yml, then run brew services restart filebeat.
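The exact Filebeat change is not shown in these notes; a minimal sketch of a filebeat.yml that tails syslog and ships it to the local Logstash instance (the log path and port are assumptions):

```yaml
filebeat.inputs:
  # Tail the local syslog file (assumed macOS path)
  - type: log
    paths:
      - /var/log/system.log

# Ship events to the local Logstash instance (assumed port 5044)
output.logstash:
  hosts: ["localhost:5044"]
```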
We can now view syslogs in Kibana using the index provided in the file.
-
Run the cleanup script
-
Run the Node.js server in cluster mode (10 instances):
NODE_ENV=production ./node_modules/.bin/pm2 start src/index.js -i 10
-
Load test with the JMeter file
jmeterRequest.jmx
Management --> Index Patterns --> Create index pattern
http://localhost:5601/app/kibana#/home/tutorial/apm?_g=()
The agent sends metrics every 30 seconds.
-
Download on macOS:
curl -L -O https://artifacts.elastic.co/downloads/apm-server/apm-server-6.8.7-darwin-x86_64.tar.gz
Location: /Users/deepakpoojari/apm-server-6.8.7-darwin-x86_64
-
Test config
./apm-server test config
-
Start APM Server
./apm-server -e
-
Configure the agent
require('elastic-apm-node').start({
// Override service name from package.json
// Allowed characters: a-z, A-Z, 0-9, -, _, and space
serviceName: '',
// Use if APM Server requires a token
secretToken: '',
// Set custom APM Server URL (default: http://localhost:8200)
serverUrl: ''
})
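A minimal sketch of using the agent in a Node.js server. The agent must be started before any other modules are required so it can instrument them; the service name and URL below are placeholder assumptions:

```javascript
// Start the APM agent first, before requiring anything else
const apm = require('elastic-apm-node').start({
  serviceName: 'demo-service',        // placeholder name
  serverUrl: 'http://localhost:8200'  // default APM Server URL
});

const http = require('http');

// Incoming HTTP requests are traced as transactions automatically
http.createServer((req, res) => {
  res.end('ok');
}).listen(3000);
```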
- Dockerised container for the ELK stack.
- Using Kafka for log collection under load.