Installing the ELK Stack on Docker Container

Written by Abhishek Jalan (DevSecOps Engineer), August 12, 2021
7 min read
DevOps
Installing the Elasticsearch, Logstash & Kibana (ELK) Stack on Docker with a CentOS base system

In this blog, I use a Dockerized ELK Stack that results in three Docker containers running in parallel (one each for Elasticsearch, Logstash, and Kibana), with port forwarding configured and a data volume for persisting Elasticsearch data.

The ELK Stack (Elasticsearch, Logstash, and Kibana) can be installed on a variety of operating systems and in various setups. While the most common installation target is Linux and other Unix-based systems, a less-discussed scenario is running it in Docker.

To get the default distributions of Elasticsearch and Kibana up and running in Docker, you can use Docker Compose.

Create a docker-compose.yml file for a single-node Elasticsearch with Logstash and Kibana. The following example brings up three containers so you can see how things work together.

version: '3.2'

services:
  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./elasticsearch/config/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      - "9200:9200"
      - "9300:9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: changeme
      # Use single-node discovery in order to disable production mode and avoid bootstrap checks.
      # see: https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
      discovery.type: single-node
    networks:
      - elk

  logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./logstash/config/logstash.yml
        target: /usr/share/logstash/config/logstash.yml
        read_only: true
      - type: bind
        source: ./logstash/pipeline
        target: /usr/share/logstash/pipeline
        read_only: true
    ports:
      - "5044:5044"
      - "5000:5000/tcp"
      - "5000:5000/udp"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    build:
      context: kibana/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./kibana/config/kibana.yml
        target: /usr/share/kibana/config/kibana.yml
        read_only: true
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    driver: bridge

volumes:
  elasticsearch:
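Note that this Compose file builds each service from a local directory (elasticsearch/, logstash/, kibana/) containing a Dockerfile and the bind-mounted configuration files, and it reads the image version from an ELK_VERSION variable, typically kept in a .env file next to docker-compose.yml. Those supporting files are not shown in this post; as a minimal sketch (the exact contents below are illustrative assumptions), they could look like this:

.env

ELK_VERSION=7.14.0

elasticsearch/Dockerfile (the logstash/ and kibana/ Dockerfiles follow the same pattern with their respective base images):

ARG ELK_VERSION
FROM docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}

elasticsearch/config/elasticsearch.yml

cluster.name: "docker-cluster"
network.host: 0.0.0.0
xpack.security.enabled: true

logstash/config/logstash.yml

http.host: 0.0.0.0

logstash/pipeline/logstash.conf

input {
  tcp {
    port => 5000
  }
}
output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    user => "elastic"
    password => "changeme"
  }
}

kibana/config/kibana.yml

server.host: 0.0.0.0
elasticsearch.hosts: ["http://elasticsearch:9200"]
elasticsearch.username: elastic
elasticsearch.password: changeme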

Make sure Docker Engine is allotted at least 6-8 GiB of memory; in Docker Desktop, this is configured under Settings > Resources.
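On Linux hosts, Elasticsearch in Docker also typically requires the kernel setting vm.max_map_count to be at least 262144; if it is too low, the Elasticsearch container can fail to start. You can raise it temporarily with sudo sysctl -w vm.max_map_count=262144, or persist it across reboots (a sketch, assuming a standard sysctl setup):

/etc/sysctl.conf

vm.max_map_count=262144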

Run docker-compose to bring up the three Docker containers (Elasticsearch, Logstash, and Kibana):
docker-compose up
Submit a _cat/nodes request to see that the nodes are up and running:
curl -X GET "localhost:9200/_cat/nodes?v&pretty"
If the cluster has X-Pack security enabled, the request will be rejected with a 401; in that case, pass the credentials defined in the Compose file, e.g. curl -u elastic:changeme "localhost:9200/_cat/nodes?v&pretty".
Open Kibana to load sample data and interact with the cluster:
http://localhost:5601


When you’re done experimenting, you can tear down the containers and volumes by running

docker-compose down -v

ELK
Elasticsearch
Kibana
ELK on Docker
ELK single node