ELK and Filebeat, Curator, Backup to S3
- DEV COMMON
- 2018. 11. 2.
Logstash pulls data from stores such as MySQL, CSV files, MongoDB, or Hadoop, filters it, and writes it to Elasticsearch, where Kibana makes it easy to analyze.
Filebeat watches files such as Tomcat or database logs and ships each change to Logstash.
Curator deletes old data from Elasticsearch by size or by age, and also handles backup and restore.
https://www.elastic.co/downloads/beats/filebeat
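One way to install the downloaded package on Ubuntu, assuming the 6.4.2 .deb build (pick the version that matches your Elasticsearch):
curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.2-amd64.deb
sudo dpkg -i filebeat-6.4.2-amd64.deb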
Point Filebeat at the log files to ship:
sudo vim /etc/filebeat/filebeat.yml
# Paths that should be crawled and fetched. Glob based paths.
paths:
  #- /var/log/*.log
  - /opt/tomcat/logs/catalina.out
  #- c:\programdata\elasticsearch\logs\*

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"

#----------------------------- Logstash output --------------------------------
output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

  # Optional SSL. By default is off.
  # List of root certificates for HTTPS server verifications
  #ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]

  # Certificate for SSL client authentication
  #ssl.certificate: "/etc/pki/client/cert.pem"

  # Client Certificate Key
  #ssl.key: "/etc/pki/client/cert.key"
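Before restarting, Filebeat can validate the file itself and the connection to Logstash (these subcommands exist in Filebeat 6.x):
sudo filebeat test config
sudo filebeat test output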
Next, configure Logstash to receive events from Beats and route them to per-host indices:
sudo vim /etc/logstash/conf.d/logstash.conf
input {
  beats {
    port => 5044
  }
}

output {
  if [beat][hostname] == "ip-172-31-30-178" or [beat][hostname] == "ip-172-31-30-179" {
    elasticsearch {
      hosts => "localhost:9200"
      manage_template => false
      index => "tomcat-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
  else if [beat][hostname] == "ip-172-31-30-180" {
    elasticsearch {
      hosts => "localhost:9200"
      manage_template => false
      index => "database-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
  else {
    elasticsearch {
      hosts => "localhost:9200"
      manage_template => false
      index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
      document_type => "%{[@metadata][type]}"
    }
  }
}
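The pipeline syntax can be checked before restarting; a sketch assuming the default Ubuntu package paths:
sudo /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t -f /etc/logstash/conf.d/logstash.conf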
Restart each service so the new configuration takes effect:
sudo service kibana restart
sudo service elasticsearch restart
sudo initctl restart logstash
sudo service filebeat restart
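Once Filebeat is shipping, the daily indices should start showing up:
curl 'localhost:9200/_cat/indices?v'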
Next, install Curator, which will purge old indices on a schedule:
sudo apt install python-pip
pip install elasticsearch-curator
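A quick check that the install worked:
curator --version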
touch curator.yml
---
# Remember, leave a key empty if there is no value. None will be a string,
# not a Python "NoneType"
client:
  hosts:
    - 127.0.0.1
  port: 9200
  url_prefix:
  use_ssl: False
  certificate:
  client_cert:
  client_key:
  ssl_no_validate: False
  http_auth:
  timeout: 30
  master_only: False

logging:
  loglevel: INFO
  logfile:
  logformat: default
  blacklist: ['elasticsearch', 'urllib3']
touch delete_indices_size_base.sh
/usr/local/bin/curator /home/ec2-user/delete_indices_size_base.yml --config /home/ec2-user/curator.yml
touch delete_indices_size_base.yml
---
actions:
  1:
    action: delete_indices
    description: >-
      Delete indices matching the prefix filebeat- in excess of
      300GB of data (75% of 400GB), starting with the oldest indices,
      based on index creation_date. An empty index list (from no indices
      being in excess of the size limit, for example) will not generate
      an error.
    options:
      ignore_empty_list: True
      timeout_override: 300
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: filebeat-
    - filtertype: space
      disk_space: 300
      use_age: True
      source: creation_date
  2:
    action: delete_indices
    description: >-
      Delete indices matching the prefix tomcat- in excess of
      300GB of data (75% of 400GB), starting with the oldest indices,
      based on index creation_date. An empty index list (from no indices
      being in excess of the size limit, for example) will not generate
      an error.
    options:
      ignore_empty_list: True
      timeout_override: 300
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: tomcat-
    - filtertype: space
      disk_space: 300
      use_age: True
      source: creation_date
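Curator's dry-run mode logs what would be deleted without touching anything; worth running once before scheduling:
/usr/local/bin/curator --dry-run --config /home/ec2-user/curator.yml /home/ec2-user/delete_indices_size_base.yml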
touch delete_indices_time_base.sh
/usr/local/bin/curator /home/ec2-user/delete_indices_time_base.yml --config /home/ec2-user/curator.yml
touch delete_indices_time_base.yml
---
actions:
  1:
    action: delete_indices
    description: >-
      Delete indices older than 30 days (based on index name), for tomcat-
      prefixed indices. Ignore the error if the filter does not result in an
      actionable list of indices (ignore_empty_list) and exit cleanly.
    options:
      ignore_empty_list: True
      timeout_override:
      continue_if_exception: False
      disable_action: False
    filters:
    - filtertype: pattern
      kind: prefix
      value: tomcat-
      exclude:
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 30
      exclude:
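To sanity-check what the age filter will match, list the tomcat- indices first:
curl 'localhost:9200/_cat/indices/tomcat-*?v'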
touch curator_cron
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
HOME=/
# daily
0 0 * * * ec2-user /usr/local/bin/curator /home/ec2-user/delete_indices_time_base.yml --config /home/ec2-user/curator.yml > /home/ec2-user/log/curator_purging_time_base.log 2>&1
0 0 * * * ec2-user /usr/local/bin/curator /home/ec2-user/delete_indices_size_base.yml --config /home/ec2-user/curator.yml > /home/ec2-user/log/curator_purging_size_base.log 2>&1
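The jobs redirect their output into /home/ec2-user/log, so that directory has to exist before the first run:
mkdir -p /home/ec2-user/log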
sudo su
cd /etc/cron.d
ls
cp /home/ec2-user/curator_cron .
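Snapshots to S3 require the repository-s3 plugin on every Elasticsearch node (a sketch assuming the .deb install layout; the node must be restarted afterwards):
sudo /usr/share/elasticsearch/bin/elasticsearch-plugin install repository-s3
sudo service elasticsearch restart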
Now register an S3 snapshot repository:
touch curl_backup_config.sh
curl -XPUT -H 'Content-Type: application/json' 'http://localhost:9200/_snapshot/s3_elk_backup' -d '{
  "type": "s3",
  "settings": {
    "access_key": "YOUR_ACCESS_KEY",
    "secret_key": "YOUR_SECRET_KEY",
    "bucket": "YOUR_BUCKET",
    "region": "YOUR_REGION",
    "base_path": "elasticsearch",
    "max_retries": 3
  }
}'
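Confirm the repository was registered:
curl -XGET 'http://localhost:9200/_snapshot/s3_elk_backup?pretty'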
Take a one-off snapshot of a single index to verify the repository works:
touch backup_to_S3.sh
curl -XPUT -H 'Content-Type: application/json' 'http://localhost:9200/_snapshot/s3_elk_backup/test?wait_for_completion=true' -d '{
  "indices": "tomcat-2017.08.06",
  "ignore_unavailable": "true",
  "include_global_state": false
}'
touch s3_backup_cron
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
HOME=/
# daily
0 0 * * * ec2-user /home/ec2-user/daily_elk_backup.sh > /home/ec2-user/log/elk_backup.log 2>&1
touch daily_elk_backup.sh
#!/bin/bash
TODAY=$(date +'%Y.%m.%d')
YESTERDAY=$(date --date="1 days ago" +'%Y.%m.%d')
echo "Today is $TODAY"
echo "Yesterday's ($YESTERDAY) indices will be stored in S3"

INDEX_PREFIXES=''
INDEX_PREFIXES+='tomcat- '
#INDEX_PREFIXES+='filebeat- '
#INDEX_PREFIXES+='database- '

for prefix in $INDEX_PREFIXES;
do
  INDEX_NAME=$prefix$YESTERDAY
  SNAPSHOT_NAME="$INDEX_NAME-snapshot"
  echo "Start Snapshot $INDEX_NAME"
  curl -XPUT -H 'Content-Type: application/json' "http://localhost:9200/_snapshot/s3_elk_backup/$SNAPSHOT_NAME?wait_for_completion=true" -d '{
    "indices": "'"$INDEX_NAME"'",
    "ignore_unavailable": "true",
    "include_global_state": false
  }'
  echo "Successfully completed storing $INDEX_NAME in S3"
done
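cron runs the script directly, so it must be executable:
chmod +x /home/ec2-user/daily_elk_backup.sh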
List every snapshot in the repository:
curl -XGET 'localhost:9200/_snapshot/s3_elk_backup/_all?pretty'
To test a restore, first delete the live index:
curl -XDELETE 'localhost:9200/tomcat-2017.08.06'
touch elk_restore.sh
#!/bin/bash
if [[ $# -ne 1 ]] ; then
  echo "Missing argument. Please provide index name"
  echo "Usage: elk_restore.sh tomcat-2017.08.05"
  exit 1
fi

# Index name from the command argument; it also determines the snapshot name
INDEX_NAME=$1
echo "INDEX_NAME: $1"

curl -XPOST -H 'Content-Type: application/json' "localhost:9200/_snapshot/s3_elk_backup/$INDEX_NAME-snapshot/_restore" -d '{
  "indices": "'"$INDEX_NAME"'",
  "ignore_unavailable": "true",
  "include_global_state": false
}'
sh elk_restore.sh tomcat-2017.08.06
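When the restore finishes, the document count confirms the index is back:
curl 'localhost:9200/tomcat-2017.08.06/_count?pretty'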