# Cathay Pacific IT - Logstash Configuration

(For IT Solution Centre - ECX & CHL)
## Requirements

## Setup

- Install Docker.
- Clone this repository.
## Usage

To start the container, run one of the following commands.

For file input:

```sh
docker run -it --rm neblish/cx-logstash logstash -f '/etc/logstash/conf.d/{input-file,*filter*,output-elasticsearch}.conf'
```

For TCP input:

```sh
docker run -it --rm -p 5000:5000 -p 5044:5044 neblish/cx-logstash logstash -f '/etc/logstash/conf.d/{input-tcp,*filter*,output-elasticsearch}.conf'
```
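The repository's `input-tcp.conf` is not reproduced here. As a rough sketch of what such an input might contain, the example below assumes that port 5000 carries a plain TCP input and that port 5044 is used by Beats shippers; both are inferred only from the published ports above and may not match the actual configuration.

```conf
# Illustrative sketch only - the real input-tcp.conf in this repository may differ.
input {
  tcp {
    port  => 5000          # plain TCP listener (assumption based on the -p 5000:5000 mapping)
    codec => json_lines    # assumes senders ship one JSON event per line
  }
  beats {
    port => 5044           # assumes 5044 is used by Filebeat/Beats shippers
  }
}
```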
## Log Location

If `input-file.conf` is used, a `logs` folder has to be created to hold the log files that are to be ingested into Logstash. Unless stated otherwise, the current Logstash configuration assumes the following folder structure:
```
logs
└── <application type>   (e.g. cx-app)
    └── <server name>    (e.g. was-server01)
        └── <log files>  (e.g. application.log)
```
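As a hedged sketch of how a file input could match this layout (the real `input-file.conf` may differ), the example below derives the application type and server name from the path; the `/logs` mount point and the `application_type`/`server_name` field names are assumptions made for illustration only.

```conf
# Illustrative sketch only - paths and field names are assumptions, not the repository's actual config.
input {
  file {
    path           => "/logs/*/*/*"    # logs/<application type>/<server name>/<log file>
    start_position => "beginning"
  }
}

filter {
  # Derive the application type and server name from the folder structure.
  grok {
    match => {
      "path" => "/logs/(?<application_type>[^/]+)/(?<server_name>[^/]+)/(?<log_file>[^/]+)$"
    }
  }
}
```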
Compressed log files are currently not supported; they have to be uncompressed before Logstash can ingest them.
Currently, the following application types are supported:

- `cq-author` - CQ/AEM Author logs (`access.log`, `history.log`, `replication.log`)
- `cq-publish` - CQ/AEM Publish logs (`access.log`, `history.log`, `replication.log`)
- `cx-mmb` - MMB application logs (`application.log`)
- `ihs` - IHS web server logs (`access_log`)
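The filter configurations themselves are not shown in this README. Purely as an illustration, per-application handling could branch on a type field such as the assumed `application_type` above, for example routing `ihs` access logs through a standard Apache-style grok pattern:

```conf
# Illustrative sketch only - the repository's actual *filter*.conf files may look different.
filter {
  if [application_type] == "ihs" {
    # IHS access_log entries typically follow the Apache combined log format.
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
  }
}
```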
## Known Issues

### XML responses split into multiple entries

Logstash's multiline codec has a limit of 500 lines or 10 MB per event. If a log entry exceeds those limits, it will be split into multiple entries in Elasticsearch.
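If larger XML responses need to stay in a single event, the multiline codec's limits can be raised. The sketch below is illustrative only; the file path and the timestamp pattern are assumptions about how a new log entry starts.

```conf
# Illustrative sketch only - raise the multiline codec limits above their defaults.
input {
  file {
    path  => "/logs/cx-app/*/*.log"          # hypothetical path, for illustration
    codec => multiline {
      pattern   => "^%{TIMESTAMP_ISO8601}"   # assumes each new entry starts with a timestamp
      negate    => true
      what      => "previous"
      max_lines => 2000                      # default is 500 lines
      max_bytes => "50 MiB"                  # default is 10 MiB
    }
  }
}
```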
### Logstash not ingesting files stored in the logs folder

Sometimes Logstash does not ingest log files that are more than one day old, even when they are inside the correct log folders. If that happens, simply `touch` the log files and Logstash will resume ingestion.
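This behaviour is consistent with the file input's `ignore_older` setting, which in some Logstash versions defaults to skipping files older than 24 hours. If touching the files is not practical, raising that limit is an alternative; the sketch below is illustrative only and may not match the repository's actual `input-file.conf`.

```conf
# Illustrative sketch only - allow the file input to pick up older log files.
input {
  file {
    path           => "/logs/*/*/*"
    start_position => "beginning"
    ignore_older   => 604800        # seconds; roughly one week instead of the ~1 day cutoff
  }
}
```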