Docker image containing the Packetbeat agent without Elasticsearch & Kibana

This image is borrowed from Tudor's packetbeat-docker.
It runs the Packetbeat agent inside its own container,
but by sharing the host's network stack it is able to see
the traffic from other containers and from applications
running on the host.

I have also added support for publishing JSON entries directly to Kafka.
Simply provide a comma-separated list of broker addresses and a Kafka topic, and that should be it. The Kafka publisher is based on Shopify/sarama's Kafka client, which has built-in support for message buffering, snappy compression, blocking and non-blocking producers, etc. The Kafka producer in this version of Packetbeat is an asynchronous producer (i.e. non-blocking). [https://godoc.org/github.com/Shopify/sarama#AsyncProducer]
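
Assuming the Kafka output keeps the shape shown in the sample packetbeat.yml below, a multi-broker setup would look like this (the broker addresses and topic name are placeholders):

```yaml
output:
  kafka:
    enabled: true
    # comma-separated list of host:port broker addresses
    host: "10.10.10.10:9092,10.10.10.11:9092"
    topic: foobar
```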

How to use

To build:

 docker build -t rshriram/packetbeat-agent:1.0.0_beta2-kafka .

To run:

 docker run --net=host -d --name packetbeat --cpuset-cpus 0 -v path/to/packetbeat.yml/folder:/etc/packetbeat rshriram/packetbeat-agent:1.0.0_beta2-kafka packetbeat -e -c /etc/packetbeat/packetbeat.yml

You need to provide a packetbeat.yml file yourself.
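
For example (paths here are illustrative), put the file in a dedicated folder and mount the folder, not the file itself, so that it appears at /etc/packetbeat/packetbeat.yml inside the container:

```shell
# Illustrative sketch: create a folder holding packetbeat.yml,
# then mount that folder at /etc/packetbeat when running the image.
CONF_DIR=$(mktemp -d)
cat > "$CONF_DIR/packetbeat.yml" <<'EOF'
interfaces:
  device: docker0
EOF
ls "$CONF_DIR"
# then run with: -v "$CONF_DIR":/etc/packetbeat
```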

The --net=host part makes it possible to sniff the traffic
from other containers.

The --cpuset-cpus 0 option has a measurable impact on Packetbeat's performance: pinning the process to a single core provides better cache locality when processing packets from the kernel.

From docker hub

You can also pull the image from Docker Hub and run it like this:

docker pull rshriram/packetbeat-agent:1.0.0_beta2-kafka
docker run --net=host -d --name packetbeat --cpuset-cpus 0 -v path/to/packetbeat.yml/folder:/etc/packetbeat rshriram/packetbeat-agent:1.0.0_beta2-kafka packetbeat -e -c /etc/packetbeat/packetbeat.yml

Sample packetbeat.yml file

################### Packetbeat Configuration Example ######################

# This file contains an overview of various configuration settings. Please consult
# the docs at https://www.elastic.co/guide/en/beats/packetbeat/current/_configuration.html
# for more details.

# The Packetbeat agent works by sniffing the network traffic between your
# application components. It inserts meta-data about each transaction into
# Elasticsearch.

############################# Shipper ############################################
agent:

 # The name of the agent that publishes the network data. It can be used to group 
 # all the transactions sent by a single agent in the web interface.
 # If this option is not defined, the hostname is used.
 name:

 # The tags of the agent are included in their own field with each
 # transaction published. Tags make it easy to group transactions by different
 # logical properties.
 #tags: ["service1"]

 # Uncomment the following if you want to ignore transactions created
 # by the server on which the agent is installed. This option is useful
 # to remove duplicates if agents are installed on multiple servers.
 # ignore_outgoing: true

############################# Sniffer ############################################

# Select the network interfaces to sniff the data. You can use the "any"
# keyword to sniff on all connected interfaces.
interfaces:
 device: docker0
 type: "af_packet"
 buffer_size_mb: 512
 snaplen: 512


############################# Protocols ######################################
protocols:
  http:

    # Configure the ports where to listen for HTTP traffic. You can disable
    # the HTTP protocol by commenting out the list of ports.
    ports: [80, 8080, 8000, 5000, 8002]

    # Uncomment the following to hide certain parameters in URL or forms attached
    # to HTTP requests. The names of the parameters are case insensitive.
    # The value of the parameters will be replaced with the 'xxxxx' string.
    # This is generally useful for avoiding storing user passwords or other
    # sensitive information.
    # hide_keywords: ['pass', 'password', 'passwd']

  mysql:

    # Configure the ports where to listen for MySQL traffic. You can disable
    # the MySQL protocol by commenting out the list of ports.
    ports: [3306]

  pgsql:

    # Configure the ports where to listen for Pgsql traffic. You can disable
    # the Pgsql protocol by commenting out the list of ports.
    ports: [5432]

  redis:

    # Configure the ports where to listen for Redis traffic. You can disable
    # the Redis protocol by commenting out the list of ports.
    ports: [6379]

  thrift:
    # Configure the ports where to listen for Thrift-RPC traffic. You can disable
    # the Thrift-RPC protocol by commenting out the list of ports.
    ports: [9090]

  mongodb:
    # Configure the ports where to listen for Mongodb traffic. You can disable
    # the Mongodb protocol by commenting out the list of ports.
    ports: [27017]
    send_request: true
    send_response: true

############################# Output ############################################

# Configure which outputs to use when sending the data collected by packetbeat.
# You can enable one or multiple outputs by setting the enabled option to true.
output:

  # Elasticsearch as output
  # Options:
  # host, port: where Elasticsearch is listening on
  # save_topology: specify if the topology is saved in Elasticsearch
  elasticsearch:
    enabled: false
    hosts: ["localhost:9200"]
    save_topology: true

  # Redis as output
  # Options:
  # host, port: where Redis is listening on
  # save_topology: specify if the topology is saved in Redis
  #redis:
  #  enabled: true
  #  host: localhost
  #  port: 6379
  #  save_topology: true

  kafka:
    enabled: true
    #if you have multiple brokers, provide a comma separated list of host:port values
    host: "10.10.10.10:9092"
    topic: foobar

  # File as output
  # Options:
  # path: where to save the files
  # filename: name of the files
  # rotate_every_kb: maximum size of the files in path
  # number_of_files: maximum number of files in path
  #file:
  #  enabled: true
  #  path: "/tmp/packetbeat"
  #  filename: packetbeat
  #  rotate_every_kb: 1000
  #  number_of_files: 7

############################# Processes ############################################

# Configure the processes to be monitored and how to find them. If a process is
# monitored, then Packetbeat attempts to use its name to fill in the `proc` and
# `client_proc` fields.
# The processes can be found by searching their command line by a given string.
#
# Process matching is optional and can be enabled by uncommenting the following
# lines.
#
#procs:
#  enabled: false
#  monitored:
#    - process: mysqld
#      cmdline_grep: mysqld
#
#    - process: pgsql
#      cmdline_grep: postgres
#
#    - process: nginx
#      cmdline_grep: nginx
#
#    - process: app
#      cmdline_grep: gunicorn
Owner: rshriram