cloudconfig

A system for provisioning CoreOS cloud-config.yml files

This system is currently useful if you are running CoreOS on bare metal instances that can be identified by their MAC address.

You can set up your cluster nodes in a YAML-based configuration file, which must be mounted via the var volume in the Docker container.

Example cluster-config.yml

cluster:
  features:
    - etcd2
    - etcd-client-ssl
    - etcd2-ssl
    - fleet
    - mount
    - timezone
    - private-repository

  private-repository:
    insecure-addr: "10.0.0.0/8"

  timezone: Europe/Berlin

  # generate a new token for each unique cluster from https://discovery.etcd.io/new
  etcd2:
    discovery: https://discovery.etcd.io/xyz

  ssh_authorized_keys:
    - ssh-rsa ...

  update:
    reboot-strategy: off
    group: stable

  nodes:
    - mac: c8:60:00:cc:xx:8d
      hostname: coreos-1
      ip: 11.22.33.44

      fleet:
        metadata: dc=colo1,rack=rack1,disc=ssd,disc_amount=1,mem=32

      update:
        group: alpha


    - mac: c8:60:00:bb:aa:91
      hostname: coreos-2
      ip: 1.2.3.4
      mount:
        - dev: /dev/sdb
          mount-point: /mnt/sdb
          type: ext4
      fleet:
        metadata: dc=colo2,rack=rack2,disc=hdd,disc_amount=2,mem=32

Example usage

Provisioning server

You have to provide a volume for the /opt/cloudconfig/var directory. A file named cluster-config.yml is expected in this directory.

You might want to copy the example conf/cluster-config.yml to your var/ directory for a quick start.

docker run -d -p 1234:80 \
-v $(pwd):/opt/cloudconfig/var \
-e BASE_URL=http://cloudconfig.example.com:1234 \
hauptmedia/cloudconfig

You can also run the provisioning service on your local machine and provide connectivity to it via a reverse ssh tunnel.

# override the BASE_URL so that the host can use the provided reverse ssh tunnel on 127.0.0.1:8080 to reach this service
# you can also run it in interactive mode and inspect the log files on stdout

docker run -i -t --rm -p 8080:80 \
-v $(pwd):/opt/cloudconfig/var \
-e BASE_URL=http://127.0.0.1:8080 \
hauptmedia/cloudconfig
ssh -R8080:127.0.0.1:8080 core@host

# or for boot2docker, for example
ssh -R8080:192.168.59.103:8080 core@host

Cluster Node usage

Run on existing CoreOS hosts to update their cloud-config.yml, or on new bare metal hosts to install CoreOS with the provisioned cloud-config.yml:

curl -sSL http://cloudconfig.example.com:1234/install.sh | sudo sh

Available features & config options

bash-profile

Writes a /home/core/.bash_profile file and registers the ssh-agent at /tmp/ssh-agent.sock if available.

etcd2

Runs the etcd2 service.

configuration options

  • node[etcd2][name] - The node name (defaults to node[hostname])
  • node[etcd2][advertise-client-urls] - The advertised public hostname:port for client communication (defaults to node[ip]:2379)
  • node[etcd2][initial-advertise-peer-urls] - The advertised public hostname:port for server communication (defaults to node[ip]:2380)
  • cluster[etcd2][discovery] node[etcd2][discovery] - A URL to use for discovering the peer list (optional)
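For example, the advertised URLs can be overridden per node in cluster-config.yml (the names and addresses below are purely illustrative):

```yaml
nodes:
  - mac: c8:60:00:cc:xx:8d
    hostname: coreos-1
    ip: 11.22.33.44

    etcd2:
      name: etcd-coreos-1
      advertise-client-urls: http://11.22.33.44:2379
      initial-advertise-peer-urls: http://11.22.33.44:2380
```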


etcd-client-ssl

Installs client certificates which can be used to connect to an etcd cluster.

You can use the scripts provided in the https://github.com/hauptmedia/ssl-cert repository to manage your etcd ssl certificates.

Creating the certificate

bin/create-etcd-cert -t client -c coreos-1.skydns.io

etcd2-ssl

Secures the etcd service using SSL/TLS. You must create a certificate authority for etcd (once) and
server and peer certs for each cluster node.

The IP addresses used by etcd must be included in the certificates.

You can use the scripts provided in the https://github.com/hauptmedia/ssl-cert repository to manage your etcd ssl certificates.

Please refer to the README.md file in the ssl-cert repository for further information.

Creating the certificates

mkdir var/etcd-ca
create-ca -d var/etcd-ca
bin/create-etcd-cert -t server -c coreos-1.skydns.io -i 192.168.1.2 
bin/create-etcd-cert -t peer -c coreos-1.skydns.io -i 192.168.1.2


flannel

Starts the flanneld service. It will be automatically configured for etcd SSL access if etcd2-ssl was enabled,
and it will also automatically write the specified network settings to etcd.

configuration options

  • cluster[flannel][network] node[flannel][network]
  • cluster[flannel][subnet_len] node[flannel][subnet_len]
  • cluster[flannel][subnet_min] node[flannel][subnet_min]
  • cluster[flannel][subnet_max] node[flannel][subnet_max]
  • cluster[flannel][backend_type] node[flannel][backend_type] - vxlan | udp - defaults to vxlan


fleet

Runs the fleet service. It automatically configures itself for etcd SSL access if etcd2-ssl is enabled.

This feature writes a /etc/fleet-metadata.env file which contains the fleet metadata as environment variables.

The fleet metadata keys will be transformed to uppercase, e.g. the fleet metadata "dc=dc1,rack=12" will
be available as DC=dc1 RACK=12.
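A minimal sketch of that key transformation (the image performs it via its own templates; this helper is purely illustrative):

```shell
# Illustrative only: turn fleet metadata "k1=v1,k2=v2" into the uppercase
# environment variable lines found in /etc/fleet-metadata.env.
metadata_to_env() {
  printf '%s\n' "$1" | tr ',' '\n' | while IFS='=' read -r key value; do
    printf '%s=%s\n' "$(printf '%s' "$key" | tr '[:lower:]' '[:upper:]')" "$value"
  done
}

metadata_to_env "dc=dc1,rack=12"
# prints:
# DC=dc1
# RACK=12
```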

The env file can be used to pass the fleet metadata as environment variables in docker containers
with the --env-file=/etc/fleet-metadata.env docker command line option or in systemd service definitions
using the EnvironmentFile=/etc/fleet-metadata.env configuration option.
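As an example, a (hypothetical) systemd unit could consume the metadata like this:

```ini
[Unit]
Description=Example service that uses the fleet metadata

[Service]
# Makes DC, RACK, ... available to ExecStart as environment variables
EnvironmentFile=/etc/fleet-metadata.env
# Also pass them into the container via --env-file
ExecStart=/usr/bin/docker run --rm --env-file=/etc/fleet-metadata.env busybox env
```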

This feature also writes a /etc/fleetctl.env file which can be used to provide a configuration to fleetctl.

configuration options

All default fleet configuration options are available plus:

  • cluster[fleet][verbosity] node[fleet][verbosity] Enable debug logging by setting this to an integer value greater than zero. Only a single debug level exists, so all values greater than zero are considered equivalent. Default: 0
  • cluster[fleet][etcd_servers] node[fleet][etcd_servers] Provide a custom set of etcd endpoints. Default: ["http://127.0.0.1:4001"]
  • cluster[fleet][etcd_request_timeout] node[fleet][etcd_request_timeout] Amount of time in seconds to allow a single etcd request before considering it failed. Default: 1.0
  • cluster[fleet][etcd_cafile,etcd_keyfile,etcd_certfile] node[fleet][etcd_cafile,etcd_keyfile,etcd_certfile] Provide TLS configuration when SSL certificate authentication is enabled in etcd endpoints
  • cluster[fleet][public_ip] node[fleet][public_ip] IP address that should be published with the local Machine's state and any socket information. If not set, fleetd will attempt to detect the IP it should publish based on the machine's IP routing information.
  • cluster[fleet][metadata] node[fleet][metadata] Comma-delimited key/value pairs that are published with the local Machine's state to the fleet registry. This data can be used directly by a client of fleet to make scheduling decisions. An example set of metadata could look like: metadata="region=us-west,az=us-west-1"
  • cluster[fleet][agent_ttl] node[fleet][agent_ttl] An Agent will be considered dead if it exceeds this amount of time to communicate with the Registry. The agent will attempt a heartbeat at half of this value. Default: "30s"
  • cluster[fleet][engine_reconcile_interval] node[fleet][engine_reconcile_interval] Interval at which the engine should reconcile the cluster schedule in etcd. Default: 2

Use fleetctl with SSL/TLS configuration shipped with this image

docker run -i -t --rm \
-v $(pwd):/opt/cloudconfig/var \
hauptmedia/cloudconfig \
fleetctl \
--cert-file=/opt/cloudconfig/var/etcd-ca/certs/fleetctl-client.crt \
--key-file=/opt/cloudconfig/var/etcd-ca/private/fleetctl-client.key \
--ca-file=/opt/cloudconfig/var/etcd-ca/certs/etcd-ca.crt \
--endpoint=https://public-ip-of-etcd:2379 \
list-machines

Use fleetctl with SSL/TLS configuration on a CoreOS node

fleetctl \
--cert-file=/etc/ssl/etcd/certs/client.crt \
--key-file=/etc/ssl/etcd/private/client.key \
--ca-file=/etc/ssl/etcd/certs/ca.crt \
--endpoint=https://127.0.0.1:2379 \
list-machines


host-env-file

This feature writes some information about the environment to /etc/host.env.

This feature also installs the /opt/bin/getip script for easy retrieval of the system's main IP address.
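The actual /opt/bin/getip script is not reproduced here, but a script like it could extract the source address from `ip route get` output. A sketch with sample output hardcoded (the shipped script may work differently):

```shell
# Illustrative only: parse the "src" field out of an `ip route get` line.
extract_src() {
  printf '%s\n' "$1" | awk '{ for (i = 1; i < NF; i++) if ($i == "src") print $(i + 1) }'
}

extract_src '8.8.8.8 via 192.168.1.1 dev eth0 src 192.168.1.10 uid 500'
# prints 192.168.1.10
```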

iptables

Sets up a firewall on the given node and automatically configures it to allow inter-node communication.

  • cluster[iptables][allow] node[iptables][allow] List of ports which should be allowed for public access

Example:

      iptables:
          allow:
              - port: 22
                protocol: tcp
              - port: 3306
                protocol: tcp

mount

Mounts a given device to the specified mount point

  • cluster[mount][dev] node[mount][dev] Device which should be mounted
  • cluster[mount][mount-point] node[mount][mount-point] Mount point where the device should be mounted
  • cluster[mount][type] node[mount][type] Filesystem type of the mountpoint
  • cluster[mount][format] node[mount][format] If true, the device will be formatted on first system startup
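For example, extending the mount entry from the cluster-config.yml above with format (the device will then be formatted on first system startup):

```yaml
nodes:
  - mac: c8:60:00:bb:aa:91
    hostname: coreos-2
    ip: 1.2.3.4
    mount:
      - dev: /dev/sdb
        mount-point: /mnt/sdb
        type: ext4
        format: true
```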


private-repository

Adds support for private Docker repositories.

  • cluster[private-repository][insecure-addr] node[private-repository][insecure-addr] -
    If the private registry supports only HTTP, or HTTPS with an unknown CA certificate, specify its address here. CIDR notation is also allowed.

References

https://coreos.com/docs/launching-containers/building/registry-authentication/

set-host-dns-entry

This feature utilizes the /opt/bin/skydns-set-record script provided by the skydns feature and registers the
hostname of the node in skydns. This will only work if the hostname was specified as a FQDN and skydns is configured
to be authoritative for the domain name.

skydns

Starts the skydns service. Will be automatically configured for etcd ssl access if etcd2-ssl was enabled.
It will also automatically write the specified dns config in etcd.

configuration options

  • cluster[skydns][dns_addr] node[skydns][dns_addr] - IP:port on which SkyDNS should listen, defaults to node[ip]:53.
  • cluster[skydns][domain] - domain for which SkyDNS is authoritative, defaults to skydns.local
  • cluster[skydns][nameservers] - forward DNS requests to these nameservers (array of IP:port combinations) when not authoritative for a domain, defaults to [8.8.8.8:53, 8.8.4.4:53]
  • cluster[skydns][ttl] - default TTL in seconds to use on replies when none is set in etcd, defaults to 3600.
  • cluster[skydns][min_ttl] - minimum TTL in seconds to use on NXDOMAIN, defaults to 30.
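A cluster-wide skydns block in cluster-config.yml could look like this (the values shown simply restate the defaults above):

```yaml
cluster:
  features:
    - skydns

  skydns:
    domain: skydns.local
    nameservers:
      - 8.8.8.8:53
      - 8.8.4.4:53
    ttl: 3600
    min_ttl: 30
```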

Setting a hostname with curl

curl -XPUT \
    --cert /etc/ssl/etcd/certs/client.crt \
    --cacert /etc/ssl/etcd/certs/ca.crt  \
    --key /etc/ssl/etcd/private/client.key \
    https://127.0.0.1:2379/v2/keys/skydns/local/skydns/test \
    -d value='{"host":"10.10.13.37"}'

using the skydns-set-record script

The skydns feature installs a convenience script which can be used to set hostname records at /home/core/bin/skydns-set-record

/opt/bin/skydns-set-record test.skydns.local 10.10.10.10

# with ttl (after which the record becomes unavailable)
/opt/bin/skydns-set-record test.skydns.local 10.10.10.10 60


ssh-agent

Runs an ssh-agent for the core user. The ssh-agent socket will be available at /tmp/ssh-agent.sock.

It automatically registers the private key of the core user at the agent (assuming that it has no passphrase set).

This feature can be used to enable fleetctl ssh authentication on the coreos node.

ssh-key

This feature writes a private key file for the core user. This is useful in combination with the ssh-agent feature to
provide authentication credentials for fleetctl.

configuration options

  • cluster[ssh-key][private] node[ssh-key][private] - Content of the private key file (will be written to /home/core/.ssh/id_rsa)
  • cluster[ssh-key][public] node[ssh-key][public] - Content of the public key file (will be written to /home/core/.ssh/id_rsa.pub)

static-network

Applies a static network configuration.

configuration options

  • node[static-network][][iface] - Interface for which this config entry should be applied
  • node[static-network][][address] - CIDR notation for the ip address which should be configured
  • node[static-network][][dns] - DNS server to use
  • node[static-network][][gateway] - IP address of the gateway which should be used

timezone

  • cluster[timezone] node[timezone] - Set the timezone to the specified string on a cluster-wide or node level

update

Configures the update strategy on a cluster or node level. This feature is always enabled.

configuration options

  • cluster[update][reboot-strategy] node[update][reboot-strategy] - reboot | etcd-lock | best-effort | off (defaults to off)
  • cluster[update][group] node[update][group] - master | alpha | beta | stable (defaults to stable)
  • cluster[update][server] node[update][server] - location of the CoreUpdate server
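For example, to use etcd-lock reboots on the beta channel cluster-wide:

```yaml
cluster:
  update:
    reboot-strategy: etcd-lock
    group: beta
```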

