astrolabe: An Akka Cluster Console

A burgeoning web-based console for realtime visualization of Akka Clusters.

(This code is heavily influenced by, and owes its existence to, ochron's Scala.js Single Page Application Tutorial.)

Creator:

Contributors:

Features

'Astrolabe' is a realtime, reactive, browser-based console intended to provide visualization of any Akka Cluster, with live updates as the cluster state changes, including member additions and removals. It currently visualizes several properties in diagrams (outlined below), using D3 and React.js.

It can also serve as an effective demo/sample project illustrating how to combine Scala.js with D3 and scalajs-react for a simple Akka Cluster topography overview.

Current Features

The following features are currently supported:

  • Join any Akka Cluster on the fly, configured via the browser (any cluster not encrypted with SSL, since that would require extra configuration of keys, certificates, etc.).
  • Visualize the topography of each Cluster
    • Members View: Shows each individual ActorSystem which is joined to the cluster, with information on their hostname/IP address, port, and configured roles.
    • Roles View: Like the Members View, shows each individual ActorSystem joined to the cluster, with its hostname/IP address, port, and configured roles; additionally shows information on configured routers, and allows you to define dependencies between specific Actors to clarify the visualization.
    • Nodes View: Shows each individual host (hostname/IP address), with each ActorSystem hanging off of that host by Port & Roles.
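The data these views render is exactly what each node declares in its Akka configuration: bind host and port, roles, and seed nodes. As a rough illustration only (a sketch using standard Akka 2.3-era setting names, not this project's actual config), one clustered node's application.conf might look like:

```hocon
// Illustrative application.conf for a single cluster node (hypothetical values).
akka {
  actor.provider = "akka.cluster.ClusterActorRefProvider"
  remote.netty.tcp {
    hostname = "127.0.0.1"   // shown in the Members and Nodes views
    port     = 2552          // each ActorSystem hangs off its host by port
  }
  cluster {
    roles      = ["Foo-Worker"]                        // shown in the Roles view
    seed-nodes = ["akka.tcp://FooCluster@127.0.0.1:2551"]
  }
}
```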

A number of future features are planned:

  • Cluster Metrics awareness: This will collect information about cluster metrics for each node in the system, and display an overview.
  • Actor Hierarchy overview: This will let you visualize the actor tree on each node in the system, displaying a diagram to understand the node.

Visualization Sample Output

Members View

Roles View

Nodes View

Getting Started

To get started, we'll first need to boot the Spray HTTP server (for running the console) and set up Scala.JS to recompile any changes.

  1. Open 2 terminals, each running sbt.

  2. In the first terminal, we'll need to start Spray (using sbt-revolver, so that it automatically restarts when we make code changes):

> re-start

  3. In the second terminal, we want Scala.JS to recompile our JavaScript when local changes are made:

> ~fastOptJS

  4. Open a browser at localhost:9000

Running the Sample Cluster

A console alone isn't much use: we'll need some Akka nodes to visualize, which means a running Akka Cluster.

To boot up the sample Akka cluster, and test the behavior of the console, you have two options:

Booting The Sample Cluster Locally with Multiple JVMs

This approach will run multiple instances of the JVM, each with an Akka node in it, to facilitate testing. First, we'll need to create a zip file of the compiled project that we can run with.

sbt sampleCluster/dist

cd sampleCluster/target/universal
unzip samplecluster-1.0.0.zip
sudo chmod +x samplecluster-1.0.0/bin/samplecluster

Once this is done, we'll have a fully functional sample Akka cluster launcher at samplecluster-1.0.0/bin/samplecluster. Now we can start multiple JVMs to give us some Akka nodes; in this example we're going to start two separate clusters: FooCluster and BazCluster.
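All of the invocations below follow the same positional argument pattern, which (inferred from the commands themselves, not from project documentation) appears to be: bind host, bind port, cluster name, seed host:port, and role. A tiny hypothetical helper makes that shape explicit:

```shell
# Hypothetical helper (not part of the project): prints the boot command for
# one sample-cluster node, naming each positional argument.
node_cmd() {
  local host=$1 port=$2 cluster=$3 seed=$4 role=$5
  echo "samplecluster-1.0.0/bin/samplecluster $host $port $cluster $seed $role &"
}

# Example: the first Baz-Security node of FooCluster
node_cmd 127.0.0.1 2552 FooCluster 127.0.0.1:2551 Baz-Security
```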

FooCluster

To get started with the FooCluster, we will need a stable seed node. We can boot this as follows:

samplecluster-1.0.0/bin/samplecluster 127.0.0.1 2551 FooCluster 127.0.0.1:2551 Stable-Seed &

Next, we'll boot up a bunch of sample actors:

samplecluster-1.0.0/bin/samplecluster 127.0.0.1 2552 FooCluster 127.0.0.1:2551 Baz-Security &
samplecluster-1.0.0/bin/samplecluster 127.0.0.1 2553 FooCluster 127.0.0.1:2551 Baz-Security &
samplecluster-1.0.0/bin/samplecluster 127.0.0.1 2554 FooCluster 127.0.0.1:2551 Foo-Worker &
samplecluster-1.0.0/bin/samplecluster 127.0.0.1 2555 FooCluster 127.0.0.1:2551 Foo-Worker &
samplecluster-1.0.0/bin/samplecluster 127.0.0.1 2556 FooCluster 127.0.0.1:2551 Bar-Worker &
samplecluster-1.0.0/bin/samplecluster 127.0.0.1 2557 FooCluster 127.0.0.1:2551 Bar-Worker &
samplecluster-1.0.0/bin/samplecluster 127.0.0.1 2558 FooCluster 127.0.0.1:2551 Foo-Http &
samplecluster-1.0.0/bin/samplecluster 127.0.0.1 2559 FooCluster 127.0.0.1:2551 Bar-Http &

BazCluster

To get started with the BazCluster, we will need a stable seed node as well. We can boot this as follows:

samplecluster-1.0.0/bin/samplecluster 127.0.0.1 2661 BazCluster 127.0.0.1:2661 Stable-Seed &

Finally, we'll boot up a bunch of sample actors:

samplecluster-1.0.0/bin/samplecluster 127.0.0.1 2662 BazCluster 127.0.0.1:2661 Baz-Security &
samplecluster-1.0.0/bin/samplecluster 127.0.0.1 2663 BazCluster 127.0.0.1:2661 Foo-Worker &
samplecluster-1.0.0/bin/samplecluster 127.0.0.1 2664 BazCluster 127.0.0.1:2661 Bar-Worker &
samplecluster-1.0.0/bin/samplecluster 127.0.0.1 2665 BazCluster 127.0.0.1:2661 Foo-Http &
samplecluster-1.0.0/bin/samplecluster 127.0.0.1 2666 BazCluster 127.0.0.1:2661 Bar-Http &

Maintenance and Shutdown

To stop a particular node by port...

On Mac OS X:

kill -9 $(lsof -ti tcp:2551 -sTCP:LISTEN)
kill -9 $(lsof -ti tcp:2552 -sTCP:LISTEN)
kill -9 $(lsof -ti tcp:2553 -sTCP:LISTEN)
kill -9 $(lsof -ti tcp:2554 -sTCP:LISTEN)
kill -9 $(lsof -ti tcp:2555 -sTCP:LISTEN)
kill -9 $(lsof -ti tcp:2556 -sTCP:LISTEN)
kill -9 $(lsof -ti tcp:2557 -sTCP:LISTEN)
kill -9 $(lsof -ti tcp:2558 -sTCP:LISTEN)
kill -9 $(lsof -ti tcp:2559 -sTCP:LISTEN)

kill -9 $(lsof -ti tcp:2661 -sTCP:LISTEN)
kill -9 $(lsof -ti tcp:2662 -sTCP:LISTEN)
kill -9 $(lsof -ti tcp:2663 -sTCP:LISTEN)
kill -9 $(lsof -ti tcp:2664 -sTCP:LISTEN)
kill -9 $(lsof -ti tcp:2665 -sTCP:LISTEN)
kill -9 $(lsof -ti tcp:2666 -sTCP:LISTEN)

On other *nix systems:

fuser -k -n tcp 2551

etc.
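The per-port kill commands can also be wrapped in a small helper. This is a sketch (assuming lsof's -t and -sTCP:LISTEN flags are available, as on OS X and most Linux distributions) that skips ports with no listener instead of erroring:

```shell
# Sketch: stop whichever sample node is listening on a given TCP port.
# lsof -t prints only PIDs; -sTCP:LISTEN restricts to listening sockets.
stop_node() {
  local pids
  pids=$(lsof -ti tcp:"$1" -sTCP:LISTEN 2>/dev/null)
  if [ -n "$pids" ]; then
    kill -9 $pids            # unquoted on purpose: may be several PIDs
    echo "killed $pids on port $1"
  else
    echo "no listener on port $1"
  fi
}

# Example: stop the FooCluster seed node
stop_node 2551
```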

Booting the Sample Cluster In Multiple VMs with Vagrant

You'll need this Vagrant box: https://github.com/dsugden/vagrant-ansible-ubuntu-oracle-java8, packaged and installed (NOTE: the project Vagrantfile assumes the box is named ubuntu/trusty64_oraclejava8).

You'll then need to create a runnable distribution of the SampleCluster code to deploy on the sample VMs. There are two options:

  1. Create and use a standard zip file
  2. Create and use a Debian package file

Building a Zip File

You'll need to generate a distribution zip file, and then unzip it so we can access it from our Vagrant VMs:

sbt sampleCluster/dist

cd sampleCluster/target/universal
unzip samplecluster-1.0.0.zip
sudo chmod +x samplecluster-1.0.0/bin/samplecluster

Then, continue on to start up your test nodes.

Building a Debian Package

Since we're working with an Ubuntu VM for this style of testing, we have the option of using a Debian package.

Just tell SBT to create the Debian package, and we'll install it when we need it in a bit:

sbt sampleCluster/debian:packageBin

Then, continue on to start up your test nodes.

Booting the VM Test Nodes

Start up the Vagrant environment, which will boot 4 VMs for us, each capable of running nodes of the Akka SampleCluster:

vagrant up

Then, we'll want 4 separate terminal windows or tabs, and to log in to each VM:

vagrant ssh seed
vagrant ssh member_2
vagrant ssh member_3
vagrant ssh member_4

On each of these nodes we'll need to make the Akka SampleCluster available to boot in several roles. We can either use the zip-based Universal package, or install our Debian package.

To use the zip-based Universal package (which we already unzipped), run the following on each VM:

export PATH=/vagrant/sampleCluster/target/universal/samplecluster-1.0.0/bin:$PATH

This will make the samplecluster script (used to boot each Akka cluster node) available in your standard shell path.

Alternatively, installing the Debian package on each of the 4 VMs will also make it available on your path:

sudo dpkg -i /vagrant/sampleCluster/target/samplecluster_1.0.0_all.deb
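Either way, a quick sanity check (generic shell, nothing project-specific) confirms the launcher is actually reachable before booting any nodes:

```shell
# Report whether the samplecluster launcher is reachable on the PATH.
check_launcher() {
  if command -v samplecluster >/dev/null 2>&1; then
    echo "samplecluster found at: $(command -v samplecluster)"
  else
    echo "samplecluster not on PATH - check the export or dpkg step above"
  fi
}
check_launcher
```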

We will then need to boot up a seed node on the seed VM, which will act as the primary cluster member (with a stable, known address) for other nodes to contact:

samplecluster 192.168.11.20 2551 FooCluster 192.168.11.20:2551 Stable-Seed &

Next, we'll boot up 3 sample nodes on member_2:

samplecluster 192.168.11.22 2552 FooCluster 192.168.11.20:2551 Baz-Security &
samplecluster 192.168.11.22 2553 FooCluster 192.168.11.20:2551 Baz-Security &
samplecluster 192.168.11.22 2554 FooCluster 192.168.11.20:2551 Foo-Worker &

And the same on member_3:

samplecluster 192.168.11.23 2555 FooCluster 192.168.11.20:2551 Foo-Worker &
samplecluster 192.168.11.23 2556 FooCluster 192.168.11.20:2551 Bar-Worker &
samplecluster 192.168.11.23 2557 FooCluster 192.168.11.20:2551 Bar-Worker &

Finally, we'll boot 2 Akka nodes on member_4:

samplecluster 192.168.11.24 2558 FooCluster 192.168.11.20:2551 Foo-Http &
samplecluster 192.168.11.24 2559 FooCluster 192.168.11.20:2551 Bar-Http &

Discover your cluster in the console:

  1. Go to the ClusterMap tab.

  2. Click the '+' button beside "Clusters" in the left window.

  3. Enter a cluster name.

  4. "App Host" is the IP of the box where the console is running.

  5. "Seed Host" and "Port" are those of the cluster you wish to discover.

Owner: boldradius