ETP Node Server

This is an experimental implementation of a Node.js server and HTML5 client for
the Energistics Transfer Protocol (ETP). ETP is a proposed specification for
streaming real-time data from oil field drilling and production facilities. It
uses WebSockets for transport and Apache Avro for serialization.

This implementation also uses MongoDB for storage, although that is not part of
the spec.
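For orientation, here is a rough JavaScript sketch of the kind of message envelope such a protocol exchanges. The field names below are purely illustrative and are not taken from the actual Avro .avpr schemas shipped with the server; the real server would Avro-encode the envelope and send it over a WebSocket, while this sketch only round-trips through JSON to show the shape.

```javascript
// Hypothetical ETP-style message envelope; field names are illustrative only.
function makeEnvelope(protocolId, messageType, messageId, body) {
  return { protocolId, messageType, messageId, body };
}

// The real server would Avro-encode this and send it over a WebSocket;
// a JSON round-trip stands in for that here.
const msg = makeEnvelope(1, 'ChannelData', 42, { channelId: 7, value: 1650.2 });
const decoded = JSON.parse(JSON.stringify(msg));
console.log(decoded.messageId); // 42
```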




Prerequisites

  • Install Node.js (v0.10 minimum required)
  • Install MongoDB (v3.0 minimum required)
  • Running from source requires Linux or a Linux-like Windows environment such as Cygwin


Create a working directory:

c:\>mkdir etpdemo
c:\>cd etpdemo

To install from NPM:

npm install etp-server

To install from source:

Clone the node folder from Bitbucket, e.g. into c:\etpdemo, then build:

$ make init
$ make -B
$ make test


To run from the NPM installation:

c:\etpdemo>node node_modules/etp-server/bin/server

To run from the source installation:

c:\etpdemo>node dist/bin/server

You should see startup output like:

simple-http-server Now Serving: ./ at http://localhost:8080/
Wed May 08 2013 08:05:21 GMT-0500 (Central Daylight Time) RaLF Server is listening on port 8081

Now point your modern, HTML5-compliant browser at http://localhost:8080


Command Line Options

The following options can be passed on the command line when starting the server.

| Option | Default | Description |
| --- | --- | --- |
| --httpServer | true | Run the web server; set to false if you only want the ETP WebSocket server |
| --httpPort | 8080 | Web server port |
| --wsPort | 8081 | WebSocket port |
| --schemas | latest | Name of the RaLF schema file to use; any of the .avpr files in the schema folder can be used |
| --autoSubscribe | false | Start pushing data without a subscription |
| --defaultSubscription | (none) | Name of a URI to use when auto-publishing |
| --databaseConnectString | mongodb://localhost:27017/witsml | MongoDB connection string |
| --traceMessages | false | Write a disk log of each message sent and received by the server |
| --traceDirectory | trace | Name of the folder to hold the trace files |
| --help | n/a | Print this information |
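A minimal sketch of how "--flag value" style options like the ones above could be parsed. The real etp-server may parse its flags differently; the defaults below are copied from the options listed above.

```javascript
// Illustrative parser for "--flag value" options; not the server's actual code.
function parseOptions(argv) {
  const opts = {
    httpServer: true,
    httpPort: 8080,
    wsPort: 8081,
    schemas: 'latest',
    autoSubscribe: false,
    databaseConnectString: 'mongodb://localhost:27017/witsml',
    traceMessages: false,
    traceDirectory: 'trace',
  };
  for (let i = 0; i < argv.length; i++) {
    if (!argv[i].startsWith('--')) continue;
    const key = argv[i].slice(2);
    const value = argv[i + 1];
    if (value === 'true') opts[key] = true;        // boolean flags
    else if (value === 'false') opts[key] = false;
    else if (/^\d+$/.test(value)) opts[key] = Number(value); // port numbers
    else opts[key] = value;                        // strings (URIs, paths)
  }
  return opts;
}

const opts = parseOptions(['--wsPort', '9000', '--httpServer', 'false']);
console.log(opts.wsPort, opts.httpServer); // 9000 false
```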

Recording Clients

The server now has the ability to record streaming data from other servers, store it in the
database and relay the points to any subscribed clients, essentially acting as an aggregator.
To enable this feature:

  1. Create a config directory under the main etp-server folder.
  2. In it, create a file called 'recorders.json'.
  3. It should contain a single JSON array of the servers you would like to connect to, for example:

             "url": "ws://localhost:8082",
             "encoding": "binary",
             "retryInterval": "20000",
             "active": true,
             "contextUri": "eml:///witsml1411/log(LOUIS-1)"
             "url": "ws://",
             "encoding": "binary",
             "retryInterval": "20000",
             "active": true,
             "contextUri": "eml:///witsml1411/log(SimpleStreamer-1)"
             "url": "ws://",
             "encoding": "binary",
             "retryInterval": "20000",
             "active": true,
             "contextUri": "eml:///witsml1411/log(BOROMIR-1)"

Fields in recorders.json:

url - The address and port of the server to connect to.

encoding - Currently only binary is supported.

retryInterval - If the server is not available, or goes down during the connection, the recording
client will attempt to retry at this interval (in milliseconds). Set this value to
0 if you don't want to retry at all.

active - If this is set to false, the aggregating server will not even try to connect.


Utilities

In addition to the main server application, there are a number of stand-alone
utilities to help load the database and provide simulated data for various
ETP configuration scenarios.



perfServer

perfServer.js (located in the bin directory) is a convenient way of creating
a simple streaming server. It uses the Windows perf counters as a data source
(and thus works only on Windows machines) to generate channels at one-second
intervals. Use --help to see the options. One of the options for perfServer is
--skipDuplicates, which causes it to send data points only when the value of
the perf counter changes. You will still get a data set per second, but
containing only the values that have changed.
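The --skipDuplicates behaviour described above can be sketched as a small deduper that forwards a sampled value only when it differs from the last value sent for that channel. The function names here are hypothetical, not taken from perfServer.js.

```javascript
// Illustrative sketch of skip-duplicates filtering; names are hypothetical.
function makeDeduper() {
  const last = new Map(); // last value sent per channel
  return function shouldSend(channel, value) {
    if (last.get(channel) === value) return false; // unchanged: skip
    last.set(channel, value);
    return true; // changed (or first sample): send
  };
}

const shouldSend = makeDeduper();
const samples = [['cpu', 5], ['cpu', 5], ['cpu', 7], ['mem', 5]];
const sent = samples.filter(([ch, v]) => shouldSend(ch, v));
console.log(sent.length); // 3 (the repeated ['cpu', 5] is dropped)
```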



loadAll

loadAll.js (located in the bin directory) can be used to populate your server
database with existing WITSML data. loadAll uses a pool of up to 10 or so
processes to load the data set in parallel. There is a companion file, loadOne.js,
which is forked by loadAll and can also be run stand-alone to load a single
document. Loading the full data set can take 10-30 minutes depending on the
memory, cores, SSD, etc. of your machine.
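The bounded worker-pool idea behind loadAll can be sketched as below. Plain async tasks stand in for forked loadOne.js processes here; the real loadAll.js uses child processes, and the pool size of 3 is just for the demo.

```javascript
// Run tasks with at most `limit` in flight at once, preserving result order.
// A sketch of loadAll's pool idea; async tasks stand in for forked processes.
async function runPool(tasks, limit) {
  const results = [];
  let next = 0;
  async function worker() {
    while (next < tasks.length) {
      const i = next++;          // claim the next task index
      results[i] = await tasks[i]();
    }
  }
  const workers = Array.from({ length: Math.min(limit, tasks.length) }, worker);
  await Promise.all(workers);
  return results;
}

// Ten dummy "documents" loaded with at most 3 in flight at once.
const tasks = Array.from({ length: 10 }, (_, i) => async () => i * 2);
runPool(tasks, 3).then(r => console.log(r.length, r[4])); // 10 8
```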



logPlayer

logPlayer.js (located in the bin directory) can be used to simulate a real-time feed,
using a witsml1411 time log as input. The algorithm reads the time difference
between successive rows in the data section and then uses setTimeout to send the next
row at the appropriate time. A 'speed' parameter, which is simply a divisor
applied to the number of milliseconds between rows, allows you to speed up the
simulator. Going above 1000 doesn't produce meaningful results. If you want to send the
entire log as fast as possible, specify a speed of 0.
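The pacing rule above reduces to a simple calculation: the delay before each row is the timestamp difference to the previous row divided by the speed, with speed 0 meaning no delay at all. This sketch computes only the delay schedule (the real logPlayer.js would feed these into setTimeout); the timestamps are made up.

```javascript
// Compute per-row send delays from row timestamps, per logPlayer's rule:
// delay = (time difference to previous row) / speed; speed 0 => no delay.
function rowDelays(timestamps, speed) {
  const delays = [];
  for (let i = 1; i < timestamps.length; i++) {
    const diff = timestamps[i] - timestamps[i - 1]; // milliseconds
    delays.push(speed === 0 ? 0 : diff / speed);
  }
  return delays;
}

const ts = [0, 1000, 3000]; // rows logged at t=0s, t=1s, t=3s
console.log(rowDelays(ts, 2)); // [ 500, 1000 ]  (2x speed)
console.log(rowDelays(ts, 0)); // [ 0, 0 ]       (as fast as possible)
```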

Known Issues

- Does not currently support re-connecting sessions.
- Only supports describing individual channels.