unleashorg/unleash-edge
Unleash Edge. Helps you scale Unleash.
Unleash Edge is the successor to the Unleash Proxy.
Unleash Edge sits between the Unleash API and your SDKs and provides a cached read-replica of your Unleash instance. This means you can scale up your Unleash instance to thousands of connected SDKs without increasing the number of requests you make to your Unleash instance.
Unleash Edge offers two important features:
Performance: Unleash Edge caches in memory and can run close to your end-users. A single instance can handle tens to hundreds of thousands of requests per second.
Resilience: Unleash Edge is designed to survive restarts and operate properly even if you lose connection to your Unleash server.
Unleash Edge is built to help you scale Unleash. If you're looking for the easiest way to connect your client SDKs, check out our Frontend API.
Migrating to Edge from the Proxy
For more info on migrating, check out the migration guide, which details the differences between Edge and the Proxy and how to achieve similar behavior in Edge.
The easiest way to run Unleash Edge is via Docker. We have published a Docker image on Docker Hub. Since Edge 19.2.0 we recommend running in STRICT mode (-e STRICT=true). This means you have to tell Edge which tokens to use at startup (use -e TOKENS=). STRICT mode guarantees that Edge will only allow client tokens with access to the same set of projects as, or a subset of the projects of, the tokens it was configured with at startup.
Step 1: Pull
docker pull unleashorg/unleash-edge
Step 2: Start
docker run \
-e UPSTREAM_URL=https://app.unleash-hosted.com/demo/ \
-e TOKENS=<A comma separated list of client tokens with access to the desired environment and projects> \
-e STRICT=true \
-p 3063:3063 \
unleashorg/unleash-edge \
edge
You should see the following output:
2023-05-10T07:59:47.544611Z INFO actix_server::builder: starting 1 worker
2023-05-10T07:59:47.544907Z INFO actix_server::server: Actix runtime found; starting in Actix runtime
Step 3: verify
To verify that Edge is running, you can use curl and check that you get a few evaluated feature toggles back:
curl http://localhost:3063/api/client/features -H "Authorization: demo-app:default.70fc9102e309558b3395e7852bae426d4bfcc65e3ce868269cc3197b"
Expected output would be something like:
{
"version": 2,
"features": [
{
"name": "snowing",
"type": "release",
"description": "Enable snowing feature",
"createdAt": null,
"lastSeenAt": null,
"enabled": true,
"stale": false,
"impressionData": true,
"project": "demo-app",
"strategies": [
{
"name": "flexibleRollout",
"sortOrder": null,
"segments": null,
"constraints": [],
"parameters": {
"groupId": "snowing",
"rollout": "100",
"stickiness": "default"
}
},
{
"name": "Users",
"sortOrder": null,
"segments": [
115
],
"constraints": [
{
"contextName": "country",
"operator": "IN",
"caseInsensitive": false,
"inverted": false,
"values": [
"Belgium"
],
"value": null
}
],
"parameters": {
"users": "13"
}
}
],
"variants": [
{
"name": "989-",
"weight": 334,
"weightType": "variable",
"stickiness": "default",
"payload": {
"type": "string",
"value": "#green"
},
"overrides": []
},
{
"name": "d",
"weight": 333,
"weightType": "variable",
"stickiness": "default",
"payload": {
"type": "json",
"value": "{ \"op\": \"replace\" }"
},
"overrides": []
},
{
"name": "fdsfdsf",
"weight": 333,
"weightType": "variable",
"stickiness": "default",
"payload": null,
"overrides": []
}
]
}
...
]
}
Health endpoint
You can verify that Edge is ready to receive requests by curling our health endpoint.
curl http://localhost:3063/internal-backstage/health
{"status":"OK"}
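In automation (for example a container healthcheck or a deploy script), the health endpoint can be polled until Edge reports ready. A minimal sketch, assuming Edge listens on localhost:3063 as in the steps above:

```shell
#!/bin/sh
# Wait until Edge answers on its health endpoint before proceeding.
# -f makes curl fail on HTTP errors, -s/-S keep output quiet but show errors.
until curl -fsS http://localhost:3063/internal-backstage/health >/dev/null; do
  echo "waiting for Edge..."
  sleep 1
done
echo "Edge is ready"
```

The same one-liner works as a Docker `HEALTHCHECK` or Kubernetes readiness probe command.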
All options can be passed as arguments after docker run unleashorg/unleash-edge, or as environment variables matching the uppercase names in the reference below.
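To illustrate, the two invocations below should be equivalent: one passes options as CLI arguments, the other as environment variables (the upstream URL is the demo instance used earlier; the token list is a placeholder):

```shell
# Options as CLI arguments after the image name:
docker run -p 3063:3063 unleashorg/unleash-edge \
  edge --upstream-url https://app.unleash-hosted.com/demo/ --port 3063

# The same options as uppercase environment variables:
docker run -p 3063:3063 \
  -e UPSTREAM_URL=https://app.unleash-hosted.com/demo/ \
  -e PORT=3063 \
  unleashorg/unleash-edge edge
```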
Command Overview:
unleash-edge
Usage: unleash-edge [OPTIONS] <COMMAND>
Subcommands:
edge
— Run in edge mode
offline
— Run in offline mode
Options:
-p, --port <PORT>
— Which port should this server listen for HTTP traffic on
Default value: 3063
-i, --interface <INTERFACE>
— Which interfaces should this server listen for HTTP traffic on
Default value: 0.0.0.0
-w, --workers <WORKERS>
— How many workers should be started to handle requests. Defaults to number of physical cpus
Default value: 16
--tls-enable
— Should we bind TLS
Default value: false
--tls-server-key <TLS_SERVER_KEY>
— Server key to use for TLS
--tls-server-cert <TLS_SERVER_CERT>
— Server Cert to use for TLS
--tls-server-port <TLS_SERVER_PORT>
— Port to listen for https connection on (will use the interfaces already defined)
Default value: 3043
--instance-id <INSTANCE_ID>
— Instance id. Used for metrics reporting
Default value: 01H02BF5PSYFF9SNBGR7MJRW9D
-a, --app-name <APP_NAME>
— App name. Used for metrics reporting
Default value: unleash-edge
--markdown-help
unleash-edge edge
Run in edge mode
Usage: unleash-edge edge [OPTIONS] --upstream-url <UPSTREAM_URL>
Options:
-u, --upstream-url <UPSTREAM_URL>
— The URL of your upstream Unleash instance. Remember, this is the URL to your instance, without any trailing /api suffix
-r, --redis-url <REDIS_URL>
— A URL pointing to a running Redis instance. Edge will use this instance to persist feature and token data and read this back after restart. Mutually exclusive with the --backup-folder option
-b, --backup-folder <BACKUP_FOLDER>
— A path to a local folder. Edge will write feature and token data to disk in this folder and read this back after restart. Mutually exclusive with the --redis-url option
-m, --metrics-interval-seconds <METRICS_INTERVAL_SECONDS>
— How often should we post metrics upstream?
Default value: 60
-f, --features-refresh-interval-seconds <FEATURES_REFRESH_INTERVAL_SECONDS>
— How long between each refresh for a token
Default value: 10
--token-revalidation-interval-seconds <TOKEN_REVALIDATION_INTERVAL_SECONDS>
— How long between each revalidation of a token
Default value: 3600
-t, --tokens <TOKENS>
— Get data for these client tokens at startup. Hot starts your feature cache
-H, --custom-client-headers <CUSTOM_CLIENT_HEADERS>
— Expects curl header format (-H <HEADERNAME>: <HEADERVALUE>), for instance -H X-Api-Key: mysecretapikey
-s, --skip-ssl-verification
— If set to true, we will skip SSL verification when connecting to the upstream Unleash server
Default value: false
--pkcs8-client-certificate-file <PKCS8_CLIENT_CERTIFICATE_FILE>
— Client certificate chain in PEM encoded X509 format with the leaf certificate first. The certificate chain should contain any intermediate certificates that should be sent to clients to allow them to build a chain to a trusted root
--pkcs8-client-key-file <PKCS8_CLIENT_KEY_FILE>
— Client key is a PEM encoded PKCS#8 formatted private key for the leaf certificate
--pkcs12-identity-file <PKCS12_IDENTITY_FILE>
— Identity file in pkcs12 format. Typically this file has a pfx extension
--pkcs12-passphrase <PKCS12_PASSPHRASE>
— Passphrase used to unlock the pkcs12 file
--upstream-certificate-file <UPSTREAM_CERTIFICATE_FILE>
— Extra certificate passed to the client for building its trust chain. Needs to be in PEM format (crt or pem extensions usually are)
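Putting the persistence options together: for resilience across restarts you would typically run edge mode with either --redis-url or --backup-folder (they are mutually exclusive). A sketch using a local backup folder, where the host path, the in-container mount point, and the token value are all placeholders of our choosing:

```shell
# Edge mode with disk persistence: feature and token data are written to
# the mounted folder and read back after a restart.
docker run \
  -e UPSTREAM_URL=https://app.unleash-hosted.com/demo/ \
  -e STRICT=true \
  -e TOKENS=<your-client-token> \
  -v /var/lib/edge-backup:/edge-data \
  -p 3063:3063 \
  unleashorg/unleash-edge \
  edge --backup-folder /edge-data
```

Swap the `-v` mount and `--backup-folder` flag for `--redis-url redis://...` to persist to Redis instead.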
unleash-edge offline
Run in offline mode
Usage: unleash-edge offline [OPTIONS]
Options:
-b, --bootstrap-file <BOOTSTRAP_FILE>
-t, --tokens <TOKENS>
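As a sketch of offline mode, assuming you have saved a client features JSON payload (like the one shown in the verification step above) to a local features.json, and choosing an arbitrary token string that your SDKs will present; the file name, mount path, and token are all our own placeholders:

```shell
# Offline mode: serve features from a local bootstrap file, with no
# upstream Unleash connection.
docker run \
  -v "$(pwd)/features.json:/edge/features.json" \
  -p 3063:3063 \
  unleashorg/unleash-edge \
  offline --bootstrap-file /edge/features.json --tokens my-offline-token
```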
To make the integration simple, we have developed proxy client SDKs. You can find them all in our documentation.