A REST API wrapper for the Polymesh Private blockchain.
This version is compatible with Polymesh Private chain versions 1.x and 2.x.
This repo uses a submodule to avoid duplicating the public REST API. The submodules can be pulled with:

```bash
git submodule update --init --recursive
```
- Node.js version 18.x
- Yarn version 1.x
Note, if running with Node v16+, the env `NODE_OPTIONS` should be set to `--unhandled-rejections=warn`.
```bash
$ yarn
```
```bash
# development
$ yarn start

# watch mode
$ yarn start:dev

# REPL (interactive command line)
$ yarn start:repl

# production mode
$ yarn start:prod
```
Documentation for REPL mode can be found here
```bash
# unit tests
$ yarn test

# e2e tests
$ yarn test:e2e

# test coverage
$ yarn test:cov
```
```env
PORT=## port on which the server will listen. Defaults to 3000 ##
POLYMESH_NODE_URL=## websocket URL for a Polymesh node ##
POLYMESH_MIDDLEWARE_V2_URL=## URL for an instance of the Polymesh GraphQL Middleware Native SubQuery service ##
LOCAL_SIGNERS=## list of comma separated IDs to refer to the corresponding mnemonic ##
LOCAL_MNEMONICS=## list of comma separated mnemonics for the signer service (each mnemonic corresponds to a signer in LOCAL_SIGNERS) ##

# Below are optional params that enable some features. The above should be good to get started with
DEVELOPER_SUDO_MNEMONIC=## a mnemonic that has `sudo` privileges for a chain. Defaults to `//Alice` ##
DEVELOPER_UTILS=## set to `true` to enable developer testing endpoints ##

# Vault Signer:
VAULT_URL=## The URL of a Vault transit engine ##
VAULT_TOKEN=## The access token for authorization with the Vault instance ##

# Fireblocks Signer:
FIREBLOCKS_URL=## The Fireblocks URL ##
FIREBLOCKS_API_KEY=## The API key to use ##
FIREBLOCKS_SECRET_PATH=## Path to the secret file used to sign requests ##

# Webhooks:
SUBSCRIPTIONS_TTL=## Amount of milliseconds before a subscription is considered expired ##
SUBSCRIPTIONS_MAX_HANDSHAKE_TRIES=## Amount of attempts to activate a subscription via handshake before it is considered rejected ##
SUBSCRIPTIONS_HANDSHAKE_RETRY_INTERVAL=## Amount of milliseconds between subscription handshake attempts ##
NOTIFICATIONS_MAX_TRIES=## Amount of attempts to deliver a notification before it is considered failed ##
NOTIFICATIONS_RETRY_INTERVAL=## Amount of milliseconds between notification delivery attempts ##
NOTIFICATIONS_LEGITIMACY_SECRET=## A secret used to create HMAC signatures ##

# Auth:
AUTH_STRATEGY=## list of comma separated auth strategies to use, e.g. `apiKey,open` ##
API_KEYS=## list of comma separated API keys to initialize the `apiKey` strategy with ##

# Datastore:
REST_POSTGRES_HOST=## Domain or IP of the DB instance ##
REST_POSTGRES_PORT=## Port the DB is listening on (usually 5432) ##
REST_POSTGRES_USER=## DB user to use ##
REST_POSTGRES_PASSWORD=## Password of the user ##
REST_POSTGRES_DATABASE=## Database to use ##

# Artemis:
ARTEMIS_HOST=localhost ## Domain or IP of the Artemis instance ##
ARTEMIS_USERNAME=artemis ## Artemis user ##
ARTEMIS_PASSWORD=artemis ## Artemis password ##
ARTEMIS_PORT=5672 ## Port of the AMQP acceptor ##

# Proof Server:
PROOF_SERVER_URL=## API path where the proof server is hosted ##
```
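As a starting point, a minimal development configuration using the local signer might look like the following (the URLs are placeholders for your own node and middleware instances; `//Alice` is a well-known development key):

```env
PORT=3000
POLYMESH_NODE_URL=ws://localhost:9944
POLYMESH_MIDDLEWARE_V2_URL=http://localhost:3001
LOCAL_SIGNERS=alice
LOCAL_MNEMONICS=//Alice
```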
The REST API has endpoints that submit transactions to the blockchain (generally POST routes). Each of these endpoints shares an "options" field that controls which key will sign the transaction and how it will be processed, e.g.

```
{
  options: {
    signer: "alice",
    processMode: "submit"
  },
  ...transactionParams
}
```
Process modes include:

- `submit`: creates a transaction payload, signs it and submits it to the chain. Responds with 201 when the transaction has been successfully finalized (usually around 15 seconds).
- `submitWithCallback`: works like `submit`, but returns a response as soon as the transaction is submitted. The URL specified by `webhookUrl` will receive updates as the transaction is processed.
- `dryRun`: creates and validates a transaction, and returns an estimate of its fees.
- `offline`: creates an unsigned transaction and returns a serialized JSON payload. The information can be signed, and then submitted to the chain.
- `AMQP`: creates a transaction to be processed by worker processes using an AMQP broker to ensure reliable processing.

A signing manager is required for the `submit` and `submitWithCallback` process modes.
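For example, a request with the local signer `alice` in `submit` mode might be built like this sketch (the route and transaction params are illustrative placeholders; only the `options` shape comes from this document):

```typescript
// Illustrative request body; `signer` and `processMode` come from the options DTO.
const body = {
  options: {
    signer: "alice",       // key that will sign the transaction
    processMode: "submit", // wait for the transaction to be finalized
  },
  // ...transaction-specific params go here
};

// Hypothetical helper; the route is a placeholder, not a documented endpoint.
async function submitTransaction(baseUrl: string, route: string): Promise<number> {
  const res = await fetch(`${baseUrl}${route}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(body),
  });
  return res.status; // 201 once finalized in "submit" mode
}
```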
There are currently three signing managers the REST API can be configured with: the local signer, the HashiCorp Vault signer, and the Fireblocks signing manager. If arguments for multiple are given, the precedence order is Vault over Fireblocks over Local.
For any method that modifies chain state, the key to sign with can be controlled with the `options.signer` field. This can be either the SS58 encoded address, or an ID that depends on the particular signing manager.
- Vault Signing: by setting `VAULT_URL` and `VAULT_TOKEN`, an external Vault instance will be used to sign transactions. The URL should point to a transit engine in Vault that has Ed25519 keys in it. To refer to a key when signing, use the Vault name and version: `${name}-${version}`, e.g. `alice-1`.
- Fireblocks Signing: by setting `FIREBLOCKS_URL`, `FIREBLOCKS_API_KEY` and `FIREBLOCKS_SECRET_PATH`, the Fireblocks raw signing API will be used to sign transactions. The secret path should point to a file containing the secret set up in the Fireblocks platform, along with the API key. The signer consists of 3 integers separated by `-`, as in `1-0-0`. These correspond to `account`, `change` and `address_index` from the BIP-44 standard. If the `change` and `address_index` portions are left out, they default to `0`. Each combination refers to a unique address that must be onboarded on chain before it can be used. Note, if using the docker image, the secret file will need to be mounted into the container by passing the flag `--volume $HOST_SECRET_PATH:$FIREBLOCKS_SECRET_PATH` to `docker run`.
- Local Signing: by using `LOCAL_SIGNERS` and `LOCAL_MNEMONICS`, private keys will be initialized in memory. When making a transaction that requires a signer, use the corresponding `LOCAL_SIGNERS` entry (by array offset).
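The Fireblocks signer format can be illustrated with a small helper (hypothetical, not part of the REST API) that applies the documented defaults:

```typescript
// Hypothetical helper illustrating the `account-change-address_index` format.
// Omitted `change` / `address_index` parts default to 0, per the docs above.
interface Bip44Path {
  account: number;
  change: number;
  addressIndex: number;
}

function parseFireblocksSigner(signer: string): Bip44Path {
  const parts = signer.split("-").map(Number);
  if (parts.length < 1 || parts.length > 3 || parts.some(Number.isNaN)) {
    throw new Error(`invalid signer: ${signer}`);
  }
  const [account, change = 0, addressIndex = 0] = parts;
  return { account, change, addressIndex };
}
```

For instance, a signer of `"2"` refers to the same address as `"2-0-0"`.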
Offline payloads contain a field `unsignedTransaction`, which consists of 4 keys. `payload` and `rawPayload` correspond to `signPayload` and `signRaw`. You will need to pass one of these to the respective signer you are using (or replicate `signRaw` in your environment). `method` is the hex encoded transaction, which can help verify what is being signed. `metadata` is an echo of whatever is passed as `metadata` in the options. It has no effect on operation, but can be useful for attaching extra info to transactions, e.g. a `clientId` or `memo`.

Once generated, the signature along with the payload can be passed to `/submit` to be submitted to the chain.

This mode introduces the risk that transactions are rejected due to incorrect nonces or an elapsed lifetime. See the options DTO definition for full details.
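A minimal sketch of the shape described above (the four keys come from this document; the values are placeholders, not a schema reference):

```typescript
// Sketch of an offline response; values are placeholders.
const offlineResponse = {
  unsignedTransaction: {
    payload: { /* input for a signing manager's signPayload */ },
    rawPayload: { /* input for signRaw */ },
    method: "0xplaceholder",          // hex encoded transaction, for verification
    metadata: { clientId: "abc123" }, // echo of options.metadata, e.g. clientId or memo
  },
};

// After signing, the signature and payload are passed to /submit.
// The exact body shape of /submit is not specified here; treat this as illustrative.
```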
AMQP is a form of offline processing where the payload is published on an AMQP topic instead of being returned. Currently there is a set of "offline" modules that set up listeners on the different queues.
- A transaction with "AMQP" process mode is received. It is serialized to an offline payload and published on `Requests`.
- A signer process subscribes to `Requests`. For each message it generates a signature and publishes a message on `Signatures`.
- A submitter process subscribes to `Signatures` and submits to the chain. It publishes to `Finalizations`, for consumer applications to subscribe to.
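The three-stage flow above can be sketched with in-memory arrays standing in for the broker queues (a real deployment uses an AMQP broker; the message fields and signing here are illustrative placeholders):

```typescript
// In-memory stand-ins for the Requests / Signatures / Finalizations queues.
const requests: { id: number; payload: string }[] = [];
const signatures: { id: number; payload: string; signature: string }[] = [];
const finalizations: { id: number; blockHash: string }[] = [];

// 1. The REST API serializes the transaction to an offline payload
//    and publishes it on Requests.
requests.push({ id: 1, payload: "serialized-transaction" });

// 2. A signer process consumes Requests and publishes to Signatures.
for (const msg of requests.splice(0)) {
  signatures.push({ ...msg, signature: `signed(${msg.payload})` }); // placeholder signing
}

// 3. A submitter process consumes Signatures, submits to the chain,
//    and publishes to Finalizations for consumer applications.
for (const msg of signatures.splice(0)) {
  finalizations.push({ id: msg.id, blockHash: "0xplaceholder" });
}
```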
To use AMQP mode a message broker must be configured. The implementation assumes ArtemisMQ with an AMQP acceptor, though in theory any AMQP 1.0 compliant broker should work.
If using AMQP, it is strongly recommended to use a persistent datastore (i.e. Postgres). There are two tables related to AMQP processing, `offline_tx` and `offline_event`:

- `offline_tx` is a table for the submitter process. It provides a convenient way to query submitted transactions, and to detect ones rejected by the chain for some reason.
- `offline_event` is a table for the recorder process. It uses Artemis diverts to record every message exchanged in the process, serving as an audit log.
If using the project's compose file, an Artemis console will be exposed on `:8181`, with `artemis` being both the username and password.
Normally the endpoints that create transactions wait for block finalization before returning a response, which usually takes around 15 seconds. When the `submitWithCallback` process mode is used, the `webhookUrl` param must also be provided. The server will respond after submitting the transaction to the mempool with a 202 (Accepted) status code instead of the usual 201 (Created).
Before sending any information to the endpoint, the service will first make a request with the header `x-hook-secret` set to a value. The endpoint should return a `200` response with this header copied into the response headers.
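A webhook receiver's handshake and legitimacy checks can be sketched with Node's standard library. The `x-hook-secret` header comes from this document; the signature header, hash algorithm (SHA-256), and hex encoding are assumptions to verify against the service's actual behavior:

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Handshake: echo the x-hook-secret header back in the 200 response.
function handshakeResponseHeaders(reqHeaders: Record<string, string>): Record<string, string> {
  return { "x-hook-secret": reqHeaders["x-hook-secret"] };
}

// Legitimacy check: recompute the HMAC over the raw body with the shared
// NOTIFICATIONS_LEGITIMACY_SECRET and compare in constant time.
// SHA-256 / hex encoding are assumptions, not confirmed by this document.
function isLegitimate(rawBody: string, receivedSignature: string, secret: string): boolean {
  const expected = createHmac("sha256", secret).update(rawBody).digest();
  const received = Buffer.from(receivedSignature, "hex");
  return received.length === expected.length && timingSafeEqual(received, expected);
}
```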
If you are a developer, you can enable an endpoint to aid with testing by setting the env `DEVELOPER_UTILS=true`. This enables an endpoint at `/developer-testing/webhook`, which can then be supplied as the `webhookUrl`. Note, the IsUrl validator doesn't recognize `localhost` as a valid URL; either use the IP `127.0.0.1`, or create an entry in `/etc/hosts` like `127.0.0.1 rest.local` and use that instead.
Webhooks are still being developed and should not be used against mainnet. However, the API should be stable to develop against for testing and demo purposes.

Webhooks have yet to implement a Repo to maintain subscription state, or AMQP to ensure events are not missed. As such, delivery of messages cannot be guaranteed. The plan is to use a datastore and a message broker to make this module production ready.
The REST API uses passport.js for authentication. This allows the service to be configurable with multiple strategies.
Currently there are two strategies available:
- Api Key: by configuring `apiKey` as a strategy, any request with the header `x-api-key` will be authenticated with this strategy. The env `API_KEYS` can be used to provide initial keys.
- Open: by configuring `open` as a strategy, any request will be authenticated with a default user. This is primarily intended for development; however, it can be used to provide a "read only" API. It should never be used in combination with a signing manager that holds valuable keys.
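The resolution described above can be sketched as a pure function (the real implementation uses passport.js; the return shape here is illustrative):

```typescript
// Illustrative strategy resolution, not the actual passport.js implementation.
type AuthResult = { authenticated: boolean; strategy?: "apiKey" | "open" };

function authenticate(
  headers: Record<string, string>,
  apiKeys: Set<string>, // e.g. initialized from the API_KEYS env var
  openEnabled: boolean, // whether the `open` strategy is configured
): AuthResult {
  if ("x-api-key" in headers) {
    // Header present: the apiKey strategy handles the request.
    return { authenticated: apiKeys.has(headers["x-api-key"]), strategy: "apiKey" };
  }
  if (openEnabled) {
    // Open strategy: authenticate as a default user.
    return { authenticated: true, strategy: "open" };
  }
  return { authenticated: false };
}
```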
More strategies can be added: many pre-made strategies are available, and custom ones can be written. To implement a new strategy, create a new file in `~/auth/strategies/` and update the `strategies.consts` file with an appropriate name. Be sure to add some tests for your logic as well.
The REST API takes a plugin style approach to where it stores state. Note that the Polymesh chain is responsible for processing most POST requests; the datastore only affects where REST API specific entities (e.g. Users and ApiKeys) are stored. Most transactions are permanently stored on chain regardless of the datastore used.

Currently there are two datastores available:
- LocalStore: this is the default setting. It uses process memory to store state, which allows the REST API to be run as a single process. This is convenient for development, or when an instance is intended for read-only purposes (i.e. no signers are loaded). However, all state will be lost when the process shuts down.
- Postgres: this is the more production ready approach. It allows state to be persisted, and multiple server instances to use the same information. Internally this uses TypeORM to manage the database.
`package.json` contains scripts to help manage the development postgres service defined in `docker-compose.yml`. These are all prefixed with `postgres:dev`, e.g. `yarn postgres:dev:start`, which will use the configuration defined in `postgres.dev.config`.
To implement a new repo for a service, first define an abstract class describing the desired interface. Also write a test suite to specify the expected behavior from an implementation. Then in the concrete implementations define a new Repo that satisfies the test suite.
To implement a new datastore, create a new module in `~/datastores` and create a set of Repos that implement the abstract classes. You will then need to set up the `DatastoreModule` to export the module when it is configured. For testing, each implemented Repo should be able to pass the `test` method defined on the abstract class it is implementing.
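The abstract-class-plus-shared-spec pattern can be sketched as follows (the class and method names are illustrative, and a real Repo would be async; the actual abstract classes live in the codebase):

```typescript
// Illustrative Repo pattern: an abstract class carrying a `test` behavioral spec,
// and an in-memory implementation that must satisfy it.
abstract class ApiKeysRepo {
  abstract createApiKey(userId: string): string;
  abstract userForKey(key: string): string | undefined;

  // Shared spec every concrete Repo implementation should pass.
  test(): void {
    const key = this.createApiKey("user-1");
    if (this.userForKey(key) !== "user-1") throw new Error("lookup failed");
    if (this.userForKey("missing") !== undefined) throw new Error("unexpected hit");
  }
}

// In-memory implementation, as a LocalStore-style Repo might be written.
class LocalApiKeysRepo extends ApiKeysRepo {
  private keys = new Map<string, string>();
  private next = 0;

  createApiKey(userId: string): string {
    const key = `key-${this.next++}`;
    this.keys.set(key, userId);
    return key;
  }

  userForKey(key: string): string | undefined {
    return this.keys.get(key);
  }
}
```

A Postgres-backed Repo would implement the same abstract class and be exercised by the same `test` spec.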
To pass in the env variables you can use `-e` to pass them individually, or use a file with `--env-file`. For documentation you will need to expose a port that maps to `:3000` (or `$PORT` if set) in the container.
```bash
docker build . -t $image_name
docker run -it --env-file .pme.env -p $HOST_PORT:3000 $image_name
```
Accessing `http://localhost:<PORT>` will take you to the Swagger playground UI, where all endpoints are documented and can be tested.
You may need to enable "Use Rosetta for x86/amd64 emulation on Apple Silicon" in order for the Artemis AMQP container to start. This is currently found under "Settings" > "Features in development" in Docker Desktop.
This project uses NestJS, which is MIT licensed.
The project itself is Apache 2.0 licensed.