README
Backend library for Node.js
General purpose backend library. The primary goal is to have a scalable platform for running and managing Node.js servers for Web services implementation.
This project only covers the lower portion of the Web services ecosystem: Node.js processes, HTTP servers, basic API functionality, database access, caching, messaging between processes, metrics and monitoring, a library of tools for developing Node.js servers.
For the UI and presentation layer there are no restrictions on what to use as long as it can run on top of the Express server.
Features:
- Exposes a set of Web service APIs over HTTP(S) using Express framework.
- Database API supports SQLite, PostgreSQL, DynamoDB, ElasticSearch with all basic operations behaving the same way, allowing you to switch databases without changing the code.
- Database operations (Get, Put, Del, Update, Select) for all supported databases using the same DB API.
- Experimental database drivers for MySQL, Cassandra, Riak, CouchDB
- Experimental DynamoDB Streams processing in background worker processes
- Easily extensible to support any kind of database, provides a database driver on top of Redis with all supported methods as an example.
- Provides accounts, connections, locations, messaging and icons APIs with basic functionality for a quick start.
- Supports crontab and queue job processing by separate worker processes.
- Authentication is based on signed requests using API key and secret, similar to Amazon AWS signing requests.
- Runs the web server as separate processes to utilize multiple CPU cores.
- Supports WebSocket connections and processes them with the same Express routes as HTTP requests.
- Supports several cache modes (Redis, Memcache, Hazelcast, LRU) for database operations, with multiple hosts support in the clients for failover.
- Supports several PUB/SUB modes of operations using Redis, RabbitMQ, Hazelcast.
- Supports async jobs processing using several work queue implementations on top of SQS, Redis, DB, RabbitMQ, Hazelcast.
- ImageMagick as a separate C++ module for in-process image scaling, see bkjs-wand on NPM.
- REPL (command line) interface for debugging and looking into server internals.
- Supports push notifications via Webpush, APN and FCM.
- Supports HTTP(S) reverse proxy mode where multiple Web workers are load-balanced by the proxy server running in the master process instead of relying on the OS scheduling between processes listening on the same port.
- Can be used with any MVC, MVVC or other types of frameworks that work on top of, or with, the Express server.
- AWS support is very well integrated including EC2, S3, DynamoDB, SQS and more.
- Includes simple log watcher to monitor the log files including system errors.
- Supports i18n hooks for request/response objects, easily overridden with any real i18n implementation.
- Integrated very light unit testing facility which can be used to test modules and API requests
- Supports runtime metrics about timings for database, requests, cache, and memory, as well as request rate limit control.
- Full implementation of SRP6a protocol in the server and client
- Hosted on github, BSD licensed.
Check out the Documentation for more details.
Installation
To install the module with all optional dependencies if they are available in the system
npm install backendjs
This may take some time because required dependencies like ImageMagick are downloaded and compiled. They are not required by all applications but are still part of the core of the system so they are available once needed.
To install from the git
npm install git+https://github.com/vseryakov/backendjs.git
or simply
npm install vseryakov/backendjs
Quick start and introduction
The simplest way of using backendjs; it will start the server listening on port 8000:
$ node
> const bkjs = require('backendjs')
> bkjs.server.start()
Access is allowed only with a valid signature except for URLs that are explicitly allowed without it (see the api-allow config parameter below).
The same, but using the helper tool; by default it will use the embedded SQLite database and listen on port 8000:
bkjs web
or to a PostgreSQL server as the database backend (if not running, a local server can be started with bkjs init-pgsql if PostgreSQL is installed):
bkjs web -db-pool pgsql -db-pgsql-pool postgresql://postgres@localhost/backend
To start the server and connect to DynamoDB (command line parameters can be saved in the etc/config file, see below about config files):
bkjs web -db-pool dynamodb -db-dynamodb-pool default -aws-key XXXX -aws-secret XXXX
If running on an EC2 instance with an IAM profile there is no need to specify AWS credentials:
bkjs web -db-pool dynamodb -db-dynamodb-pool default
or to an Elasticsearch server as the database backend:
bkjs web -db-pool elasticsearch -db-elasticsearch-pool http://127.0.0.1:9200
All commands above will behave exactly the same.
Tables are not created by default; in order to initialize the database, run the server or the shell with the -db-create-tables flag. It is applied only inside a master process, a worker never creates tables on start.
To prepare the tables in the shell:
bksh -db-pool dynamodb -db-dynamodb-pool default -db-create-tables
or run the server and create tables on start, running Elasticsearch locally first:
bkjs get-elasticsearch
bkjs run-elasticsearch
bkjs web -db-pool elasticsearch -db-elasticsearch-pool http://127.0.0.1:9200 -db-create-tables
While the local backendjs is running, the documentation is always available at http://localhost:8000/doc.html (or whatever port the server is using).
To add users from the command line
bksh -account-add login test secret test name TestUser email test@test.com -scramble 1
By default no external modules are loaded, so it needs the accounts module enabled with the -allow-modules PATTERN parameter; this will load all modules that match the pattern. Default modules start with bk_:
bkjs web -allow-modules bk_
To start a Node.js shell with backendjs loaded and initialized (all command line parameters apply to the shell as well):
bkjs shell
To access the database while in the shell
> db.select("bk_user", {}, console.log);
> db.select("bk_user", {}, lib.log);
> db.add("bk_user", { id: 'test2', login: 'test2', secret: 'test2', name: 'Test 2 name' }, lib.log);
> db.select("bk_user", { id: 'test2' }, lib.log);
> db.select("bk_user", { id: ['test1','test2'] }, { ops: { id: "in" } }, lib.log);
To search using Elasticsearch (assuming it runs on EC2 and it is synced with DynamoDB using streams)
> db.select("bk_user", { q: 'test' }, { pool: "elasticsearch" }, lib.log);
To run an example
The library is packaged with copies of Bootstrap, jQuery, and Knockout.js for quick Web development in the web/js and web/css directories; all scripts are available from the browser under the /js or /css paths. To use them all at once as a bundle run the following command:
npm run devbuild
Go to the examples/api directory.
Run the application, it will start the Web server on port 8000:
./app.sh
Now log in with the new account: go to http://localhost:8000/api.html and click on Login in the top-right corner, then enter 'test' as the login and 'test' as the secret in the login popup dialog.
To see your account details run the command in the console
/account/get
To see current metrics run the command in the console
/system/stats/get
When the web server is started with the -watch parameter, any change in the source files will make the server restart automatically, letting you focus on the source code and not server management. This mode is only enabled by default in development mode; check app.sh for parameters before running it in production.
Configuration
Almost everything in the backend is configurable using config files, a config database or DNS. The principle behind it is that once deployed in production, even quick restarts may not be possible, so there should be a way to push config changes to the processes without restarting.
Every module defines a set of config parameters that define the behavior of the code. Due to the single-threaded nature of Node.js it is simple to update any config parameter to a new value so the code can operate differently. To achieve this the code must be written in a special way, i.e. driven by configuration which can be changed at any time.
All configuration goes through the configuration process that checks all inputs and produces valid output which is applied to the module variables. A config file or a database table with configuration can be loaded on demand or periodically; for example, all local config files are watched for modification and reloaded automatically, while the config database is loaded periodically at an interval defined by another config parameter.
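For illustration, here is a hedged sketch of how a config-driven parameter might be defined in a module; the parameter name cache-ttl is made up for this example, and the command line and config file forms follow the module-prefix convention used elsewhere in this document:
const bkjs = require('backendjs');
const core = bkjs.core;
const app = bkjs.app;

// Define a hypothetical config parameter for the app module,
// after configuration it becomes available as app.cacheTtl
core.describeArgs('app', [
    { name: "cache-ttl", type: "int", descr: "Cache TTL in milliseconds" },
]);

// The same value can then be supplied without code changes as:
//   command line:     -app-cache-ttl 30000
//   etc/config line:  app-cache-ttl=30000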
Backend runtime
When the backendjs server starts it spawns several processes that perform different tasks.
There are 2 major tasks of the backend that can be run at the same time or in any combination:
- a Web server (server) with Web workers (web)
- a job scheduler (master)
These features can be run standalone or under the guard of the monitor which tracks all running processes and restarts any failed ones.
This is the typical output from the ps command on a Linux server:
ec2-user 891 0.0 0.6 1071632 49504 ? Ssl 14:33 0:01 bkjs: monitor
ec2-user 899 0.0 0.6 1073844 52892 ? Sl 14:33 0:01 bkjs: master
ec2-user 908 0.0 0.8 1081020 68780 ? Sl 14:33 0:02 bkjs: server
ec2-user 917 0.0 0.7 1072820 59008 ? Sl 14:33 0:01 bkjs: web
ec2-user 919 0.0 0.7 1072820 60792 ? Sl 14:33 0:02 bkjs: web
ec2-user 921 0.0 0.7 1072120 40721 ? Sl 14:33 0:02 bkjs: worker
To enable any task a command line parameter must be provided; it cannot be specified in the config file. The bkjs utility supports several commands that simplify running the backend in different modes.
- bkjs start - this command is supposed to be run at server startup as a service; it runs in the background and monitors all tasks. The env variable BKJS_SERVER can be set in the profile to one of master or monitor to define which run mode to use, the default mode is monitor.
- bkjs monitor - this command is supposed to be run at server startup; it runs in the background and monitors all processes. The command line parameters are: -daemon -monitor -master -syslog
- bkjs master - this command is supposed to be run at server startup; it runs in the background and monitors all processes. The command line parameters are: -daemon -monitor -master -syslog
- bkjs watch - runs the master and Web server in watcher mode, checking all source files for changes; this is the common command to be used in development. It passes the command line switches: -watch -master
- bkjs web - this command runs just the web server process.
- bkjs run - this command runs without other parameters; all additional parameters can be added on the command line. This command is a bare-bones helper to be used with any other custom settings.
- bkjs shell or bksh - starts the backendjs shell; no API or Web server is initialized, only the database pools.
Application structure
The main purpose of backendjs is to provide an API to access the data; the data can be stored in a database or some other way, but access to that data will be over HTTP and returned back as JSON. This is the default functionality but any custom application may return data in whatever format is required.
Basically backendjs is a Web server with the ability to perform data processing using local or remote jobs which can be scheduled similar to Unix cron.
The principle behind the system is that nowadays API services just return data which Web apps or mobile apps render to the user without the backend being involved. This does not mean it is a simple gateway to the database; in many cases it is, but if special processing of the data is needed before sending it to the user, it is possible to do, and backendjs provides many convenient helpers and tools for it.
When the API layer is initialized, the api module contains an app object which is the Express server.
A special module/namespace app is designated to be used for application development/extension. This module is available in the same way as api and core, which makes it easy to reference and extend with additional methods and structures.
The typical structure of a backendjs application is the following:
const bkjs = require('backendjs');
const core = bkjs.core;
const api = bkjs.api;
const app = bkjs.app;
const db = bkjs.db;
app.listArg = [];
// Define the module config parameters
core.describeArgs('app', [
{ name: "list-arg", array: 1, type: "list", descr: "List of words" },
{ name: "int-arg", type: "int", descr: "An integer parameter" },
]);
// Describe the tables or data models, all DB pools will use it, the master or shell
// process only creates new tables, workers just use the existing tables
db.describeTables({
...
});
// Optionally customize the Express environment, setup MVC routes or else, `api.app` is the Express server
app.configureMiddleware = function(options, callback)
{
...
callback()
}
// Register API endpoints, i.e. url callbacks
app.configureWeb = function(options, callback)
{
api.app.get('/some/api/endpoint', (req, res) => {
// to return an error, the message will be translated with internal i18n module if locales
// are loaded and the request requires it
api.sendReply(res, err);
// or with custom status and message, explicitly translated
api.sendReply(res, 404, res.__({ phrase: "not found", locale: "fr" }));
// with config check
if (app.intArg > 5) ...
if (app.listArg.indexOf(req.query.name) > -1) ...
// to send data back with optional postprocessing hooks
api.sendJSON(req, err, data);
// or simply
res.json(data);
});
...
callback();
}
// Optionally register post processing of the returned data from the default calls
api.registerPostProcess('', /^\/account\/([a-z\/]+)$/, function(req, res, rows) { ... });
...
// Optionally register access permissions callbacks
api.registerAccessCheck('', /^\/test\/list$/, function(req, status, callback) { ... });
api.registerPreProcess('', /^\/test\/list$/, function(req, status, callback) { ... });
...
bkjs.server.start();
Except for app.configureWeb and server.start() all other functions are optional; they are here for the sake of completeness of the example. Also, because running the backend involves more than just running a web server, many things can be set up using configuration options like common access permissions and cron job configuration, so the amount of code to be written to have a fully functioning production API server is not that much; basically only the request endpoint callbacks must be provided in the application.
As with any Node.js application, node modules are the way to build and extend the functionality, backendjs does not restrict how the application is structured.
Modules
Another way to add functionality to the backend is via external modules specific to the backend, these modules are loaded on startup from the backend
home subdirectory modules/
and from the backendjs package directory for core modules. The format is the same as for regular Node.js modules and
only top level .js files are loaded on the backend startup.
By default no modules are loaded except bk_user; module loading must be configured with the -allow-modules config parameter.
The modules are managed per process role; by default the server and master processes do not load any modules at all to keep them small, and because they monitor workers, the less code they have the better.
The shell process loads all modules, it is configured with .+.
To enable any module to be loaded in any process it can be configured by using a role in the config parameter:
// Global modules except server and master
-allow-modules '.+'
// Master modules
-allow-modules-master 'bk_user|bk_system'
Once loaded they have the same access to the backend as the rest of the code; the only difference is that they reside in the backend home and can be shipped independently of npm, node modules and other environment setup. These modules are exposed in core.modules the same way as all other core submodules.
Let's assume modules/ contains a file facebook.js which implements custom Facebook logic:
const bkjs = require("backendjs");
const fb = {
args: [
{ name: "token", descr: "API token" },
]
}
module.exports = fb;
fb.configureWeb = function(options, callback) {
...
}
fb.makeRequest = function(options, callback) {
bkjs.core.sendRequest({ url: options.path, query: { access_token: fb.token } }, callback);
}
This is the main app code:
const bkjs = require("backendjs");
const core = bkjs.core;
const api = bkjs.api;
// Using facebook module in the main app
api.app.get("some url", (req, res) => {
core.modules.facebook.makeRequest({ path: "/me" }, (err, data) => {
bkjs.api.sendJSON(req, err, data);
});
});
bkjs.server.start();
NPM packages as modules
In case different modules are better kept separately for maintenance or development purposes, they can be split into separate NPM packages. The structure is the same: modules must be in the modules/ folder and the package must be loadable via require as usual. In most cases just an empty index.js is enough. Such modules will not be loaded via require though, but by the backendjs core.loadModule machinery; the NPM packages just keep different module directories separate from each other.
The config parameter allow-packages can be used to specify NPM package names to be loaded, separated by commas. As with the default application structure, all subfolders inside each NPM package will be added to the core:
- modules will be loaded from the modules/ folder
- locales from the locales/ folder
- files in the web/ folder will be added to the static search path
- all templates from views/ folder will be used for rendering
If there is a config file present as etc/config it will be loaded as well; this way each package can maintain its default config parameters if necessary without touching other or global configuration. Such config files will not be reloaded on changes though: when NPM installs or updates packages it moves files around, so there is no point in watching the old config file because the updated one will be a different file.
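For illustration only (the package and file names here are hypothetical), such a package could contain just an empty index.js plus a module under modules/, and be enabled with -allow-packages my-billing:
// my-billing/index.js - can stay empty, it only makes the package resolvable via require
module.exports = {};

// my-billing/modules/billing.js - picked up by the backendjs module loader on startup
const bkjs = require("backendjs");

const mod = {
    name: "billing",
    args: [
        { name: "currency", descr: "Default currency code" },
    ],
};
module.exports = mod;

mod.configureWeb = function(options, callback) {
    // Register a simple endpoint, same pattern as any other backendjs module
    bkjs.api.app.get("/billing/currency", (req, res) => {
        bkjs.api.sendJSON(req, null, { currency: mod.currency || "USD" });
    });
    callback();
};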
Database schema definition
The backend supports multiple databases and provides the same db layer for access. Common operations are supported and all other specific usage can be achieved by using SQL directly or another query language supported by any particular database.
The database operations supported in the unified way provide simple actions like db.get, db.put, db.update, db.del, db.select. The db.query method provides generic access to the database driver and executes a given query directly by the db driver; it can be SQL or another driver-specific query request.
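For example, a raw query can be sent directly to a specific pool. This is only a sketch; the request object shape below assumes a SQL pool where a statement with text and values can be passed through:
// Assumed usage: pass a driver-specific statement to db.query and pick the pool in options
db.query({ text: "SELECT id, name FROM album WHERE name LIKE $1", values: ["Summer%"] }, { pool: "pgsql" }, (err, rows) => {
    if (err) console.error(err);
    console.log(rows);
});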
Before the tables can be queried the schema must be defined and created; the backend db layer provides simple functions to do it:
- first the table needs to be described, this is achieved by creating a JavaScript object with properties describing each column. Multiple tables can be described at the same time; for example, let's define an album table and make sure it exists when we run our application:
db.describeTables({
album: {
id: { primary: 1 }, // Primary key for an album
name: { pub: 1 }, // Album name, public column
mtime: { type: "now" }, // Modification timestamp
},
photo: {
album_id: { primary: 1 }, // Combined primary key
id: { primary: 1 }, // consisting of album and photo id
name: { pub: 1, index: 1 }, // Photo name or description, public column with the index for faster search
mtime: { type: "now" }
}
});
- the system will automatically create the album and photo tables, this definition must remain in the app source code and be called on every app startup. This allows 1) seeing the db schema while working with the app and 2) maintaining it easily by adding new columns if necessary; all new columns will be detected and the database tables updated accordingly. And it is all JavaScript, no need to learn one more language or syntax to maintain database tables.
Each database may restrict how the schema is defined and used; the db layer does not provide an artificial layer hiding all specifics, it just provides the same API and syntax. For example, DynamoDB tables must have only a hash primary key or a combined hash and range key, so when creating a table to be used with DynamoDB only one or two columns can be marked with the primary property, while for SQL databases the composite primary key can consist of more than 2 columns.
The backendjs always creates several tables in the configured database pools by default; these tables are required to support default API functionality and some are required for backend operations. Refer below to the JavaScript modules documentation which describes which tables are created by default. In custom applications the db.describeTables method can modify columns in the default tables and add more columns if needed.
For example, to make the age and some other additional columns in the accounts table public and visible to other users, the following can be done in the api.initApplication method. It will extend the bk_user table and the application can use the new columns the same way as the already existing columns.
Using the birthday column, the 'age' property is automatically calculated and visible in the result; this is done by the internal method api.processAccountRow which is registered as a post process callback for the bk_user table. The computed property age will be returned because it is not present in the table definition and all properties not defined and configured are passed as is.
The cleanup of the public columns is done by api.sendJSON which is used by all API routes when ready to send data back to the client. If any post-process hooks are registered and return data themselves then it is the hooks' responsibility to clean up non-public columns.
db.describeTables({
bk_user: {
birthday: {},
ssn: {},
salary: { type: "int" },
occupation: {},
home_phone: {},
work_phone: {},
    },
});
app.configureWeb = function(options, callback)
{
db.setProcessRow("post", "bk_user", this.processAccountRow);
...
callback();
}
app.processAccountRow = function(req, row, options)
{
if (row.birthday) row.age = Math.floor((Date.now() - core.toDate(row.birthday))/(86400000*365));
}
To define tables inside a module just provide a tables
property in the module object, it will be picked up by database initialization automatically.
const bkjs = require("backendjs");
const db = bkjs.db;

const mod = {
name: "billing",
tables: {
invoices: {
id: { type: "int", primary: 1 },
name: {},
price: { type: "real" },
mtime: { type: "now" }
}
}
}
module.exports = mod;
// Run db setup once all the DB pools are configured, for example produce dynamic icon property
// for each record retrieved
mod.configureModule = function(options, callback)
{
db.setProcessRow("post", "invoices", function(req, row, opts) {
if (row.id) row.icon = "/images/" + row.id + ".png";
});
callback();
}
API requests handling
All methods will have input parameters in req.query, for GET or POST requests.
One way to verify input values is to use lib.toParams; only the specified parameters will be returned and converted according to their type, others are ignored.
Example:
var params = {
test1: { id: { type: "text" },
count: { type: "int" },
email: { regexp: /^[^@]+@[^@]+$/ }
}
};
api.app.all("/endpoint/test1", function(req, res) {
const query = lib.toParams(req.query, params.test1);
if (typeof query == "string") return api.sendReply(res, 400, query);
...
});
Example of TODO application
Here is an example of how to create a simple TODO application using any database supported by the backend. It supports basic operations like add/update/delete a record and show all records.
Create a file named app.js
with the code below.
const bkjs = require('backendjs');
const api = bkjs.api;
const lib = bkjs.lib;
const app = bkjs.app;
const db = bkjs.db;
// Describe the table to store todo records
db.describeTables({
todo: {
id: { type: "uuid", primary: 1 }, // Store unique task id
due: {}, // Due date
name: {}, // Short task name
descr: {}, // Full description
mtime: { type: "now" } // Last update time in ms
}
});
// API routes
app.configureWeb = function(options, callback)
{
api.app.get(/^\/todo\/([a-z]+)$/, function(req, res) {
var options = api.getOptions(req);
switch (req.params[0]) {
case "get":
if (!req.query.id) return api.sendReply(res, 400, "id is required");
db.get("todo", { id: req.query.id }, options, (err, rows) => { api.sendJSON(req, err, rows); });
break;
case "select":
options.noscan = 0; // Allow empty scan of the whole table if no query is given, disabled by default
db.select("todo", req.query, options, (err, rows) => { api.sendJSON(req, err, rows); });
break;
case "add":
if (!req.query.name) return api.sendReply(res, 400, "name is required");
// By default due date is tomorrow
if (req.query.due) req.query.due = lib.toDate(req.query.due, Date.now() + 86400000).toISOString();
db.add("todo", req.query, options, (err, rows) => { api.sendJSON(req, err, rows); });
break;
case "update":
if (!req.query.id) return api.sendReply(res, 400, "id is required");
db.update("todo", req.query, options, (err, rows) => { api.sendJSON(req, err, rows); });
break;
case "del":
if (!req.query.id) return api.sendReply(res, 400, "id is required");
db.del("todo", { id: req.query.id }, options, (err, rows) => { api.sendJSON(req, err, rows); });
break;
}
});
callback();
}
bkjs.server.start();
Now run it with an option to allow API access without an account:
node app.js -log debug -web -api-allow-path /todo -db-create-tables
To use a different database, for example PostgreSQL (running locally) or DynamoDB (assuming an EC2 instance), all config parameters can be stored in etc/config as well:
node app.js -log debug -web -api-allow-path /todo -db-pool dynamodb -db-dynamodb-pool default -db-create-tables
node app.js -log debug -web -api-allow-path /todo -db-pool pgsql -db-pgsql-pool default -db-create-tables
API commands can be executed in the browser or using curl
:
curl 'http://localhost:8000/todo/add?name=TestTask1&descr=Descr1&due=2015-01-01'
curl 'http://localhost:8000/todo/select'
Backend directory structure
When the backend server starts and no -home argument is passed on the command line, the backend makes its home environment in the ~/.bkjs directory.
It is also possible to set the default home using the BKJS_HOME environment variable.
The backend directory structure is the following:
- etc - configuration directory, all config files are there
- etc/profile - shell script loaded by the bkjs utility to customize env variables
- etc/config - config parameters, same as specified in the command line but without the leading -, one config parameter per line. Example:
  debug=1
  db-pool=dynamodb
  db-dynamodb-pool=http://localhost:9000
  db-pgsql-pool=postgresql://postgres@127.0.0.1/backend
  To specify another config file: bkjs shell -config-file file
- etc/config.local - same as the config but for cases when the local environment is different from production or for dev specific parameters
- Some config parameters can be configured in DNS as TXT records; on startup the backend will try to resolve such records and use the value if not empty. All params marked with DNS TXT can be configured in the DNS server for the domain where the backend is running; the config parameter name is concatenated with the domain and queried for the TXT record, for example: the cache-host parameter will be queried as cache-host.domain.name for a TXT record.
- etc/crontab - jobs to be run at intervals, a JSON file with a list of cron job objects. Example:
  Create a file in ~/.bkjs/etc/crontab with the following contents:
  [ { "cron": "0 1 1 * * 1,3", "job": { "app.cleanSessions": { "interval": 3600000 } } } ]
  Define the function that the cron will call with the options specified, the callback must be called at the end; create this app.js file:
  var bkjs = require("backendjs");
  bkjs.app.cleanSessions = function(options, callback) {
      bkjs.db.delAll("session", { mtime: Date.now() - options.interval }, { ops: "le" }, callback);
  }
  bkjs.server.start()
  Start the jobs queue and the web server at once:
  bkjs master -jobs-workers 1 -jobs-cron
- etc/crontab.local - additional local crontab that is read after the main one, for local or dev environment
- modules - loadable modules with specific functionality
- images - all images to be served by the API server, every subfolder represents a naming space with lots of subfolders for images
- var - database files created by the server
- tmp - temporary files
- web - Web pages served by the static Express middleware
Cache configurations
The database layer supports caching of responses using the db.getCached call; it retrieves exactly one record from the configured cache, and if no record exists it will pull it from the database and on success will store it in the cache before returning to the client. When dealing with cached records, there is a special option that must be passed to all put/update/del database methods in order to clear the local cache, so the next time the record is retrieved it will be read with the new changes from the database and will refresh the cache: { cached: true } can be passed in the options parameter for the db methods that may modify records with cached contents. In case it is required to clear the cache manually, there is the db.clearCache method for that.
Also there is a configuration option -db-caching
to make any table automatically cached for all requests.
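Below is a minimal sketch of the flow described above; the exact db.getCached and db.clearCache argument lists are assumptions made for this example, while the { cached: true } option is as described:
// Read through the cache: return the cached record or fetch it from the database and cache it
db.getCached("get", "album", { id: "123" }, (err, row) => {
    console.log(err, row);
});

// Modify the record and clear the cached copy so the next read picks up the change
db.update("album", { id: "123", name: "New name" }, { cached: true }, (err) => {
    if (err) console.error(err);
});

// Or drop the cache entry manually
db.clearCache("album", { id: "123" }, {}, () => {});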
Local
If no cache is configured the local driver is used; it keeps the cache on the master process in an LRU pool and any worker or Web process communicates with it via internal messaging provided by the cluster module. This works only for a single server.
memcached
Set ipc-cache=memcache://HOST[:PORT]
that points to the host running memcached. To support multiple servers add the option
ipc-cache-options-servers=10.1.1.1,10.2.2.1:5000
.
Redis
Set ipc-cache=redis://HOST[:PORT]
that points to the server running Redis server.
To support more than one master Redis server in the client, add additional servers in the servers parameter: ipc-cache-options-servers=10.1.1.1,10.2.2.1:5000; the client will reconnect automatically on every disconnect. To support quick failover, the node-redis module (which is used by the driver) needs the max_attempts parameter set to the number of reconnect attempts before switching to another server, e.g. ipc-cache-options-max_attempts=3. If there is only one server then it will keep reconnecting until the total reconnect time exceeds connect_timeout ms.
Any other node-redis
module parameter can be passed as well.
Cache configurations can also be passed in the URL; the system supports special parameters that start with bk- and will extract them into options automatically.
For example:
ipc-cache=redis://host1?bk-servers=host2,host3&bk-max_attempts=3
ipc-cache-backup=redis://host2
ipc-cache-backup-options-max_attempts=3
Redis Sentinel
To enable Redis Sentinel pass in the option -sentinel-servers
: ipc-cache=redis://host1?bk-sentinel-servers=host1,host2
.
The system will connect to the sentinel, get the master cache server and connect the cache driver to it; it will also constantly listen for sentinel events and fail over to a new master automatically. Sentinel uses the regular redis module and supports all the same parameters; to pass options to the sentinel driver prefix them with sentinel-:
ipc-cache=redis://host1?bk-servers=host2,host3&bk-max_attempts=3&bk-sentinel-servers=host1,host2,host3
ipc-cache-backup=redis://host2
ipc-cache-backup-options-sentinel-servers=host1,host2
ipc-cache-backup-options-sentinel-max_attempts=5
PUB/SUB or Queue configurations
Redis
To configure the backend to use Redis for PUB/SUB messaging and support the system bus, configure both the queue and the cache, because in subscribe mode a Redis connection does not allow sending any messages; publishing will be done using the cache connection in ipc.broadcast.
For example to define the system bus:
ipc-queue-system=redis://
ipc-cache-system=redis://
ipc-system-queue=system
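A hedged sketch of using the bus from application code; the channel name and the queueName option below are assumptions for this example, while ipc.broadcast is the publishing call mentioned above:
const ipc = bkjs.ipc;

// Listen for messages on the system bus (delivered over the Redis subscribe connection)
ipc.subscribe("config:reload", { queueName: "system" }, (msg) => {
    console.log("received", msg);
});

// Publish to all processes; this goes out via the cache connection as described above
ipc.broadcast("config:reload", { table: "bk_config" }, { queueName: "system" });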
Redis Queue
To configure the backend to use Redis for job processing set ipc-queue=redisq://HOST where HOST is the IP address or hostname of the single Redis server.
This driver implements a reliable Redis queue; with the visibilityTimeout config option it works similar to AWS SQS.
Once configured, all calls to jobs.submitJob will push jobs to be executed to the Redis queue; starting a backend master process somewhere with -jobs-workers 2 will launch 2 worker processes which will start pulling jobs from the queue and executing them.
The naming convention is that any function defined as function(options, callback)
can be used as a job to be executed in one of the worker processes.
An example of how to perform jobs in the API routes:
core.describeArgs('app', [
{ name: "queue", descr: "Queue for jobs" },
]);
app.queue = "somequeue";
app.processAccounts = function(options, callback) {
db.select("bk_user", { type: options.type || "user" }, (err, rows) => {
...
callback();
});
}
api.app.all("/process/accounts", function(req, res) {
jobs.submitJob({ job: { "app.processAccounts": { type: req.query.type } } }, { queueName: app.queue }, (err) => {
api.sendReply(res, err);
});
});
SQS
To use AWS SQS for job processing set ipc-queue=https://sqs.amazonaws.com....; this queue system will poll SQS for new messages on a worker and after successful execution will delete the message. For long running jobs it will automatically extend the visibility timeout if it is configured.
Local
The local queue is implemented on the master process as a list, communication is done via local sockets between the master and workers. This is intended for a single server development purposes only.
RabbitMQ
To configure the backend to use RabbitMQ for messaging set ipc-queue=amqp://HOST
and optionally amqp-options=JSON
with options to the amqp module.
Additional objects from the config JSON are used for specific AMQP functions: { queueParams: {}, subscribeParams: {}, publishParams: {} }. These will be passed to the corresponding AMQP methods: amqp.queue, amqp.queue.subscribe, amqp.publish. See the AMQP Node.js module for more info.
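For example (the option values below are illustrative only; the JSON keys are the ones listed above):
ipc-queue=amqp://10.1.1.1
amqp-options={"queueParams":{"durable":true},"subscribeParams":{"noAck":false},"publishParams":{}}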
Security configurations
API only
This is the default setup of the backend: all API requests except those explicitly allowed must provide a valid signature, while all HTML, JavaScript, CSS and image files are available to everyone. This mode assumes that Web development will be based on a 'single-page' design where only data is requested from the Web server and all rendering is done using JavaScript. This is how the examples/api/api.html developers console is implemented, using jQuery-UI and Knockout.js.
To see current default config parameters run any of the following commands:
bkjs bkhelp | grep api-allow
node -e 'require("backendjs").core.showHelp()'
Secure Web site, client verification
This is a mode when the whole Web site is secure by default, even access to the HTML files must be authenticated. In this mode the pages must define 'Bkjs.session = true' during initialization on every html page; it will enable Web sessions for the site and then there is no need to sign every API request.
The typical client JavaScript verification for an html page may look like this; it will redirect to the login page if needed, and assumes the default path '/public' is still allowed without a signature:
<link href="/css/bkjs.bundle.css" rel="stylesheet">
<script src="/js/bkjs.bundle.js" type="text/javascript"></script>
<script>
$(function () {
Bkjs.session = true;
$(Bkjs).on("bkjs.nologin", function() { window.location='/public/index.html'; });
Bkjs.koInit();
});
</script>
Secure Web site, backend verification
On the backend side, your application app.js needs more secure settings defined, i.e. no html except /public will be accessible, and in case of an error the server will redirect to the login page. Note, on the login page Bkjs.session must be set to true for all html pages to work after login without signing every API request.
- We disable all allowed paths to the html and registration:
app.configureMiddleware = function(options, callback) {
this.allow.splice(this.allow.indexOf('^/