Supported OS and architectures
| Platform | Supported OS | Supported architecture |
| --- | --- | --- |
| Windows | Windows 10 or Windows Server 2019 | x86_64 |
| Android | Android 6-10 | x86, arm64-v8a, or armeabi-v7a ABI |
| Linux | Ubuntu 18.04+ or Alpine 3.9+ | x86_64, aarch64, armv7, or arm |
| Docker | Docker 20+ | x86_64, aarch64, armv7, or arm |
| macOS | Big Sur or later | x86_64 or aarch64 |
Firewalls between devices of the same swarm are not supported. Wherever possible, place devices within one broadcast domain. If devices of the same swarm must span multiple broadcast domains, you need to manually configure initial peers and ensure that the broadcast domains can communicate with each other.
Apart from the mDNS parameters, the three TCP ports used by Actyx can be freely configured at startup. The command line options are:

- `--bind-swarm` for the inter-node data transfer port (default: 4001)
- `--bind-api` for the HTTP API used by applications (default: 4454)
- `--bind-admin` for the admin port used by the Node Manager and Actyx CLI (default: 4458)

Run `actyx --help` for more details.
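For example, Actyx can be launched with non-default ports; a sketch, assuming the `actyx` binary is on your `PATH` and the chosen port numbers (illustrative values, not defaults) are free:

```shell
# Start Actyx with custom ports for swarm, API, and admin traffic
actyx --bind-swarm 5001 --bind-api 5454 --bind-admin 5458
```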
Many different factors affect the performance of Actyx and your apps. Assuming a standard network setup, rugged tablets or other devices with relatively low computing power, and a standard factory use case, these are approximate limits:
| Metric | Approximate limit |
| --- | --- |
| Latency | below 200 ms (not guaranteed; depends on several factors) |
| No. of nodes | max. 100 nodes |
| Event data rate | ~1 event per node per 10 seconds |
Please refer to the performance and limits page for more detailed information, also regarding typical disk usage etc.
The following table shows the factors that influence performance. Please note that these are not requirements, but assumptions underlying the above performance characteristics:
| Factor | Assumption |
| --- | --- |
| LAN setup and latency | standard / WiFi |
| App runtimes | WebView (Node.js) and Docker |
| Devices | Rugged tablets or other devices with relatively low computing power (e.g. Raspberry Pi 3) |
| Device CPU | 1-2 GHz, x86/arm architecture |
| Device RAM | 1-4 GB |
| Business logic complexity | Standard factory use case, production data and machine data acquisition |
The performance and disk space limits described on this page only hold under the circumstances outlined above. If one of these factors changes, the limits for your solution may change as well. If you are looking for guidance on a specific use case, please refer to our conceptual guide or contact us.
Node Settings Schema
Which settings are available and which values they may take is defined in the so-called Actyx Node Settings Schema. The most recent version thereof is available for download at:
The `admin` section of the settings holds a list of authorized users, represented by their public keys (you can find yours in your OS-specific user profile folder under
If you need to access a node that doesn't accept your key, but you have administrator access to the (virtual) device it is running on, you may use `ax users add-key` to get yourself in; make sure to stop Actyx before doing this.
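A minimal sketch of regaining access, assuming the Actyx CLI is installed and `<actyx-data-dir>` is a placeholder for the node's data directory (check `ax users add-key --help` for the exact arguments of your version):

```shell
# Stop the Actyx node first (how depends on your OS / service manager).
# Generate a user key pair, if you do not have one yet:
ax users keygen
# Add your public key to the node's list of authorized users:
ax users add-key <actyx-data-dir>
# Start Actyx again; your key should now be accepted.
```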
The `displayName` property is shown in the Node Manager, in `ax nodes inspect` output, etc., so it is useful to set it to a short string that helps you identify the node.
You can change the `/admin/logLevels/node` setting to adjust the logging output verbosity at runtime; valid values are `DEBUG`, `INFO`, `WARN`, or `ERROR`.
The `/api/events/readOnly` setting controls whether the node sends events to the rest of the swarm. Its main use is to create a “silent observer” that you can use to test new app versions without risking tainting the swarm with development or test events.
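These admin and API settings can be changed at runtime with the Actyx CLI; a sketch with illustrative values, assuming a node reachable at `localhost` that accepts your user key (the exact quoting of values may vary, see `ax settings set --help`):

```shell
# Give the node a recognizable name
ax settings set /admin/displayName "Packaging line 1" localhost
# Increase log verbosity while debugging
ax settings set /admin/logLevels/node DEBUG localhost
# Make this node a silent observer that does not publish events
ax settings set /api/events/readOnly true localhost
```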
The `licensing` section is described in licensing apps.
In the `swarm` section you can fine-tune the networking behavior of Actyx:
- `announceAddresses`: an array of addresses that allows you to declare IP addresses under which the node is reachable but that are not listen addresses. This is frequently necessary when running Actyx inside a Docker container, see configuring the
- `bitswapTimeout`: maximum wait time for a data request sent to another swarm node. You may need to increase this on very slow networks if you regularly see bitswap timeout warnings in the logs.
- `blockCacheCount`: the number of IPFS data blocks kept in the local `actyx-data` folder for the current topic. All swarm event data blocks and pinned files are kept regardless of this setting; only voluntarily cached blocks can be evicted.
- `blockCacheSize`: the size in bytes up to which data blocks are kept in the local `actyx-data` folder for the current topic. The same restriction applies as for `blockCacheCount`: events and pinned files are not eligible for eviction, so the cache may grow larger.
- `blockGcInterval`: the interval at which eligible data blocks are evicted from the local `actyx-data` folder for the current topic. There should be no need to change this.
- `branchCacheSize`: cache size in bytes for those IPFS blocks that contain event stream metadata. You may need to increase this in situations where many devices have been producing events for a long time: if this cache becomes too small, application query performance will decline drastically.
- `initialPeers`: an array of addresses of the form `/ip4/<IP address>/tcp/<port>/p2p/<PeerId>` to which this node shall immediately try to connect.
- `mdns`: flag that enables or disables the use of the mDNS protocol for peer discovery.
- `pingTimeout`: each connection to another Actyx node is continually monitored via messages sent over that TCP connection (they are called “ping” but have nothing to do with the `ping` network tool). When three successive pings have not been answered within the allotted timeout, the connection is closed and re-established. You may need to increase this on very slow networks if you regularly see ping timeout warnings in the logs.
- `swarmKey`: an additional layer of encryption between Actyx nodes that allows you to separate swarms so that they cannot connect to each other. See the guide on swarm keys.
- `topic`: a logical separation of nodes within the same swarm; events can only be replicated within the same topic, and each node can only be part of one topic. This is mainly useful for effectively erasing all events within a swarm by switching all nodes to a new topic. The old events will still be in the `actyx-data` folder; you can access them by switching a node back to the old topic, and there will be no interference between the old and new topics.
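Several of these swarm settings can also be adjusted with the Actyx CLI; a sketch with illustrative values and a placeholder `<PeerId>`, assuming a node reachable at `localhost` that accepts your user key:

```shell
# Declare a bootstrap peer that this node should immediately connect to
ax settings set /swarm/initialPeers '["/ip4/192.168.0.10/tcp/4001/p2p/<PeerId>"]' localhost
# Switch the node to a fresh topic, effectively hiding all old events
ax settings set /swarm/topic production-2 localhost
```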