Introduction
The On Prem Platform provides a cloud-managed Edge Agent specialized in deploying lambda-based inference workloads to bare-metal, resource-constrained devices at the source of the signal.
It aims to help you bring your control plane to your on-premise devices without heavyweight virtual machines, operator-intensive Kubernetes, or bulky Docker images. This enables developers to implement low-latency control functions informed by ML- or AI-based inference, using real-time data from IoT busses.
A cloud-hosted control plane is made available at console.on-prem.net and api.on-prem.net, and is ready for use by edge devices such as Raspberry Pi that are able to phone home to the cloud.
Cloud Console
A web console provides a collaborative lambda development experience, while also enabling you to organize the runtime environment where your lambdas will run, using a hierarchy of Facility and Device records.
CLI
A CLI enables GitOps and DevOps workflows by providing idempotent configuration capabilities via YAML or JSON files. The CLI provides access to the full set of functionality provided by the web console.
$ onprem list devices
┌──────────────────────┬───────────────┬──────────────┬─────────────┬───────────────────────────────────┐
│ id ┆ name ┆ manufacturer ┆ model ┆ uuid │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ str ┆ str ┆ str ┆ str │
╞══════════════════════╪═══════════════╪══════════════╪═════════════╪═══════════════════════════════════╡
│ cfv5v3h32ckl0mq6al70 ┆ c001-b8-n01 ┆ Raspberry Pi ┆ 4B Rev 1.4 ┆ 8938301d-9e32-5470-8f2b-0dd379cb… │
│ cfv5v8932ckl0mq6amc0 ┆ c001-b8-n03 ┆ Raspberry Pi ┆ 4B Rev 1.4 ┆ 692c82a0-a2ea-550a-9caa-e42debf0… │
│ cfv60ah32ckl0mq6au60 ┆ c001-b8-n04 ┆ Raspberry Pi ┆ 4B Rev 1.4 ┆ acc28e51-0f57-552b-a36e-9023db67… │
│ cfv60o932ckl0mq6b1d0 ┆ c001-b8-n05 ┆ Raspberry Pi ┆ 4B Rev 1.4 ┆ 8225ad60-4ce5-5f8e-958f-0a78a0d6… │
│ cfv60vp32ckl0mq6b38g ┆ c001-b8-n06 ┆ Raspberry Pi ┆ 4B Rev 1.4 ┆ c55f0ba1-359c-5cda-a2d2-c6601428… │
│ cfv624132ckl0mq6bbi0 ┆ c001-b8-n07 ┆ Raspberry Pi ┆ 4B Rev 1.4 ┆ 6af5f0a6-af50-57ce-ba78-9435e223… │
│ cfv623h32ckl0mq6bbc0 ┆ c001-b8-n08 ┆ Raspberry Pi ┆ 4B Rev 1.4 ┆ e430593d-36ce-5f26-9c14-9d6ccaeb… │
│ cfv634h32ckl0mq6bjbg ┆ c001-b8-n09 ┆ Raspberry Pi ┆ 4B Rev 1.4 ┆ 0313d666-d3ed-5656-8468-0ceb1417… │
│ ch2ql7p32ckj9ndqd200 ┆ c006-n1 ┆ NVIDIA ┆ Jetson Nano ┆ 185b8d7c-5c4a-5f63-86d9-d80bbf72… │
│ chai28932ckjgou9an3g ┆ c006-n2 ┆ NVIDIA ┆ Jetson Nano ┆ 8ed6b72b-9056-5f3e-8ef4-9a1e537d… │
│ cht7np132ckk7b39k6o0 ┆ seeed-0 ┆ NVIDIA ┆ null ┆ 4791cb95-1f40-547f-be80-af538afa… │
│ cht7nv932ckk7b39k8e0 ┆ seeed-1 ┆ NVIDIA ┆ null ┆ a9514069-59a1-5301-83e1-51b493fa… │
│ cht7o4h32ckk7b39k9tg ┆ seeed-2 ┆ NVIDIA ┆ null ┆ d238d655-3f1e-56db-bc2d-563e57c5… │
│ cht7o9932ckk7b39kb7g ┆ seeed-3 ┆ NVIDIA ┆ null ┆ b3bd7b4f-43b9-5c72-bc83-b7ce6874… │
│ cjprdbc3v1vsmvpqo980 ┆ c006-n1 ┆ NVIDIA ┆ Jetson Nano ┆ 185b8d7c-5c4a-5f63-86d9-d80bbf72… │
│ cjprdt43v1vsmvpqoaug ┆ c006-n2 ┆ NVIDIA ┆ Jetson Nano ┆ 8ed6b72b-9056-5f3e-8ef4-9a1e537d… │
└──────────────────────┴───────────────┴──────────────┴─────────────┴───────────────────────────────────┘
Edge Agent
The primary component is the Agent, a lean, Rust-based software agent purpose-built to run low-latency lambdas for inference and signal processing on resource-constrained hardware, where insights can be used to inform control systems.
It embeds a Lambda service with support for Lua and WASM language runtimes, and includes drivers for interacting with common IoT busses. These Kubernetes-style lambda control loops run autonomously at the edge, while the agent can optionally phone home to the control plane when connectivity permits. When connected, it downloads new configuration bundles and opens a reverse tunnel so that you can manage it via the cloud console.
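For a feel for the programming model, here is a minimal sketch of such a lambda, written in Lua against the embedded periphery driver (the GPIO chip path and line number are illustrative; the full structure of a lambda is covered under Lambdas later in this guide).
local GPIO = require('periphery.GPIO')

local M = {}
function M.handler(event, context)
  -- open an illustrative input line using the embedded periphery driver
  local gpio = GPIO({path = '/dev/gpiochip0', line = 23, direction = 'in'})
  -- sample the current level so downstream control logic can act on it
  local level = gpio:read()
  gpio:close()
  return {line = 23, level = level}
end
return M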
Agent
The On Prem Agent is installed wherever you want to run lambdas.
Agent Installation
Debian
Installation on Debian-based operating systems, which include Raspberry Pi OS, involves the following steps:
- Register our APT repository's public key
- Register our APT repository
- Refresh your packages index
- Install our agent
- Tell systemd to start our agent
- Tell systemd to enable our agent, ensuring it gets started after reboots
wget -qO - https://apt.on-prem.net/public.key | sudo tee /etc/apt/trusted.gpg.d/on-prem.asc
VERSION_CODENAME=$(grep "VERSION_CODENAME=" /etc/os-release | awk -F= '{print $2}' | sed 's/"//g')
echo "deb https://apt.on-prem.net/ ${VERSION_CODENAME} main" | sudo tee /etc/apt/sources.list.d/on-prem.list
sudo apt-get update
sudo apt-get -y install on-prem-agent
sudo systemctl start on-prem-agent
sudo systemctl enable on-prem-agent
Docker
Our default image shown below is a multi-architecture manifest that should automatically provide you with an image compatible with your current hardware architecture.
docker pull onpremnet/agent
Run Interactively
Running interactively is the most efficient way to test new configurations. If anything doesn't work, just Ctrl+C, make some tweaks, then try again.
docker run -e 'API_KEY=__PASTE_YOUR_API_KEY__' -it onpremnet/agent
Run as a Daemon
docker run -e 'API_KEY=__PASTE_YOUR_API_KEY__' -d onpremnet/agent
Kubernetes
Run as a DaemonSet on every node
Create the following file, using your API Key:
# daemonset.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: on-prem-agent
  #namespace: default
  labels:
    app: on-prem-agent
spec:
  selector:
    matchLabels:
      app: on-prem-agent
  template:
    metadata:
      labels:
        app: on-prem-agent
    spec:
      tolerations:
        # this toleration is to have the daemonset runnable on master nodes
        # remove it if your masters can't run pods
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: on-prem-agent
          image: onpremnet/agent:latest
          env:
            - name: API_KEY
              value: __PASTE_YOUR_API_KEY__
          resources:
            limits:
              memory: 200Mi
            requests:
              cpu: 100m
              memory: 100Mi
      terminationGracePeriodSeconds: 30
And then apply it to your cluster:
kubectl apply -f daemonset.yaml
Getting Started with the Agent
Provision an API Key
Next, use our cloud console to provision an API Key, which the agent will use to authenticate with the API service.
Configure agent with your API Key
Edit the agent config file to set your API key. The agent automatically detects when you save a change to this file (much like kubelet, if you're familiar with Kubernetes) and then goes through a full startup.
$ sudo vi /etc/on-prem/agent.yml
...
api_key: c1h0....6vlg
Monitor agent log
To see what the agent is doing, you can monitor its log as follows. You can also add a "debug: true" entry to the config file to increase the logging verbosity.
$ sudo journalctl -u on-prem-agent -f
May 01 16:50:22 c002-n1 systemd[1]: Started On-Prem Agent.
May 01 16:50:22 c002-n1 on-prem-agent[675]: [INFO on_prem_agent] On Prem Agent 1.4.2
May 01 16:50:22 c002-n1 on-prem-agent[675]: [INFO on_prem_agent::configdb::jamm] Opened config db /var/lib/on-prem/agent-config.db
May 01 16:50:26 c002-n1 on-prem-agent[675]: [INFO on_prem_agent::datadb::sled] Opened data db /var/lib/on-prem/agent-data.db
May 01 16:50:26 c002-n1 on-prem-agent[675]: [INFO on_prem_agent] Connected to https://api.on-prem.net
CLI
The CLI provides convenient access to the On Prem control plane (API service), and includes typical CLI conveniences such as caching of credentials.
CLI Installation
Debian
Installation on Debian-based operating systems, which include Raspberry Pi OS, involves the following steps:
- Register our APT repository's public key
- Register our APT repository
- Refresh your packages index
- Install the CLI
wget -qO - https://apt.onprem.net/public.key | sudo tee /etc/apt/trusted.gpg.d/onprem.asc
VERSION_CODENAME=$(grep "VERSION_CODENAME=" /etc/os-release | awk -F= '{print $2}' | sed 's/"//g')
echo "deb https://apt.onprem.net/ ${VERSION_CODENAME} main" | sudo tee /etc/apt/sources.list.d/onprem.list
sudo apt-get update
sudo apt-get -y install onprem-cli
Docker
On Prem Docker images are multi-platform images that will automatically provide you with an image compatible with your current hardware architecture.
$ docker pull onpremnet/cli
Run Interactively
$ docker run -it onpremnet/cli --help
USAGE:
    onprem [OPTIONS] <SUBCOMMAND>

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information

OPTIONS:
        --api-key <api-key>    API Key used to authorize requests
        --api-url <api-url>    Customize the API URL

SUBCOMMANDS:
    help      Prints this message or the help of the given subcommand(s)
    import
    login
    logout
How to provide authorization
$ docker run -it onpremnet/cli --api-key __REDACTED__ ...command...
CLI Usage
Getting Help
$ onprem
USAGE:
    onprem [OPTIONS] <SUBCOMMAND>

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information

OPTIONS:
        --api-key <api-key>    API Key used to authorize requests
        --api-url <api-url>    Customize the API URL

SUBCOMMANDS:
    help      Prints this message or the help of the given subcommand(s)
    apply
    get
    list
    login
    logout
    ...
Commands
Logging In
Use the On Prem Console to provision an API Key, which the CLI can use to authenticate with the API service.
$ onprem --api-key __REDACTED__ login
API Key written to ~/.on-prem/config
Apply Command
The apply command focuses on uploading records that are defined locally in JSON or YAML files and synchronizing them to the control plane. This enables bootstrapping of the control plane using GitOps. With this in mind, operations focus on idempotency, so that running any given operation a second time is harmless and simply back-fills or resumes where a prior attempt left off.
Run Curl
The curl command offers a curl-compatible CLI interface. Operations are performed on remote agents.
graph LR; cli --> control_plane; control_plane <-- tunnel --> agent; subgraph user_edge[User Edge] cli[CLI]; end subgraph cloud[Cloud <small>api.on-prem.net</small>] control_plane[Control Plane]; end subgraph device_edge[Device Edge] agent[Agent]; end
Download a file
$ onprem curl --device cibiquh32ckn0os7791g --fail -O https://apt.onprem.net/public.key
File written to "public.key"
Get a single record
The get command provides read access to a single record. Type the get command without any additional parameters for the comprehensive list of record types:
$ onprem get
Usage: onprem get <COMMAND>

Commands:
  api-key
  device
  ...
Output Formats
Supported output formats include arrow, json, markdown, ps, and wide. The ps and wide format definitions are borrowed from kubectl.
Examples
Get a device
$ onprem get device c6uuol7qrh9u4hh2bo60
┌──────────────────────┬─────────┬──────────────┬─────────────┬───────────────────────────────────┬─────────────────┬─────────┐
│ id ┆ name ┆ manufacturer ┆ model ┆ uuid ┆ lastIpAddr ┆ tainted │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ str ┆ str ┆ str ┆ str ┆ str ┆ bool │
╞══════════════════════╪═════════╪══════════════╪═════════════╪═══════════════════════════════════╪═════════════════╪═════════╡
│ chai28932ckjgou9an3g ┆ c006-n2 ┆ NVIDIA ┆ Jetson Nano ┆ 8ed6b72b-9056-5f3e-8ef4-9a1e537d… ┆ 192.168.241.162 ┆ false │
└──────────────────────┴─────────┴──────────────┴─────────────┴───────────────────────────────────┴─────────────────┴─────────┘
Get a device as JSON
$ onprem get device c6uuol7qrh9u4hh2bo60 -o json > mydevice.json
Detect I²C Devices
The i2cdetect command asks a remote device to discover information about its I²C bus. The On Prem Agent performs this operation directly using embedded logic, so operators do not have to install i2c-tools on the target device.
graph LR; cli --> control_plane; control_plane <-- tunnel --> agent; subgraph user_edge[User Edge] cli[CLI]; end subgraph cloud[Cloud <small>api.on-prem.net</small>] control_plane[Control Plane]; end subgraph device_edge[Device Edge] agent[Agent]; end
List Components
This example is run against a Raspberry Pi carrying an Argon 40 Fan HAT.
$ onprem i2cdetect --device cibiquh32ckn0os7791g -y 1
0 1 2 3 4 5 6 7 8 9 a b c d e f
00: -- -- -- -- -- -- -- -- -- -- -- -- --
10: -- -- -- -- -- -- -- -- -- -- 1a -- -- -- -- --
20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
70: -- -- -- -- -- -- -- --
List multiple records
The list command provides read access to multiple records. Type the list command without any additional parameters for the comprehensive list of record types:
$ onprem list
Usage: onprem list <COMMAND>

Commands:
  api-keys
  devices
  lambdas
  ...
Output Formats
Supported output formats include arrow, json, markdown, ps, and wide. The ps and wide format definitions are borrowed from kubectl.
Examples
List devices
$ onprem list devices
┌──────────────────────┬───────────────┬──────────────┬─────────────┬──────────┬───────────────────────────────────┐
│ id ┆ name ┆ manufacturer ┆ model ┆ assetTag ┆ uuid │
│ --- ┆ --- ┆ --- ┆ --- ┆ --- ┆ --- │
│ str ┆ str ┆ str ┆ str ┆ str ┆ str │
╞══════════════════════╪═══════════════╪══════════════╪═════════════╪══════════╪═══════════════════════════════════╡
│ c6uso9fqrh9u4hh2bnng ┆ c003-n4 ┆ Raspberry Pi ┆ 4B Rev 1.4 ┆ def ┆ e3f42058-9100-5b9b-ba4e-0b81f98f… │
│ c6ut2ffqrh9u4hh2bnqg ┆ c003-n1 ┆ Raspberry Pi ┆ 4B Rev 1.4 ┆ null ┆ 02da31b3-6f0e-5558-bf9f-930b7ed0… │
│ cft7kep32ckrvnbjkt7g ┆ c003-n3 ┆ Raspberry Pi ┆ 4B Rev 1.2 ┆ null ┆ 0c9af28b-bc4f-571f-9c7a-5240f475… │
│ ch2ql7p32ckj9ndqd200 ┆ c006-n1 ┆ NVIDIA ┆ Jetson Nano ┆ null ┆ 185b8d7c-5c4a-5f63-86d9-d80bbf72… │
│ chai28932ckjgou9an3g ┆ c006-n2 ┆ NVIDIA ┆ Jetson Nano ┆ null ┆ 8ed6b72b-9056-5f3e-8ef4-9a1e537d… │
│ ci2fabp32ckvhk1g9qe0 ┆ bitscope-0 ┆ Raspberry Pi ┆ 4B Rev 1.4 ┆ null ┆ 070b9dbf-437a-59e2-b84d-bcabaa3d… │
└──────────────────────┴───────────────┴──────────────┴─────────────┴──────────┴───────────────────────────────────┘
List devices in Arrow Format
$ onprem list devices -o arrow > mydevices.arrow
Run a lambda
The run lambda command invokes an existing lambda. Lambdas run on a remote device where the On Prem Agent is running. You may provide a --device option to specify where the lambda is run.
Note that manually invoking a lambda on a device requires connectivity, and might be something you're only able to do during the factory burn-in phase of your project. In air-gapped environments, lambdas can work on high-throughput, low-latency signals without any need for network connectivity.
Running on a device
graph LR; cli --> control_plane; control_plane <-- tunnel --> agent; agent --> lua_vm; subgraph user_edge[User Edge] cli[CLI]; end subgraph cloud[Cloud: api.on-prem.net] control_plane[Control Plane]; end subgraph device_edge[Device Edge] agent[Agent]; lua_vm[Embedded Lua VM]; end
Register the lambda
# mylambda.yaml
id: abcd
name: example
description: >
  A simple example that performs a transformation on the input event.
scriptContentType: text/x-lua
script: >
  local M = {}
  function M.handler(event, context)
    local retval = event
    if event['a'] ~= nil then
      retval['d'] = event['a'] + 1
    end
    return retval
  end
  return M
$ onprem import lambdas mylambda.yaml
Run it
$ onprem run lambda abcd --device ci2fabp32ckvhk1g9qe0 --event '{"a":123,"b":true,"c":"dog"}'
{"a":123,"b":true,"c":"dog","d":124}
Lambdas
Lambdas provide a way to run custom code that reacts to certain events. These might include:
- IoT Bus events such as GPIO edge triggers
- network events like Kafka or Redis subscriptions
When creating a new Lambda in the web console, a number of templates are offered as starting points.
Lambdas are written in Lua for over-the-air deployability. That Lua will typically orchestrate native low-level modules that are either built into the agent (to support networking, storage, and IoT busses) or supplied by WASM modules that you provide.
Running on a device
Lambdas are deployed as part of an agent's sealed configuration bundle, and are then able to run autonomously to perform workloads such as ETL or control functions without the need for cloud connectivity.
graph LR; agent --> lambdas; lambdas --> databases; lambdas --> services; lambdas --> busses; subgraph device_edge[Device Edge] agent; databases[(Edge Databases)]; services[Network Services]; busses[IoT Busses]; end subgraph agent[Agent] lambdas[Lambdas]; end
Structure of a Lambda
Lambdas are AWS Lambda-compatible Lua module tables that, at a minimum, must include a handler() function.
local M = {}
function M.handler(event, context)
  print('event has fired')
  return {
    -- SampleKey = 'sample value'
  }
end
return M
In the following example, a Lambda takes a picture using a Raspberry Pi camera (this lambda is shown in full in the Take a Photo example later in this guide).
While testing, Lambdas can be run manually via the CLI from a workstation at the developer edge:
$ onprem run lambda clv3b1c3v1vsk4qabftg --event-data-to-file out.jpeg
Wrote event[data] to out.jpeg (940.9K)
Embedded Modules
The following Lua modules are embedded in the On Prem platform and available for use by Lambdas and Lambda Triggers; a short usage sketch follows the table.
| Module | Compute Kernel | LuaRocks Compatibles | Features |
|---|---|---|---|
| crc16 | mlua-crc16 (Rust) | luacrc16 | Checksums |
| inspect | (pure Lua) | inspect | Stringify a Lua variable while debugging |
| json | mlua-json (Rust) | lua-cjson, lunajson | JSON serde support |
| kafka | mlua-kafka (Rust) | kafka | Simple Kafka client (⛔︎ unavailable on armv7) |
| periphery | mlua-periphery (Rust) | lua-periphery | Peripheral I/O |
| rdkafka | mlua-rdkafka (Rust) | | Robust Kafka client (⛔︎ unavailable on armv7) |
| socket | mlua-socket (Rust) | LuaSocket | Networking |
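As a brief illustration, the following sketch uses the embedded json and inspect modules; it assumes the json module exposes a lua-cjson-style encode/decode API and that inspect returns a printable string, as the LuaRocks compatibility column above suggests.
local json = require('json')
local inspect = require('inspect')

local M = {}
function M.handler(event, context)
  -- stringify the incoming event while debugging
  print(inspect(event))
  -- round-trip the event through JSON (assumes a lua-cjson-style API)
  local encoded = json.encode(event)
  return json.decode(encoded)
end
return M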
Lambda Examples
- Call WASM
- Include model files
- Interact with I²C Bus
- Interact with Serial Bus
- Take a Photo (Raspberry Pi)
- Toggle an LED
Call WASM
A lambda may bundle one or more associated WASM modules that will be deployed to the agent alongside the lambda. WASM modules may include functions like machine learning or tensor models that are easier to express in a language such as Rust.
Rust-based add_one() function
This Rust-based WASM module exports an add_one() function. The project consists of the following files:

add_one.yaml
add_one.lua
Cargo.toml
src/
  lib.rs
lib.rs:
#[no_mangle]
pub extern "C" fn add_one(x: i32) -> i32 {
    x + 1
}
Cargo.toml:
[package]
name = "add-one"
version = "0.1.0"
edition = "2018"
[lib]
crate-type = ["cdylib"]
Build the WASM module:
$ rustup target add wasm32-unknown-unknown
$ cargo build --target wasm32-unknown-unknown --release
Register the lambda
$ onprem generate xid
cmv805656a11vubtf6pg
# add_one.yaml
id: cmv805656a11vubtf6pg
kind: Lambda
name: add_one
description: >
  A lambda that calls a WASM function to add one.
runAt:
  deviceId: ci2fabp32ckvhk1g9qe0
fileInfoIds:
  - "@target/wasm32-unknown-unknown/release/add_one.wasm"
scriptContentType: text/x-lua
script: "@add_one.lua"
-- add_one.lua
local wasmer = require('wasmer')
local store = wasmer.Store:default()

local M = {}
function M.handler(event, context)
  local path = context.pathForFilename('add_one.wasm')
  local f = io.open(path, 'rb')
  local binary = f:read('*a')
  f:close()
  local module = wasmer.Module:from_binary(store, binary)
  local instance = wasmer.Instance:new(store, module)
  local add_one = instance.exports:get_function('add_one')
  return add_one:call(store, event)
end
return M
$ onprem apply add_one.yaml
If the agent is connected to the control plane, it will download its new config bundle, containing the new lambda and associated files, within a few seconds.
View Lambda in the Console
Notice the lambda now appears in the cloud console, and that it contains the associated WASM file add_one.wasm.
Run the Lambda
$ onprem run lambda cmv805656a11vubtf6pg --event '123'
124
Include model files
A lambda may define associated files that will be deployed to the agent alongside the lambda. Files may be things like machine learning or tensor models.
Register the lambda
lambda_that_includes_model_files.yaml
models/
  currencies.csv
  timezones.csv
  keras-tract-tf2-example.onnx
$ onprem generate xid
cmpsf7656a16efdhjrf0
# lambda_that_includes_model_files.yaml
id: cmpsf7656a16efdhjrf0
kind: Lambda
name: lambda_that_includes_model_files
description: >
  A lambda that uses associated model files.
runAt:
  deviceId: ci2fabp32ckvhk1g9qe0
fileInfoIds:
  - "@models/currencies.csv"
  - "@models/timezones.csv"
  - "@models/keras-tract-tf2-example.onnx"
scriptContentType: text/x-lua
script: >
  local M = {}
  function M.handler(event, context)
    local path = context.pathForFilename('currencies.csv')
    local currencies = io.open(path, 'r')
    -- TODO: read and use the associated model files here
  end
  return M
$ onprem apply lambda_that_includes_model_files.yaml
Interact with I²C Bus
This lambda reads the input voltage on a SixFab UPS Hat. The On Prem CLI is used to demonstrate manually triggering the lambda and taking delivery of the event JSON using a remote desktop.
graph LR; cli --> control_plane; control_plane <-- tunnel --> agent; agent -- i2c --> hat; subgraph user_edge[User Edge] cli[CLI]; end subgraph cloud[Cloud <small>api.on-prem.net</small>] control_plane[Control Plane]; end subgraph device_edge[Device Edge] agent[Agent]; hat[SixFab UPS Fan HAT]; end
Note that manually triggering a lambda is unusual in that it requires device connectivity to the control plane. A more typical scenario is where Lambdas and their Lambda Trigger control loops run autonomously at the device edge, regardless of the device's connectivity to the control plane.
Register the lambda
$ onprem generate xid
clut5qm56a1d39be96j0
# get_sixfab_ups_hat_input_voltage.yaml
name: get_sixfab_ups_hat_input_voltage
kind: Lambda
id: clut5qm56a1d39be96j0
description: >
  Read the input voltage on a SixFab UPS HAT.
runAt:
  deviceId: ci2fabp32ckvhk1g9qe0
scriptContentType: text/x-lua
script: >
  local socket = require('socket')
  local I2C = require('periphery.I2C')
  function lshift(a, b)
    return a * 2 ^ b
  end
  local M = {}
  function M.handler(event, context)
    local i2c = I2C('/dev/i2c-1')
    local addr = 0x41
    -- send GetInputVoltage (0x02) command
    local req = {0xcd, 0x02, 0x01, 0x00, 0x00, 0xc8, 0x9a}
    i2c:transfer(addr, { req })
    -- wait for HAT to prepare response
    socket.sleep(0.01)
    -- read response
    local res = {0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, flags=I2C.I2C_M_RD}
    i2c:transfer(addr, { res })
    -- decode response
    local crc_hi, crc_lo = res[2], res[3]
    local crc = lshift(crc_hi, 8) + crc_lo
    local datalen_hi, datalen_lo = res[4], res[5]
    local datalen = lshift(datalen_hi, 8) + datalen_lo
    assert(datalen == 4)
    local x3, x2, x1, x0 = res[6], res[7], res[8], res[9]
    local raw_reading = lshift(x3, 24) + lshift(x2, 16) + lshift(x1, 8) + x0
    local voltage = raw_reading / 1000
    -- respond
    return {voltage=voltage, rawReading=raw_reading, crc=crc}
  end
  return M
$ onprem apply get_sixfab_ups_hat_input_voltage.yaml
It will now show up in the cloud console.
Invoke it
$ onprem run lambda clut5qm56a1d39be96j0
{"crc":514,"rawReading":4928,"voltage":4.928}
Interact with Serial Bus
This lambda demonstrates use of a serial bus by cycling the power of a node mounted on a BitScope Cluster Blade.
Cycling the power is done using the BMC from a 2nd node. In this diagram, Device 0 is shown being used to manage the power of Devices 1, 2, or 3.
graph LR; cli --> control_plane; control_plane <-- tunnel --> agent0; agent0 --> bmc; bmc --> device0; bmc --> device1; bmc --> device2; bmc --> device3; subgraph user_edge[User Edge] cli[CLI]; end subgraph cloud[Cloud <small>api.on-prem.net</small>] control_plane[Control Plane]; end subgraph device_edge[Device Edge] blade; end subgraph blade[CB04B Cluster Blade] bmc; device0; device1; device2; device3; end subgraph device0[Device 0, station 124] agent0[Agent]; end subgraph device1[Device 1, station 125] agent1[Agent]; end subgraph device2[Device 2, station 126] agent2[Agent]; end subgraph device3[Device 3, station 127] agent3[Agent]; end
Register the lambda
$ onprem generate xid
cj7db73erad7pt31vcng
# bitscope_cycle_power.yaml
id: cj7db73erad7pt31vcng
kind: Lambda
name: bitscope_cycle_power
description: >
  Cycle the power of a node mounted on a BitScope Cluster Blade.
runAt:
  # Run at Device 0 (station 124)
  deviceId: ci2fabp32ckvhk1g9qe0
scriptContentType: text/x-lua
script: >
  local socket = require('socket')
  local Serial = require('periphery.Serial')
  function write_command(serial, command, timeout_ms)
    for i = 1, #command do
      local c = command:sub(i, i)
      serial:write(c)
      assert(serial:read(1, timeout_ms) == c)
    end
  end
  function set_remote_power(serial, station, value, timeout_ms)
    assert(type(station) == 'number')
    assert(type(value) == 'boolean')
    local command = string.format("[%2x]{", station) .. (value and '/' or '\\') .. '}'
    write_command(serial, command, timeout_ms)
  end
  local M = {}
  function M.handler(event, context)
    assert(type(event.device) == 'string')
    assert(type(event.baudrate) == 'number')
    local serial = Serial(event.device, event.baudrate)
    local timeout_ms = event.timeout or 50
    -- send remote power off command
    set_remote_power(serial, event.station, false, timeout_ms)
    -- wait a bit
    socket.sleep(0.25)
    -- send remote power on command
    set_remote_power(serial, event.station, true, timeout_ms)
    -- respond
    return {station=event.station, ok=true}
  end
  return M
$ onprem apply bitscope_cycle_power.yaml
It will now show up in the cloud console.
Run it
$ onprem run lambda cj7db73erad7pt31vcng --event '{"station":126,"device":"/dev/serial0","baudrate":115200}'
{"station":126,"ok":true}
Take a Photo (Raspberry Pi)
This lambda captures a still frame using a Raspberry Pi camera on a remote edge device. The On Prem CLI is used to demonstrate manually triggering the lambda and taking delivery of the image using a remote desktop.
graph LR; cli --> control_plane; control_plane <-- tunnel --> agent; agent --> camera; subgraph user_edge[User Edge] cli[CLI]; end subgraph cloud[Cloud <small>api.on-prem.net</small>] control_plane[Control Plane]; end subgraph device_edge[Device Edge] agent[Agent]; camera[Camera]; end
Note that manually triggering a lambda is unusual in that it requires device connectivity to the control plane. A more typical scenario is where Lambdas and their Lambda Trigger control loops run autonomously at the device edge, regardless of the device's connectivity to the control plane.
Register the lambda
$ onprem generate xid
clut5qm56a1d39be96j0
# capture_image_rpi.yaml
id: clut5qm56a1d39be96j0
kind: Lambda
name: capture_image_rpi
description: >
  Capture a still frame from a Raspberry Pi camera.
runAt:
  deviceId: ci2fabp32ckvhk1g9qe0
scriptContentType: text/x-lua
script: >
  local M = {}
  function M.handler(event, context)
    local filename = os.tmpname()
    os.execute('rpicam-jpeg --nopreview -o ' .. filename)
    local file = io.open(filename, 'rb')
    local event = {
      data = file:read('*a'),
    }
    file:close()
    os.remove(filename)
    return event
  end
  return M
$ onprem apply capture_image_rpi.yaml
It will now show up in the cloud console.
Invoke it
The CLI can display the returned event as JSON, which is inefficient:
$ onprem run lambda clut5qm56a1d39be96j0
{"data":[...much raw data...]}
But it can also pluck the data field out of the returned event, which is efficient and does not involve JSON parsing, because agents return event data fields separately for efficient transport encoding. If the returned event contains any other fields, they are displayed on stdout, while status messages are written to stderr.
$ onprem run lambda clut5qm56a1d39be96j0 --event-data-to-file out.jpeg > event.json
Wrote event[data] to out.jpeg (940.9K)
$ cat event.json
{}
$ open out.jpeg
Toggle an LED
This lambda toggles the state of led0 on a Raspberry Pi. The On Prem CLI is used to demonstrate manually triggering the lambda and taking delivery of the event JSON using a remote desktop.
graph LR; cli --> control_plane; control_plane <-- tunnel --> agent; agent --> led; subgraph user_edge[User Edge] cli[CLI]; end subgraph cloud[Cloud <small>api.on-prem.net</small>] control_plane[Control Plane]; end subgraph device_edge[Device Edge] agent[Agent]; led[LED]; end
Note that manually triggering a lambda is unusual in that it requires device connectivity to the control plane. A more typical scenario is where Lambdas and their Lambda Trigger control loops run autonomously at the device edge, regardless of the device's connectivity to the control plane.
Setup
Start by disabling the default triggering for led0, so that you can use it for your own purposes.
$ echo none | sudo tee /sys/class/leds/led0/trigger
Register the lambda
$ onprem generate xid
clutm1e56a1dn0f2p4dg
# toggle_led0.yaml
id: clutm1e56a1dn0f2p4dg
kind: Lambda
name: toggle_led0
description: >
  Toggle led0.
runAt:
  deviceId: ci2fabp32ckvhk1g9qe0
scriptContentType: text/x-lua
script: >
  local LED = require('periphery.LED')
  local M = {}
  function M.handler(event, context)
    local led = LED('led0')
    local currentValue = led:read()
    local newValue = not currentValue
    led:write(newValue)
    return {currentValue=currentValue, newValue=newValue}
  end
  return M
$ onprem apply toggle_led0.yaml
It will now show up in the cloud console.
Run it twice
$ onprem run lambda clutm1e56a1dn0f2p4dg
{"currentValue":true,"newValue":false}
$ onprem run lambda clutm1e56a1dn0f2p4dg
{"currentValue":false,"newValue":true}
Cleanup
Restore the default triggering for led0 with:
$ echo mmc0 | sudo tee /sys/class/leds/led0/trigger
Lambda Triggers
Lambda Triggers provide a way to generate events that Lambdas can respond to. Every Lambda Trigger is expected to run a control loop, and is given a dedicated thread in the agent.
When creating a new Lambda Trigger in the web console, a number of templates are offered as starting points.
Structure of a Lambda Trigger
A Lambda Trigger contains an initialization function (init), which can be used to perform resource allocations. A context object is made available and can be used for temporary storage. A Lambda Trigger then provides a run function, which emits events via coroutine.yield(); those events are delivered to the Lambdas associated with the trigger.
local redis = require('redis')
local socket = require('socket')

local M = {}

function M.init(context, params)
  local redis_client = redis.connect('my-redis', 6379)
  assert(redis_client:ping())
  context['redis_client'] = redis_client
end

function M.run(context)
  local redis_client = context.redis_client
  local channels = {'foo', 'bar'}
  for msg, abort in redis_client:pubsub({subscribe=channels}) do
    local event = {
      timestamp = socket.gettime(),
      msg = msg
    }
    coroutine.yield(event)
  end
end

return M
Lambda Trigger Examples
Periodic
This example demonstrates defining a custom Lambda Trigger that can periodically trigger Lambdas.
Initial Provisioning
graph LR; cli --> control_plane; console --> control_plane; control_plane <-- tunnel --> agent; subgraph user_edge[User Edge] cli[CLI]; console[Console]; end subgraph cloud[Cloud <small>api.on-prem.net</small>] control_plane[Control Plane]; end subgraph device_edge[Device Edge] agent[Agent]; end
Subsequent Autonomous Edge Operation
graph LR; agent --> trigger[Lambda Trigger]; trigger --> lambda1; trigger --> lambda2; trigger --> lambda3; subgraph device_edge[Device Edge] agent; end subgraph agent[Agent] trigger; lambda1[Lambda 1]; lambda2[Lambda 2]; lambda3[Lambda 3]; end
Define the lambda trigger
$ onprem generate xid
cj7br83erad4ipi8nb4g
# my_periodic_lambda_trigger_type.yaml
id: cj7br83erad4ipi8nb4g
kind: LambdaTriggerType
name: periodic_trigger
description: >
  Periodically trigger lambdas.
runsAtControlPlane: true
runsAtDevices: true
scriptContentType: text/x-lua
script: >
  local socket = require('socket')
  local M = {}
  function M.init(context, params)
  end
  function M.run(context)
    while true do
      local event = {
        timestamp = socket.gettime()
      }
      coroutine.yield(event)
      socket.sleep(0.5) -- sleep for 1/2 second
    end
  end
  return M
Upload it to the control plane
$ onprem apply my_periodic_lambda_trigger_type.yaml
It will now show up in the cloud console.
And it will also now show up as one of the trigger choices when editing a Lambda.
GPIO Edge Trigger
This example demonstrates defining a custom Lambda Trigger that subscribes to GPIO edge events via the Linux kernel. It then demonstrates various Lambdas that respond to it and perform various functions.
Initial Provisioning
graph LR; cli --> control_plane; console --> control_plane; control_plane <-- tunnel --> agent; subgraph user_edge[User Edge] cli[CLI]; console[Console]; end subgraph cloud[Cloud <small>api.on-prem.net</small>] control_plane[Control Plane]; end subgraph device_edge[Device Edge] agent[Agent]; end
Subsequent Autonomous Edge Operation
graph TB; agent --> trigger[Lambda Trigger]; pin -- edge trigger --> trigger; trigger --> lambda1; trigger --> lambda2; trigger --> lambda3; subgraph device_edge[Device Edge] agent; pin[GPIO Pin]; end subgraph agent[Agent] trigger; lambda1[Lambda 1]; lambda2[Lambda 2]; lambda3[Lambda 3]; end
Define the lambda trigger
$ onprem generate xid
cj7ca3berad6gieb3rbg
# my_gpio_trigger_type.yaml
id: cj7ca3berad6gieb3rbg
kind: LambdaTriggerType
name: gpio_trigger
description: >
  Trigger lambdas when a GPIO edge event occurs.
runsAtControlPlane: false
runsAtDevices: true
scriptContentType: text/x-lua
script: >
  local GPIO = require('periphery.GPIO')
  local socket = require('socket')
  local M = {}
  function M.init(context)
    local params = {
      path = '/dev/gpiochip0',
      line = 23,
      direction = 'in',
      edge = 'both',
    }
    local gpio = GPIO(params)
    context['gpio'] = gpio
  end
  function M.run(context)
    local gpio = context.gpio
    while true do
      local event = gpio:read_event()
      coroutine.yield(event)
      socket.sleep(0.005)
    end
  end
  return M
The sleep used above is precautionary but unnecessary when performing a blocking call such as read_event(). Each Lambda Trigger loop runs in a dedicated thread, and run loops are free to peg the CPU of a single core if they want.
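For comparison, a minimal sketch of the same run function that relies purely on the blocking read_event() call (dropping into the trigger module above in place of M.run) would be:
function M.run(context)
  local gpio = context.gpio
  while true do
    -- read_event() blocks until the kernel reports the next edge event
    local event = gpio:read_event()
    coroutine.yield(event)
  end
end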
Upload it to the control plane
$ onprem apply ./my_gpio_trigger_type.yaml
It will now show up in the cloud console.
And it will also now show up as one of the trigger choices when editing a Lambda.
Lambda Example 1: Configure an LED to follow the GPIO pin
$ onprem generate xid
cj7co6jerad78frcc100
# follow_gpio23_with_led0.yaml
id: cj7co6jerad78frcc100
kind: Lambda
name: follow_gpio23_with_led0
description: >
  Follow GPIO pin 23 and display with led0.
runAt:
  deviceId: ci2fabp32ckvhk1g9qe0
  triggerTypeId: cj7ca3berad6gieb3rbg
scriptContentType: text/x-lua
script: >
  local LED = require('periphery.LED')
  local led = LED('led0')
  local M = {}
  function M.handler(event, context)
    local newValue = false
    if event.edge == 'rising' then
      newValue = true
    end
    led:write(newValue)
    return {edge=event.edge, timestamp=event.timestamp}
  end
  return M
$ onprem apply ./follow_gpio23_with_led0.yaml
Lambda Example 2: Aggregate the GPIO events in Redis
$ onprem generate xid
clv0p4u56a1fjkem7h9g
# follow_gpio23_and_aggregate_in_redis.yaml
id: clv0p4u56a1fjkem7h9g
kind: Lambda
name: follow_gpio23_and_aggregate_in_redis
description: >
  Follow GPIO pin 23 and aggregate events in Redis.
runAt:
  deviceId: ci2fabp32ckvhk1g9qe0
  triggerTypeId: cj7ca3berad6gieb3rbg
scriptContentType: text/x-lua
script: >
  local redis = require('redis')
  local redisClient = redis.connect('my-redis', 6379)
  assert(redisClient:ping())
  local M = {}
  function M.handler(event, context)
    redisClient:pipeline(function(pipeline)
      -- Count total events for all time
      pipeline:incrby('event_count', 1)
      -- Also count events per day
      pipeline:incrby('event_count_' .. os.date('%Y-%m-%d'), 1)
      -- Also count events per hour
      pipeline:incrby('event_count_' .. os.date('%Y-%m-%dT%H'), 1)
    end)
    return {edge=event.edge, timestamp=event.timestamp}
  end
  return M
$ onprem apply follow_gpio23_and_aggregate_in_redis.yaml
Kafka
This example demonstrates defining a custom Lambda Trigger that subscribes to a Kafka topic.
Initial Provisioning
graph LR; cli --> control_plane; console --> control_plane; control_plane <-- tunnel --> agent; subgraph user_edge[User Edge] cli[CLI]; console[Console]; end subgraph cloud[Cloud <small>api.on-prem.net</small>] control_plane[Control Plane]; end subgraph device_edge[Device Edge] agent[Agent]; end
Subsequent Autonomous Edge Operation
graph TB; agent --> trigger[Lambda Trigger]; trigger --> lambda1; trigger --> lambda2; trigger --> lambda3; kafka -- subscribe --> trigger; subgraph device_edge[Device Edge] agent; kafka[(Kafka)] end subgraph agent[Agent] trigger; lambda1[Lambda 1]; lambda2[Lambda 2]; lambda3[Lambda 3]; end
Define the lambda trigger
$ onprem generate xid
cj7ei4jerad89eavqu70
# kafka_trigger.yaml
id: cj7ei4jerad89eavqu70
kind: LambdaTriggerType
name: kafka_trigger
description: >
  Trigger lambdas driven by a Kafka subscription.
runsAtControlPlane: false
runsAtDevices: true
scriptContentType: text/x-lua
script: >
  local kafka = require('kafka')
  local settings = {
    ['bootstrap.servers'] = 'c001-b6-n3:9092,c001-b6-n4:9092,c001-b6-n5:9092',
    ['auto.offset.reset'] = 'latest',
    ['group.id'] = 'onprem.lambda-trigger.kafka_trigger',
  }
  local consumer = kafka.consumer(settings)
  local M = {}
  function M.init(context)
    consumer:subscribe('topic1', 'topic2', 'topic3')
    context['consumer'] = consumer
  end
  function M.run(context)
    local consumer = context.consumer
    while true do
      local message = consumer:poll(1000)
      if message then
        coroutine.yield(message)
      end
    end
  end
  return M
Upload it to the control plane
$ onprem apply kafka_trigger.yaml
It will now show up in the cloud console.
And it will also now show up as one of the trigger choices when editing a Lambda.
When a subscription yields a new message, it triggers the associated Lambdas with an event containing the following fields (a handler sketch follows the list):
- timestamp (number)
- topic (string)
- partition (number)
- offset (number)
- key (string)
- payload (string)
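A Lambda attached to this trigger receives those fields on its event argument. The sketch below is illustrative: it assumes the message payloads are JSON documents and that the embedded json module exposes a lua-cjson-style decode.
local json = require('json')

local M = {}
function M.handler(event, context)
  -- these fields are provided by the Kafka trigger above
  print(string.format('received offset %d from %s[%d]', event.offset, event.topic, event.partition))
  -- decode the payload (assumes it is a JSON document)
  local doc = json.decode(event.payload)
  return {topic = event.topic, key = event.key, decoded = doc}
end
return M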
Redis
This example demonstrates defining a custom Lambda Trigger that subscribes to a Redis pub+sub channel. It then demonstrates a Lambda that responds.
Initial Provisioning
graph LR; cli --> control_plane; console --> control_plane; control_plane <-- tunnel --> agent; subgraph user_edge[User Edge] cli[CLI]; console[Console]; end subgraph cloud[Cloud <small>api.on-prem.net</small>] control_plane[Control Plane]; end subgraph device_edge[Device Edge] agent[Agent]; end
Subsequent Autonomous Edge Operation
graph TB; agent --> trigger[Lambda Trigger]; trigger --> lambda1; trigger --> lambda2; trigger --> lambda3; redis -- subscribe --> trigger; subgraph device_edge[Device Edge] agent; redis[(Redis)] end subgraph agent[Agent] trigger; lambda1[Lambda 1]; lambda2[Lambda 2]; lambda3[Lambda 3]; end
Define the lambda trigger
$ onprem generate xid
ck86ttgdr07e6c71dnjg
# my_redis_subscribe.yaml
id: ck86ttgdr07e6c71dnjg
kind: LambdaTriggerType
name: my_redis_subscribe_trigger_type
description: >
  Trigger lambdas when a Redis pub+sub event fires.
runsAtControlPlane: false
runsAtDevices: true
scriptContentType: text/x-lua
script: >
  local redis = require('redis')
  local socket = require('socket')
  local M = {}
  function M.init(context)
    local redis_client = redis.connect('my-redis', 6379)
    assert(redis_client:ping())
    context['redis_client'] = redis_client
  end
  function M.run(context)
    local redis_client = context.redis_client
    local channels = {'foo', 'bar'}
    for msg, abort in redis_client:pubsub({subscribe=channels}) do
      local event = {
        timestamp = socket.gettime(),
        msg = msg
      }
      coroutine.yield(event)
    end
  end
  return M
Upload it to the control plane
$ onprem apply ./my_redis_subscribe.yaml
It will now show up in the cloud console.
And it will also now show up as one of the trigger choices when editing a Lambda.
Configure a Lambda to respond
$ onprem generate xid
ck871o8dr07ebbbn2h30
# respond_to_redis_events.yaml
id: ck871o8dr07ebbbn2h30
kind: Lambda
name: respond_to_redis_events
description: >
  Respond to events received from a Redis pub+sub channel.
runAt:
  deviceId: ci2fabp32ckvhk1g9qe0
  triggerTypeId: ck86ttgdr07e6c71dnjg
scriptContentType: text/x-lua
script: >
  local M = {}
  function M.handler(event, context)
    -- TODO do something with the event here
    return event
  end
  return M
$ onprem apply respond_to_redis_events.yaml
Manually Trigger
$ redis-cli publish foo 123
Continuous Video Frame Capture
This example demonstrates a custom Lambda Trigger that repeatedly captures still frames using a Raspberry Pi camera. It then demonstrates a Lambda responding to the trigger by delivering the images to a Kafka based inference pipeline.
Initial Provisioning
graph LR; cli --> control_plane; console --> control_plane; control_plane <-- tunnel --> agent; subgraph user_edge[User Edge] cli[CLI]; console[Console]; end subgraph cloud[Cloud <small>api.on-prem.net</small>] control_plane[Control Plane]; end subgraph device_edge[Device Edge] agent[Agent]; end
Subsequent Autonomous Edge Operation
graph TB; agent --> trigger[Lambda Trigger]; trigger --> lambda1 -- image.jpeg --> kafka; trigger --> lambda2; trigger --> lambda3; camera -- libcamera --> trigger; subgraph device_edge[Device Edge] agent; camera[Camera] kafka[(Redpanda Edge)] end subgraph agent[Agent] trigger; lambda1[Lambda 1]; lambda2[Lambda 2]; lambda3[Lambda 3]; end
Define the lambda trigger
$ onprem generate xid
cmalphe56a11da41vqsg
# continuous_video_frame_capture_raspberry_pi.yaml
id: cmalphe56a11da41vqsg
kind: LambdaTriggerType
name: continuous_video_frame_capture_raspberry_pi
description: >
  Capture continuous still frames from a Raspberry Pi camera.
runsAtControlPlane: false
runsAtDevices: true
scriptContentType: text/x-lua
script: >
  local M = {}
  function M.init(context)
  end
  function M.run(context)
    local filename = os.tmpname()
    while true do
      os.execute('rpicam-jpeg --nopreview -o ' .. filename)
      local file = io.open(filename, 'rb')
      local event = {
        data = file:read('*a'),
      }
      file:close()
      os.remove(filename)
      coroutine.yield(event)
    end
  end
  return M
Upload it to the control plane
$ onprem apply ./continuous_video_frame_capture_raspberry_pi.yaml
It will now show up in the cloud console, and as one of the trigger choices when editing a Lambda.
Configure a Lambda to respond
$ onprem generate xid
cmalqo656a11eos3sqb0
# deliver_images_to_kafka.yaml
id: cmalqo656a11eos3sqb0
kind: Lambda
name: deliver_images_to_kafka
description: >
  Deliver still frame images to a Kafka based inference pipeline.
runAt:
  deviceId: ci2fabp32ckvhk1g9qe0
  triggerTypeId: cmalphe56a11da41vqsg
scriptContentType: text/x-lua
script: >
  local kafka = require('kafka')
  local socket = require('socket')
  local settings = {
    ['bootstrap.servers'] = 'my-broker-0:9092,my-broker-1:9092,my-broker-2:9092',
    ['group.id'] = 'onprem.lambda.kafka',
  }
  local producer = kafka.producer(settings)
  local M = {}
  function M.handler(event)
    local key = socket.gettime()
    local value = event.data
    producer:produce(key, value)
    producer:poll(1000)
  end
  return M
$ onprem apply deliver_images_to_kafka.yaml
It will now show up in the cloud console.
Manually Trigger for testing
$ onprem run lambda cmalqo656a11eos3sqb0 --event-data-from-file ./my.jpeg