The infrastructure is built on top of Kubernetes® using Tekton® TaskRuns, ConfigMaps, Secrets, and Persistent Volumes. See the Task architecture for more information.
The following are the default components of the application architecture stack. Both the Agent and the Tasks can be replaced with custom implementations.
| Component | Type | Technology | Internal to Internal | External Ingress | Internal Dependency | External Dependency |
|---|---|---|---|---|---|---|
| Flow | Front End | React + Node.js | Flow MS | true | | |
| Workflow | Microservice | Spring Boot (Java®) | Engine MS | true | MongoDB® | |
| Engine | Microservice | Spring Boot (Java) | Workflow MS | false | MongoDB® | |
| Agent | Microservice | Spring Boot (Java) | Engine MS | false | Kubernetes®, Tekton® TaskRun | |
| Task | TaskRun | Node.js CLI / Container | | false | Kubernetes®, Tekton® TaskRun | MongoDB® shell image |
The stack has the following dependencies:
| Dependency | Implementation | Notes |
|---|---|---|
| Kubernetes | | Kubernetes 1.20+ |
| Tekton | TaskRuns | Part of Pipelines 0.69.0+ |
| Application Load Balancer | NGINX 0.23.0+ | |
| Database | MongoDB | |
| Message Streaming | NATS Jetstream 2.3.4 | Optional |
| Certificate Manager | Jetstack Cert Manager (0.7.0) | Optional if you have your own TLS certificate as a Kubernetes Secret |
| Logs | | If logging is enabled, the default is Kubernetes |
| Monitoring | New Relic Infrastructure, New Relic Kubernetes, New Relic APM Agent | If enabled |
| Tracing | Jaeger® | If enabled |
Tekton TaskRuns are a Custom Resource Definition (CRD) that wraps Kubernetes Pods and allows Task-specific metadata to be defined.
A certain amount of disk, memory, and CPU is required to process TaskRuns. We recommend running these on dedicated nodes and setting them to delete automatically, which ensures you have enough resources to continually execute new Tasks.
The Workflow Tasks run as jobs on any node, unless dedicated nodes are implemented by applying the following taint and label to those nodes:

- Taint: `dedicated=bmrg-worker:NoSchedule`
- Label: `node-role.kubernetes.io/bmrg-worker=true`
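As a sketch, assuming a worker node named `worker-node-1` (a placeholder for your own node name), the taint and label above can be applied with `kubectl`:

```shell
# Placeholder node name; substitute your own
NODE=worker-node-1

# Taint the node so only pods that tolerate the taint (i.e. the Task workers) are scheduled on it
kubectl taint nodes "$NODE" dedicated=bmrg-worker:NoSchedule

# Label the node so Task pods can be targeted to it via a node selector
kubectl label nodes "$NODE" node-role.kubernetes.io/bmrg-worker=true
```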
As with all containers, ephemeral storage is used, which we limit to 8GB by default. Based on the amount of primary disk available, this limit determines how many Tasks can run in parallel, so it is important to size it appropriately.
Flow Tasks have a setting to delete on completion. If this is not enabled, completed workers remain and consume the available ephemeral storage.
See Kubernetes ephemeral storage reference information.
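As an illustration, an 8GB ephemeral storage limit corresponds to a container resource limit like the following. This is a minimal sketch, not the actual pod spec generated by Flow; the pod and image names are hypothetical:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-task                               # hypothetical name
spec:
  containers:
    - name: task
      image: docker.io/boomerangio/example:latest  # hypothetical image
      resources:
        limits:
          ephemeral-storage: 8Gi                   # caps the writable layer and emptyDir usage
```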
The task orchestration system uses different types of persistent volumes, which are enabled per Workflow in the Workflow Editor > Configuration.
You can configure the storage size, storage class, and access modes for the following types in the Settings under Administer.
We recommend using Rancher's Local Path Provisioner on the nodes executing Tasks, as it allows for dynamic provisioning of local disk which, if backed by SSD, provides low-latency, high-speed writes.
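Once installed, the Local Path Provisioner registers a `local-path` storage class that can be referenced when configuring storage. A minimal sketch of a PersistentVolumeClaim using it (the claim name and size are hypothetical):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: flow-workspace       # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce          # local-path volumes are node-local
  storageClassName: local-path
  resources:
    requests:
      storage: 5Gi           # hypothetical size
```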
Workspaces represent the storage in use by Boomerang Flow (and Tekton) Workflows. There are currently two workspaces available to be enabled in a Workflow. See the Workspaces section of the Workflow Editor How-to Guide for more information.
All Tasks run with a data drive (`/data`) specific to that Task, based on a Kubernetes `emptyDir` volume. Use this for inner Task workings.
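A minimal sketch of how an `emptyDir` data drive is mounted at `/data` (container and image names are hypothetical, not the actual spec generated by Flow):

```yaml
spec:
  containers:
    - name: task
      image: docker.io/boomerangio/worker:latest  # hypothetical image
      volumeMounts:
        - name: data
          mountPath: /data   # per-Task scratch space
  volumes:
    - name: data
      emptyDir: {}           # removed when the pod is deleted
```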
If dedicated nodes are enabled, a soft pod anti-affinity rule is also enabled to ensure that attempts are made to balance workers across nodes as evenly as possible.
If you are implementing a Kubernetes cluster that uses ClusterImagePolicy or ImagePolicy, you may need to add `docker.io/boomerangio/*:*` to your policies to be able to retrieve the images.
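The exact policy format depends on your admission controller. As one sketch, an implementation such as IBM's Container Image Security Enforcement accepts repository entries like the following; check the CRD version and schema of the controller you actually run:

```yaml
apiVersion: securityenforcement.admission.cloud.ibm.com/v1beta1
kind: ClusterImagePolicy
metadata:
  name: boomerangio-images    # hypothetical name
spec:
  repositories:
    - name: "docker.io/boomerangio/*:*"   # allow pulling the Flow images
```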
Supported for use with IPv4 networks only.