Manage workers
Boundary Community Edition requires organizations to configure their own self-managed workers. Workers can provide access to private networks while still communicating with an upstream Boundary control plane.
Note
Workers should be kept up to date with the Boundary control plane's version; otherwise, new features will not work as expected.
Boundary is an identity-aware proxy that sits between users and the infrastructure they want to connect to. The proxy has two components:
- A control plane that manages state around users under management, targets, and access policies.
- Worker nodes, assigned by the control plane once a user authenticates into Boundary and selects a target.
Deploying workers allows Boundary users to securely connect to private endpoints (such as SSH services on hosts, databases, or HashiCorp Vault) without exposing a private network.
This tutorial demonstrates the basics of how to register and manage workers using Boundary Community Edition.
Prerequisites
This tutorial assumes you have:
- Boundary Community Edition running in dev mode
- Completed the previous Community Edition Administration tutorials and created a postgres target in the Manage Targets tutorial
This tutorial deploys a worker locally, which is then registered with the controller deployed using Boundary's dev mode.
The worker machine must be able to install and run the Boundary binary.
To begin, ensure Boundary is running locally in dev mode:
$ boundary dev

==> Boundary server configuration:

  [Controller] AEAD Key Bytes: pcPFykfubnEycoY+xLqn071qBQR5OB7u
  [Recovery] AEAD Key Bytes: LtvZXRu1lOL3fMuctHn7kEohQvz/1eH9
  [Worker-Auth] AEAD Key Bytes: j1QNfPHJhBmZJsGmxZ9BN+kHn+C81mJE
  [Recovery] AEAD Type: aes-gcm
  [Root] AEAD Type: aes-gcm
  [Worker-Auth-Storage] AEAD Type: aes-gcm
  [Worker-Auth] AEAD Type: aes-gcm
  Cgo: disabled
  Controller Public Cluster Addr: 127.0.0.1:9201
  Dev Database Container: priceless_euler
  Dev Database Url: postgres://postgres:password@localhost:55000/boundary?sslmode=disable
  Generated Admin Login Name: admin
  Generated Admin Password: password
  Generated Host Catalog Id: hcst_1234567890
  Generated Host Id: hst_1234567890
  Generated Host Set Id: hsst_1234567890
  Generated Oidc Auth Method Id: amoidc_1234567890
  Generated Org Scope Id: o_1234567890
  Generated Password Auth Method Id: ampw_1234567890
  Generated Project Scope Id: p_1234567890
  Generated Target Id: ttcp_1234567890
  Generated Unprivileged Login Name: user
  Generated Unprivileged Password: password
  Listener 1: tcp (addr: "127.0.0.1:9200", cors_allowed_headers: "[]", cors_allowed_origins: "[*]", cors_enabled: "true", max_request_duration: "1m30s", purpose: "api")
  Listener 2: tcp (addr: "127.0.0.1:9201", max_request_duration: "1m30s", purpose: "cluster")
  Listener 3: tcp (addr: "127.0.0.1:9203", max_request_duration: "1m30s", purpose: "ops")
  Listener 4: tcp (addr: "127.0.0.1:9202", max_request_duration: "1m30s", purpose: "proxy")
  Log Level: info
  Mlock: supported: false, enabled: false
  Version: Boundary v0.11.2
  Version Sha: 02e410af7a2606ae242b8637d8a02754f0a5f43e
  Worker Auth Current Key Id: chastise-scone-lair-cussed-thrive-husband-haggler-trio
  Worker Auth Storage Path: /var/folders/8g/4dnhwwzx2d771tkklxwrd0380000gp/T/nodeenrollment2003067152
  Worker Public Proxy Addr: 127.0.0.1:9202

==> Boundary server started! Log data will stream in below:
.........
If you restarted dev mode, go back to the Manage Targets tutorial to create a postgres container and target.
Verify the Boundary installation
Verify that Boundary 0.9.0 or above is installed locally.
$ boundary version

Version information:
  Git Revision:   02e410af7a2606ae242b8637d8a02754f0a5f43e
  Version Number: 0.11.2
Configure the worker
To configure a worker, the following details are required:
- Boundary Controller URL (Boundary address)
- Auth Method ID (from the Admin Console)
- Admin login name and password
Because Boundary is running in dev mode, these values map to the following defaults (an optional shell export is shown after the list):
- Boundary Controller URL: http://127.0.0.1:9200
- Auth Method ID: ampw_1234567890
- Admin login name and password: admin and password, respectively
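The Boundary CLI reads the controller address from the BOUNDARY_ADDR environment variable. The dev-mode CLI generally targets the local controller by default, so this step is optional, but exporting the address makes it explicit for the commands that follow:

$ export BOUNDARY_ADDR=http://127.0.0.1:9200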
Authorization Methods
There are two workflows that can be used to register a worker in Boundary Community Edition:
- Controller-Led authorization workflow
- Worker-Led authorization workflow
This tutorial follows the controller-led workflow; the worker-led workflow is sketched briefly below for reference.
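In the worker-led workflow, the order is reversed: the worker is started first and prints a Worker Auth Registration Request value (you can see this value in the worker startup output later in this tutorial), which the operator then submits to the controller. A minimal sketch, assuming the subcommand and flag used by recent Boundary releases (check boundary workers create worker-led -help for your version):

$ boundary workers create worker-led -worker-generated-auth-token=<Worker Auth Registration Request value>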
In the controller-led flow, the operator fetches an activation token from the controller. The token is then embedded in the worker's config file, and authorization is performed when the worker is started.

First, authenticate to the controller. Enter the password (password) when prompted.
$ boundary authenticate password -auth-method-id ampw_1234567890 -login-name admin
Please enter the password (it will be hidden):

Authentication information:
  Account ID:      acctpw_1234567890
  Auth Method ID:  ampw_1234567890
  Expiration Time: Thu, 19 Jan 2023 15:37:46 MST
  User ID:         u_1234567890

The token was successfully stored in the chosen keyring and is not displayed here.
Next, generate an activation token for the new worker.
$ boundary workers create controller-led

Worker information:
  Active Connection Count:               0
  Controller-Generated Activation Token: neslat_2KrT6eg8F8PE5znPhjesuWAtW9S2KdhqPox3w6Z4n9kXvWLfd37Sj1VMQMNB7tqtXCDwdbX9F4UMDHvW5CnLDbb61DjXh
  Created Time:                          Thu, 12 Jan 2023 15:38:22 MST
  ID:                                    w_rKKkVB2d8z
  Type:                                  pki
  Updated Time:                          Thu, 12 Jan 2023 15:38:22 MST
  Version:                               1

  Scope:
    ID:   global
    Name: global
    Type: global

  Authorized Actions:
    no-op
    read
    update
    delete
    add-worker-tags
    set-worker-tags
    remove-worker-tags
Copy the Controller-Generated Activation Token value from the output.
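If you prefer to capture the token in a shell variable rather than copying it by hand, the create command's JSON output can be parsed with jq. Use this as an alternative to the command above rather than in addition to it, since every create call registers another worker; the JSON field name shown is an assumption and may differ between Boundary versions:

$ export ACTIVATION_TOKEN=$(boundary workers create controller-led -format=json | jq -r '.item.controller_generated_activation_token')
$ echo $ACTIVATION_TOKEN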
Write the worker config
Create a new folder to store your Boundary config file. This tutorial creates the boundary/ directory in the user's home directory ~/ (shown as /home/myusername later on) to store the worker config. If you do not have permission to create this directory, create the folder elsewhere.
$ mkdir ~/boundary/ && cd ~/boundary/
Next, create a new file named worker.hcl in the ~/boundary/ directory.
$ touch ~/boundary/worker.hcl
Open the file with a text editor, such as Vi.
Paste the following configuration into the worker config file:
~/boundary/worker.hcl
disable_mlock = true

listener "tcp" {
  address = "127.0.0.1:9204"
  purpose = "proxy"
}

worker {
  auth_storage_path = "/home/myusername/boundary/worker1"
  initial_upstreams = ["127.0.0.1"]
  controller_generated_activation_token = "<Controller-Generated Activation Token Value>"

  tags {
    type = ["worker", "local"]
  }
}
Update the <Controller-Generated Activation Token Value> placeholder with the token value copied from the boundary workers create controller-led command output.

Update the auth_storage_path to match the full path to the ~/boundary/worker1 directory, such as /home/myusername/boundary/worker1.
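Optionally, pre-create the credential storage directory and confirm both edits with a quick grep. This is a hypothetical sanity check that assumes the paths used in this tutorial:

$ mkdir -p ~/boundary/worker1
$ grep -E 'auth_storage_path|controller_generated_activation_token' ~/boundary/worker.hcl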
Notice the listener "tcp" address is set to "127.0.0.1:9204". Because Boundary is running in dev mode, a pre-configured worker is already listening on port 9202. To avoid conflicts, the new worker listens on 9204. In a non-dev deployment, the worker would usually listen on port 9202.
Also notice the worker initial_upstreams is set to 127.0.0.1. In a non-dev deployment, this address would resolve to an upstream controller.
Save this file.
Parameters that can be specified for workers include:
- auth_storage_path is a local path where a worker will store its credentials. Storage should not be shared between workers.
- controller_generated_activation_token is one-time-use; it is safe to keep it here even after the worker has successfully authorized and authenticated, as it will be unusable at that point.
- initial_upstreams indicates the address or addresses a worker will use when initially connecting to Boundary. Do not use any HCP worker values for initial_upstreams.
- public_addr is an attribute that can be specified within the worker {} stanza. This example omits the worker's public address because the Boundary client and worker are deployed on the same local machine, but it would be set in a non-dev deployment.
To see all valid config options, refer to the worker configuration docs.
Start the worker
With the worker config defined, start the worker server. Provide the full path to the worker config file.
$ boundary server -config="/home/myusername/boundary/worker.hcl"

==> Boundary server configuration:

  Cgo: disabled
  Listener 1: tcp (addr: "127.0.0.1:9204", max_request_duration: "1m30s", purpose: "proxy")
  Log Level: info
  Mlock: supported: false, enabled: false
  Version: Boundary v0.11.2
  Version Sha: 02e410af7a2606ae242b8637d8a02754f0a5f43e
  Worker Auth Current Key Id: unable-sappy-manager-object-shakiness-overnight-pastime-lazily
  Worker Auth Registration Request: GzusqckarbczHoLGQ4UA25uSRP2BhspoFcDqirahPonSvtyH3wD44KE9UUcRUgoVqNESjcCwtJ2rMZFun5LpRjmFWF5ykK4rvYTvzT8GppGeifvbdSH8qi3CstwAiJVynnLBVtRb2r8Ekwx6ksZ8mWC9u94m5sm3ayzhBLwEafSEnbN9FsjP5StCFLzPMDqny8iXUuvYJUS7MAeXJaEiv2g8pwYfJ4cZG7Hu7kqc2d8nKiNCsKbLohMRRFT887frRWDXDaUUETHbRG2RzexqGWhqC2q4UTodhSnzpdnX79
  Worker Auth Storage Path: /home/myusername/boundary/worker1
  Worker Public Proxy Addr: 127.0.0.1:9204

==> Boundary server started! Log data will stream in below:
Verify the worker registration
Verify the worker has successfully authenticated to the upstream controller by listing the available workers.
There will be an initial worker created by boundary dev available at 127.0.0.1:9202. The newly created worker will have an address of 127.0.0.1:9204.
$ boundary workers list

Worker information:
  ID:                 w_WEbOvv0Wvl
    Type:             pki
    Version:          1
    Address:          127.0.0.1:9202
    ReleaseVersion:   Boundary v0.11.2
    Last Status Time: Fri, 27 Jan 2023 20:21:26 UTC
    Authorized Actions:
      no-op
      read
      update
      delete
      add-worker-tags
      set-worker-tags
      remove-worker-tags

  ID:                 w_O0pSsDWt0U
    Type:             pki
    Version:          1
    Address:          127.0.0.1:9204
    ReleaseVersion:   Boundary v0.11.2
    Last Status Time: Fri, 27 Jan 2023 20:21:26 UTC
    Authorized Actions:
      no-op
      read
      update
      delete
      add-worker-tags
      set-worker-tags
      remove-worker-tags
Worker management
Workers can be managed and updated using the CLI or Admin Console UI.
List the available workers:
$ boundary workers list

Worker information:
  ID:                 w_WEbOvv0Wvl
    Type:             pki
    Version:          1
    Address:          127.0.0.1:9202
    ReleaseVersion:   Boundary v0.11.2
    Last Status Time: Fri, 27 Jan 2023 20:21:26 UTC
    Authorized Actions:
      no-op
      read
      update
      delete
      add-worker-tags
      set-worker-tags
      remove-worker-tags

  ID:                 w_O0pSsDWt0U
    Type:             pki
    Version:          1
    Address:          127.0.0.1:9204
    ReleaseVersion:   Boundary v0.11.2
    Last Status Time: Fri, 27 Jan 2023 20:21:26 UTC
    Authorized Actions:
      no-op
      read
      update
      delete
      add-worker-tags
      set-worker-tags
      remove-worker-tags
Copy the new worker ID with an Address of 127.0.0.1:9204 (such as w_O0pSsDWt0U).
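Optionally, capture the new worker's ID in an environment variable using the CLI's JSON output and jq. The field names below are assumptions based on the JSON output format; the remaining commands in this tutorial use the literal ID for clarity:

$ export WORKER_ID=$(boundary workers list -format=json | jq -r '.items[] | select(.address == "127.0.0.1:9204") | .id')
$ echo $WORKER_ID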
Read the worker details:
$ boundary workers read -id w_O0pSsDWt0U

Worker information:
  Active Connection Count: 0
  Address:                 127.0.0.1:9202
  Created Time:            Thu, 12 Jan 2023 15:27:09 MST
  ID:                      w_O0pSsDWt0U
  Last Status Time:        2023-01-12 23:58:53.895606 +0000 UTC
  Release Version:         Boundary v0.11.2
  Type:                    pki
  Updated Time:            Thu, 12 Jan 2023 16:58:53 MST
  Version:                 1

  Scope:
    ID:   global
    Name: global
    Type: global

  Tags:
    Configuration:
      type: ["worker" "local"]
    Canonical:
      type: ["worker" "local"]

  Authorized Actions:
    no-op
    read
    update
    delete
    add-worker-tags
    set-worker-tags
    remove-worker-tags
To update a worker, issue an update request using the worker ID. The request should include the fields to update.
Update the worker name and description:
$ boundary workers update -id=w_O0pSsDWt0U -name="worker1" -description="my first worker"

Worker information:
  Active Connection Count: 0
  Address:                 127.0.0.1:9202
  Created Time:            Thu, 12 Jan 2023 15:27:09 MST
  Description:             my first worker
  ID:                      w_O0pSsDWt0U
  Last Status Time:        2023-01-12 23:59:21.099383 +0000 UTC
  Name:                    worker1
  Release Version:         Boundary v0.11.2
  Type:                    pki
  Updated Time:            Thu, 12 Jan 2023 16:59:22 MST
  Version:                 2

  Scope:
    ID:   global
    Name: global
    Type: global

  Tags:
    Configuration:
      type: ["worker" "local"]
    Canonical:
      type: ["worker" "local"]

  Authorized Actions:
    no-op
    read
    update
    delete
    add-worker-tags
    set-worker-tags
    remove-worker-tags
Updating a worker will return the updated resource details.
Lastly, a worker can be deleted by issuing a delete request using boundary workers delete and passing the worker ID. To verify deletion, check that the worker no longer exists with boundary workers list.
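For reference, the delete and verification commands would look like the following:

$ boundary workers delete -id w_O0pSsDWt0U
$ boundary workers list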
Note
Do not delete the new worker. Proceed to the next section to test the new worker using an existing target.
Worker-aware targets
From the Manage Targets tutorial you should already have a configured target.
List the available targets:
$ boundary targets list -recursive

Target information:
  ID:                 ttcp_pF6i4wtOgy
    Scope ID:         p_WJxjlrrkvP
    Version:          1
    Type:             tcp
    Name:             postgres-target
    Description:      updated postgres target
    Authorized Actions:
      no-op
      read
      update
      delete
      add-host-sets
      set-host-sets
      remove-host-sets
      add-host-sources
      set-host-sources
      remove-host-sources
      add-credential-sources
      set-credential-sources
      remove-credential-sources
      authorize-session
Export the target ID as an environment variable:
$ export TARGET_ID=<postgres-target-ID>
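If you would rather look up the target ID programmatically, you can select it by name from the CLI's JSON output. The field names shown are assumptions based on the JSON output format:

$ export TARGET_ID=$(boundary targets list -recursive -format=json | jq -r '.items[] | select(.name == "postgres-target") | .id')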
Boundary supports worker tags, key-value pairs that targets can use to determine where connections should be routed.
A simple tag was included in the worker.hcl file from before:
worker {
  tags {
    type = ["worker", "local"]
  }
}
This config creates the resulting tags on the worker:
Tags:
  Worker Configuration:
    type: ["worker" "local"]
  Canonical:
    type: ["worker" "local"]
The Tags or Name of the worker (worker1) can be used to create a worker filter for the target.
Update the postgres target to add a worker tag filter that searches for workers that have the worker tag. Boundary will consider any worker with this tag assigned to it an acceptable proxy for this target.
$ boundary targets update tcp -id $TARGET_ID -egress-worker-filter='"worker" in "/tags/type"'

Target information:
  Created Time:             Mon, 23 Jan 2023 18:29:48 MST
  Description:              updated postgres target
  Egress Worker Filter:     "worker" in "/tags/type"
  ID:                       ttcp_xRRjzpH0qV
  Name:                     postgres
  Session Connection Limit: -1
  Session Max Seconds:      28800
  Type:                     tcp
  Updated Time:             Mon, 23 Jan 2023 19:58:15 MST
  Version:                  5

  Scope:
    ID:              p_OVOOKRiV5J
    Name:            QA_Tests
    Parent Scope ID: o_8EhpHB3qEN
    Type:            project

  Authorized Actions:
    no-op
    read
    update
    delete
    add-host-sources
    set-host-sources
    remove-host-sources
    add-credential-sources
    set-credential-sources
    remove-credential-sources
    authorize-session

  Host Sources:
    Host Catalog ID: hcst_5g9PpiZjXZ
    ID:              hsst_vsoLdMEQSf

  Attributes:
    Default Port: 16001
Note
The type: "local" tag could have also been used, or a filter that searches for the name of the worker directly ("/name" == "worker1").
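For example, the equivalent name-based filter could be applied like this (shown for reference; the tutorial continues with the tag-based filter already applied above):

$ boundary targets update tcp -id $TARGET_ID -egress-worker-filter='"/name" == "worker1"'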
With the filter assigned, any connection to this target is forced to proxy through a worker that matches the filter.
Finally, open a session to the postgres target using boundary connect postgres. When prompted, enter the password secret to connect.
$ boundary connect postgres -target-id $TARGET_ID -username postgres
Password for user postgres:
psql (13.2)
Type "help" for help.

postgres=#
You can verify the session is running through the new worker by checking the worker's active sessions using the CLI or the Admin Console.
$ boundary workers read -id w_lzmuKKecGN

Worker information:
  Active Connection Count: 1
  Address:                 127.0.0.1:9202
  Created Time:            Tue, 24 Jan 2023 17:43:16 MST
  ID:                      w_lzmuKKecGN
  Last Status Time:        2023-01-25 00:52:07.523008 +0000 UTC
  Release Version:         Boundary v0.11.2
  Type:                    pki
  Updated Time:            Tue, 24 Jan 2023 17:52:07 MST
  Version:                 1

  Scope:
    ID:   global
    Name: global
    Type: global

  Tags:
    Configuration:
      type: ["worker" "local"]
    Canonical:
      type: ["worker" "local"]

  Authorized Actions:
    no-op
    read
    update
    delete
    add-worker-tags
    set-worker-tags
    remove-worker-tags
Sessions can be managed using the same methods discussed in the Manage Sessions tutorial.
When finished, the session can be terminated manually using \q, or canceled via another authenticated Boundary command. Sessions can also be managed using the Admin Console UI or Boundary Desktop app.
Note
To cancel this session using the CLI, you will need to open a new terminal window and authenticate to Boundary again using boundary authenticate.
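For example, reuse the dev-mode auth method from earlier in this tutorial and enter password when prompted:

$ boundary authenticate password -auth-method-id ampw_1234567890 -login-name admin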
$ boundary sessions list -recursive

Session information:
  ID:                 s_dcJqC5PgxQ
    Scope ID:         p_VunaJTWd3d
    Status:           active
    Created Time:     Tue, 21 Jun 2022 13:04:37 MDT
    Expiration Time:  Tue, 21 Jun 2022 21:04:37 MDT
    Updated Time:     Tue, 21 Jun 2022 13:04:37 MDT
    User ID:          u_qSOx2RdVhG
    Target ID:        ttcp_eaMvjZpzx7
    Authorized Actions:
      no-op
      read
      read:self
      cancel
      cancel:self
Cancel the existing session.
$ boundary sessions cancel -id=s_dcJqC5PgxQ

Session information:
  Auth Token ID:   at_UXLZbQFJxN
  Created Time:    Tue, 21 Jun 2022 13:04:37 MDT
  Endpoint:        tcp://50.16.114.201:22
  Expiration Time: Tue, 21 Jun 2022 21:04:37 MDT
  Host ID:         hst_JTzdAlOrgA
  Host Set ID:     hsst_xvITBZHyZY
  ID:              s_dcJqC5PgxQ
  Status:          canceling
  Target ID:       ttcp_eaMvjZpzx7
  Type:            tcp
  Updated Time:    Tue, 21 Jun 2022 13:12:14 MDT
  User ID:         u_qSOx2RdVhG
  Version:         3

  Scope:
    ID:              p_VunaJTWd3d
    Name:            quick-start-project
    Parent Scope ID: o_JYLvWHgCGv
    Type:            project

  Authorized Actions:
    no-op
    read
    read:self
    cancel
    cancel:self

  States:
    Start Time: Tue, 21 Jun 2022 13:12:14 MDT
    Status:     canceling
    End Time:   Tue, 21 Jun 2022 13:12:14 MDT

    Start Time: Tue, 21 Jun 2022 13:04:37 MDT
    Status:     active
    End Time:   Tue, 21 Jun 2022 13:04:37 MDT

    Start Time: Tue, 21 Jun 2022 13:04:37 MDT
    Status:     pending
Cleanup and teardown
- Locate the terminal session used to start the boundary dev command, and execute ctrl+c to stop Boundary.
- Destroy the postgres container created for the tutorial.
$ docker rm -f postgres
Check your work by executing docker ps and ensure there are no more postgres containers remaining from the tutorial. If unexpected containers still exist, execute docker rm -f <CONTAINER_ID> against each to remove them.
Summary
The Community Edition Administration tutorial collection demonstrated the common management workflows for a self-managed Boundary deployment.
This tutorial demonstrated worker registration with Boundary Community Edition and discussed worker management.
To continue learning about Boundary, check out the Self-managed access management workflows.