# PlexdHook CRD Reference

The PlexdHook custom resource definition enables declarative hook execution as Kubernetes Jobs. Platform operators create PlexdHook resources to trigger operational tasks (maintenance, upgrades, diagnostics) on specific nodes. The PlexdHook controller watches these resources and creates Jobs with node pinning, owner references, and configurable security contexts.

## CRD metadata

| Property | Value |
| --- | --- |
| API group | `plexd.plexsphere.com` |
| Kind | `PlexdHook` |
| Plural | `plexdhooks` |
| Short name | `ph` |
| Scope | Namespaced |
| Version | `v1alpha1` |
| Manifest | `deploy/kubernetes/crds/plexdhook-crd.yaml` |

Printer columns: `Hook Name`, `Privileged`, `Phase`, `Age`.

## CRD schema

### spec

| Field | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| `hookName` | string | Yes | | Name of the hook to execute. Must be non-empty. |
| `jobTemplate` | object | No | | Container image and command for the Job. |
| `parameters` | array of objects | No | `[]` | Name/value pairs passed as environment variables. |
| `privileged` | boolean | No | `false` | Run the Job container with elevated privileges. |

### jobTemplate

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| `image` | string | No | Container image. Must be non-empty if set. |
| `command` | array of string | No | Container entrypoint override. |
| `args` | array of string | No | Arguments to the entrypoint. |

When `jobTemplate` is omitted, the controller uses `busybox:latest` as the default image.
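A minimal resource that relies on these defaults might look like the following sketch (the names are illustrative, not taken from a real deployment):

```yaml
apiVersion: plexd.plexsphere.com/v1alpha1
kind: PlexdHook
metadata:
  name: ping-node01        # illustrative name
  namespace: plexd-system
spec:
  hookName: ping
  # jobTemplate omitted: the controller falls back to busybox:latest
```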

### parameters

Each entry is an object with:

| Field | Type | Required | Description |
| --- | --- | --- | --- |
| `name` | string | Yes | Parameter name. Non-empty. |
| `value` | string | Yes | Parameter value. |

Parameters are converted to environment variables in the Job container with the prefix `PLEXD_PARAM_`. The parameter name is uppercased, and non-alphanumeric characters (except underscore) are replaced with `_`. For example, a parameter named `disk-path` becomes `PLEXD_PARAM_DISK_PATH`.
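The name mangling described above can be sketched in Go; `paramEnvName` is a hypothetical helper written for this page, not the controller's actual function:

```go
package main

import (
	"fmt"
	"strings"
)

// paramEnvName implements the documented mangling: uppercase the name,
// replace anything that is not A-Z, 0-9, or "_" with "_", then prefix it.
func paramEnvName(name string) string {
	var b strings.Builder
	for _, r := range strings.ToUpper(name) {
		if (r >= 'A' && r <= 'Z') || (r >= '0' && r <= '9') || r == '_' {
			b.WriteRune(r)
		} else {
			b.WriteRune('_')
		}
	}
	return "PLEXD_PARAM_" + b.String()
}

func main() {
	fmt.Println(paramEnvName("disk-path")) // PLEXD_PARAM_DISK_PATH
}
```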

### status (subresource)

| Field | Type | Description |
| --- | --- | --- |
| `jobName` | string | Name of the created Kubernetes Job. |
| `phase` | string | Execution phase: `Pending`, `Running`, `Succeeded`, or `Failed`. |
| `message` | string | Human-readable status message. Set on failure. |
| `startedAt` | date-time | Timestamp when the controller began processing. |
| `completedAt` | date-time | Timestamp when the Job completed. |
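As an illustration, the status of a hook that ran to completion might look like this (all values are invented for the example):

```yaml
status:
  jobName: plexdhook-disk-check-node01
  phase: Succeeded
  startedAt: "2024-01-15T10:30:00Z"
  completedAt: "2024-01-15T10:30:07Z"
```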

## PlexdHookController

The PlexdHookController watches PlexdHook resources and creates Kubernetes Jobs for hook execution.

### Constructor

```go
func NewPlexdHookController(client KubeClient, cfg Config, namespace, nodeName string, logger *slog.Logger) *PlexdHookController
```

Config defaults are applied automatically.

### Lifecycle

```go
func (c *PlexdHookController) Start(ctx context.Context) error
func (c *PlexdHookController) Stop()
```

`Start` blocks, watching PlexdHook resources and processing events. It returns `ctx.Err()` when the context is cancelled, `nil` when the watch channel closes, or a wrapped error if the initial watch call fails.

`Stop` cancels the internal context, causing `Start` to return.

### Watch-create-status cycle

  1. The controller calls `WatchPlexdHooks` on the `KubeClient` for its namespace.
  2. On each `ADDED` event, it checks whether `status.jobName` is already set; if so, the event is skipped (the hook was already processed).
  3. It builds a `PlexdJob` via `buildJob` and calls `CreateJob`.
  4. If `CreateJob` returns `ErrAlreadyExists`, the controller logs an info message and still updates the status.
  5. If `CreateJob` fails with any other error, the controller logs at error level and sets `status.phase` to `Failed` with the error message.
  6. On success, it updates `status.jobName`, sets `status.phase` to `Pending`, and records `status.startedAt`.

### Structured logging

All log entries use `component=plexdhook-controller`. Key events:

| Event | Level | Fields |
| --- | --- | --- |
| Controller started | Info | `namespace`, `node` |
| Job created | Info | `hook`, `job`, `node` |
| Job already exists | Info | `hook`, `job` |
| Job creation failed | Error | `hook`, `error` |
| Status update failed | Warn | `hook`, `error` |

## Job creation details

### Node pinning

Each Job includes a `nodeSelector` with `kubernetes.io/hostname` set to the controller's node name. This ensures the hook runs on the same node as the plexd DaemonSet pod.

### Owner references

Jobs are created with an owner reference to the parent PlexdHook resource:

| Field | Value |
| --- | --- |
| `apiVersion` | `plexd.plexsphere.com/v1alpha1` |
| `kind` | `PlexdHook` |
| `name` | PlexdHook resource name |
| `uid` | PlexdHook resource UID |
| `controller` | `true` |
| `blockOwnerDeletion` | `true` |

Deleting a PlexdHook resource cascades to delete the associated Job via Kubernetes garbage collection.

### Security context

The security context depends on the `spec.privileged` flag.

Non-privileged (default):

| Setting | Value |
| --- | --- |
| `readOnlyRootFilesystem` | `true` |
| `dropCapabilities` | `[ALL]` |

Privileged:

| Setting | Value |
| --- | --- |
| `privileged` | `true` |

### Labels

Jobs are labeled for identification and querying:

| Label | Value |
| --- | --- |
| `app.kubernetes.io/managed-by` | `plexd` |
| `plexd.plexsphere.com/hook-name` | Value of `spec.hookName` |

### Environment variables

Every Job container receives:

| Variable | Value |
| --- | --- |
| `PLEXD_NODE_ID` | Controller node name |
| `PLEXD_HOOK_NAME` | Value of `spec.hookName` |
| `PLEXD_PARAM_{NAME}` | Parameter values (if any) |

### Other settings

| Setting | Value |
| --- | --- |
| `serviceAccountName` | `plexd` |
| `restartPolicy` | `Never` |
| Job name | `plexdhook-{resource-name}` |
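Putting these settings together, a Job generated for a hook named `disk-check` on node `node01` might look roughly like the following. This is an illustrative sketch, not captured controller output: the container name and namespace are invented, the mapping of the security settings onto pod fields is assumed, and `<resource-uid>` stands in for the PlexdHook's actual UID:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: plexdhook-disk-check-node01
  namespace: plexd-system
  labels:
    app.kubernetes.io/managed-by: plexd
    plexd.plexsphere.com/hook-name: disk-check
  ownerReferences:
    - apiVersion: plexd.plexsphere.com/v1alpha1
      kind: PlexdHook
      name: disk-check-node01
      uid: <resource-uid>
      controller: true
      blockOwnerDeletion: true
spec:
  template:
    spec:
      serviceAccountName: plexd
      restartPolicy: Never
      nodeSelector:
        kubernetes.io/hostname: node01
      containers:
        - name: hook            # container name is an assumption
          image: busybox:latest # default when jobTemplate is omitted
          env:
            - name: PLEXD_NODE_ID
              value: node01
            - name: PLEXD_HOOK_NAME
              value: disk-check
          securityContext:      # non-privileged defaults
            readOnlyRootFilesystem: true
            capabilities:
              drop: ["ALL"]
```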

## RBAC permissions

The plexd ClusterRole includes the following permissions for PlexdHook operations:

| API Group | Resource | Verbs |
| --- | --- | --- |
| `plexd.plexsphere.com` | `plexdhooks` | get, list, watch |
| `plexd.plexsphere.com` | `plexdhooks/status` | get, update, patch |
| `batch` | `jobs` | create, get, list, watch |

A consumer role, `plexd-hook-reader`, provides read-only access to PlexdHook resources:

| API Group | Resource | Verbs |
| --- | --- | --- |
| `plexd.plexsphere.com` | `plexdhooks` | get, list, watch |
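Expressed as a manifest, the reader role might look like the following sketch (whether it is cluster-scoped or a namespaced Role is an assumption, as is everything beyond the rules listed above):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole  # scope is an assumption; a namespaced Role would work similarly
metadata:
  name: plexd-hook-reader
rules:
  - apiGroups: ["plexd.plexsphere.com"]
    resources: ["plexdhooks"]
    verbs: ["get", "list", "watch"]
```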

## Example

```yaml
apiVersion: plexd.plexsphere.com/v1alpha1
kind: PlexdHook
metadata:
  name: disk-check-node01
  namespace: plexd-system
spec:
  hookName: disk-check
  jobTemplate:
    image: alpine:3.19
    command: ["/bin/sh", "-c"]
    args: ["df -h /dev/sda1"]
  parameters:
    - name: disk-path
      value: /dev/sda1
    - name: threshold
      value: "90"
  privileged: false
```

This creates a non-privileged Job on the target node that runs `df -h /dev/sda1` with the environment variables `PLEXD_PARAM_DISK_PATH=/dev/sda1` and `PLEXD_PARAM_THRESHOLD=90`.

## See also