# `parameterized` Block

Placement: `job -> parameterized`
A parameterized job is used to encapsulate a set of work that can be carried out
on various inputs much like a function definition. When the parameterized
block is added to a job, the job acts as a function to the cluster as a whole.
The parameterized
block allows job operators to configure a job that carries
out a particular action, define its resource requirements and configure how
inputs and configuration are retrieved by the tasks within the job.
To invoke a parameterized job, use `nomad job dispatch`
or the equivalent HTTP APIs. When dispatching against a parameterized job, an opaque payload and
metadata may be injected into the job. These inputs to the parameterized job act
like arguments to a function. The job consumes them to change its behavior,
without exposing the implementation details to the caller.
To that end, tasks within the job can add a `dispatch_payload` block that
defines where on the filesystem this payload gets written to. An example payload
would be a task's JSON configuration.
Further, certain metadata may be marked as required when dispatching a job so it can be used to inject configuration directly into a task's arguments using interpolation. An example of this would be to require a run ID key that could be used to look up the work the job is supposed to do from a management service or database.
Each time a job is dispatched, a unique job ID is generated. This allows a caller to track the status of the job, much like a future or promise in some programming languages. The dispatched job cannot be updated after dispatching; to update the job definition you need to update the parent job.
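Concretely, a dispatch call pairs an optional payload with a metadata map. A minimal sketch of building the JSON body sent to Nomad's HTTP dispatch endpoint (`POST /v1/job/<job_id>/dispatch`); the helper name and sample values are illustrative, and the payload is base64-encoded as the API expects:

```python
import base64
import json


def build_dispatch_request(payload, meta):
    """Build the JSON body for Nomad's HTTP dispatch endpoint
    (POST /v1/job/<job_id>/dispatch). The payload, if any, must be
    base64-encoded; meta is a plain string-to-string map."""
    body = {"Meta": meta}
    if payload is not None:
        body["Payload"] = base64.b64encode(payload).decode("ascii")
    return body


# Example: dispatch with a small JSON payload and one metadata key.
body = build_dispatch_request(b'{"video": "s3://bucket/in.mp4"}',
                              {"dispatcher_email": "ops@example.com"})
print(json.dumps(body))
```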
```hcl
job "docs" {
  parameterized {
    payload       = "required"
    meta_required = ["dispatcher_email"]
    meta_optional = ["pager_email"]
  }
}
```
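The job above could then be dispatched with, for example (the payload file name and email addresses are placeholders):

```shell
$ nomad job dispatch \
    -meta dispatcher_email=ops@example.com \
    -meta pager_email=oncall@example.com \
    docs payload.txt
```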
See the multiregion documentation for additional considerations when dispatching parameterized jobs.
## `parameterized` Requirements
- The job's scheduler type must be `batch` or `sysbatch`.
## `parameterized` Parameters
- `meta_optional` `(array<string>: nil)` - Specifies the set of metadata keys that
  may be provided when dispatching against the job.

- `meta_required` `(array<string>: nil)` - Specifies the set of metadata keys that
  must be provided when dispatching against the job.

- `payload` `(string: "optional")` - Specifies the requirement of providing a
  payload when dispatching against the parameterized job. The maximum size of a
  `payload` is 16 KiB. The options for this field are:

  - `"optional"` - A payload is optional when dispatching against the job.
  - `"required"` - A payload must be provided when dispatching against the job.
  - `"forbidden"` - A payload is forbidden when dispatching against the job.
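The checks these parameters imply at dispatch time can be sketched as follows; the `validate_dispatch` helper is hypothetical, but the 16 KiB limit and the three payload modes come from the parameters above:

```python
MAX_PAYLOAD = 16 * 1024  # payloads are capped at 16 KiB


def validate_dispatch(payload, meta, *, payload_mode="optional",
                      meta_required=(), meta_optional=()):
    """Hypothetical sketch of the constraints a parameterized block
    places on a dispatch: payload mode, payload size, and metadata keys."""
    if payload_mode == "required" and payload is None:
        raise ValueError("payload is required")
    if payload_mode == "forbidden" and payload is not None:
        raise ValueError("payload is forbidden")
    if payload is not None and len(payload) > MAX_PAYLOAD:
        raise ValueError("payload exceeds 16 KiB")
    missing = set(meta_required) - meta.keys()
    if missing:
        raise ValueError(f"missing required metadata: {sorted(missing)}")
    allowed = set(meta_required) | set(meta_optional)
    unknown = meta.keys() - allowed
    if unknown:
        raise ValueError(f"unexpected metadata keys: {sorted(unknown)}")
```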
## `parameterized` Examples
The following non-runnable examples show parameterized jobs:
### Required Inputs
This example shows a parameterized job that requires both a payload and metadata:
```hcl
job "video-encode" {
  # ...

  type = "batch"

  parameterized {
    payload       = "required"
    meta_required = ["dispatcher_email"]
  }

  group "encode" {
    # ...

    task "ffmpeg" {
      driver = "exec"

      config {
        command = "ffmpeg-wrapper"

        # When dispatched, the payload is written to a file that is then read
        # by the created task upon startup
        args = ["-config=${NOMAD_TASK_DIR}/config.json"]
      }

      dispatch_payload {
        file = "config.json"
      }
    }
  }
}
```
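One way to dispatch this job is to pass the payload on stdin by giving `-` as the input source; the JSON content here is illustrative:

```shell
$ echo '{"input": "s3://my-bucket/video.mp4"}' | \
    nomad job dispatch -meta dispatcher_email=ops@example.com video-encode -
```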
### Metadata Interpolation

This example shows a parameterized job that uses metadata interpolation:
```hcl
job "email-blast" {
  # ...

  type = "batch"

  parameterized {
    payload       = "forbidden"
    meta_required = ["CAMPAIGN_ID"]
  }

  group "emails" {
    # ...

    task "emailer" {
      driver = "exec"

      config {
        command = "emailer"

        # The campaign ID is interpolated and injected into the task's
        # arguments
        args = ["-campaign=${NOMAD_META_CAMPAIGN_ID}"]
      }
    }
  }
}
```
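Since the payload is forbidden, a dispatch of this job only needs the required metadata; the campaign value below is a placeholder:

```shell
$ nomad job dispatch -meta CAMPAIGN_ID=summer-sale email-blast
```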
## Interactions with `periodic`

When a job is both `parameterized` and `periodic`, an internal hierarchy comes
into play: the periodic configuration will not dispatch the job without
parameters, so the `parameterized` option takes precedence. Once the job is
dispatched for the first time with parameters, the periodic configuration takes
effect, and new jobs are dispatched on that schedule from the dispatched job,
not from the originally submitted one.
For example, a job with the following configuration will not trigger any new jobs until it is dispatched at least once; after that, the dispatched child will periodically trigger more children with the given parameters:
```hcl
periodic {
  crons = [
    "*/40 * * * * * *"
  ]
}

parameterized {
  payload       = "required"
  meta_required = ["dispatcher_email"]
  meta_optional = ["pager_email"]
}
```
The first job, of type `batch/periodic/parameterized`, corresponds to the
parameterized and periodic job, but it won't result in any new allocations
until a dispatch with the necessary parameters is submitted. The second job,
of type `batch/periodic`, triggers the subsequent `batch` jobs. Any new
dispatch triggers a new flow of periodic jobs with the corresponding
parameters.
```shell
$ nomad job status
ID                                                     Type                          Submit Date
sync                                                   batch/periodic/parameterized  2024-11-07T10:43:30+01:00  // Original submitted job
sync/dispatch-1730972650-247c6e97                      batch/periodic                2024-11-07T10:44:10+01:00  // First dispatched job with parameters A
sync/dispatch-1730972650-247c6e97/periodic-1730972680  batch                         2024-11-07T10:44:40+01:00  // Cron job with parameters A
sync/dispatch-1730972650-247c6e97/periodic-1730972860  batch                         2024-11-07T10:47:40+01:00  // Cron job with parameters A
sync/dispatch-1730972760-f79a96e1                      batch/periodic                2024-11-07T10:46:00+01:00  // Second dispatched job with parameters B
sync/dispatch-1730972760-f79a96e1/periodic-1730972800  batch                         2024-11-07T10:46:40+01:00  // Cron job with parameters B
sync/dispatch-1730972760-f79a96e1/periodic-1730972860  batch                         2024-11-07T10:47:40+01:00  // Cron job with parameters B
```
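The parent/child relationships in this listing are encoded in the job IDs themselves. A small sketch, assuming the default `parent/dispatch-.../periodic-...` naming shown above (the helper is hypothetical):

```python
def dispatch_parent(job_id):
    """Return the parent of a dispatched or periodic child job ID,
    assuming Nomad's default 'parent/dispatch-.../periodic-...' naming."""
    if "/" not in job_id:
        return None  # the originally submitted job has no parent
    parent, _, _ = job_id.rpartition("/")
    return parent


# A periodic child chains back to the dispatched job it came from:
print(dispatch_parent("sync/dispatch-1730972650-247c6e97/periodic-1730972680"))
```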
If a periodic job needs to be forced for any reason, use the corresponding dispatched job:

```shell
$ nomad job periodic force sync/dispatch-1730972650-247c6e97
```