SpEL Unit Tests
SpEL unit tests are currently experimental. The backend API and JSON request format described in this document are the reference contract for the test runner, but the preferred day-to-day authoring workflow is not to write large JSON requests by hand.
For script development, use YAML test suites, fixture fragments, and SpEL Studio in Obsidian. The Obsidian YAML Test Studio plugin converts the YAML files into the JSON or multipart API request, sends the current local SpEL draft when needed, and stores the test result next to the suite. This YAML-based workflow is currently implemented in the Obsidian plugin and is expected to become part of TSN/tSM tooling in the future.
SpEL Unit Tests let you verify one SpEL script in isolation before the script is saved or deployed.
They are designed for test-driven script development:
- Define the expected behavior of the script.
- Define input data and mocked service responses.
- Run the tests against the current script body.
- Adjust the script until all tests pass.
- Save or deploy the script only after the test suite is green.
This is useful when a script depends on Spring beans, public clients, configuration services, or other platform functions, but you want a fast and side-effect-free development loop.
Recommended authoring workflow
The API accepts JSON because JSON is the stable wire format between clients and the backend test runner. For human authors, the recommended workflow is:
- Create or open a *.spel-test.yaml suite in Obsidian YAML Test Studio.
- Keep larger reusable inputs in YAML fixture fragments.
- Open the script in SpEL Studio from the test suite.
- Edit the local script draft in SpEL Studio.
- Run the YAML test suite from Obsidian.
- Save the script to tSM only after the relevant tests are green.
In this workflow, the YAML suite normally identifies the script by code and does not need to embed the full script body. If the script is open in SpEL Studio and has local unsaved changes, the Obsidian plugin sends that local draft to the multipart endpoint as the script under test. This is useful for normal human editing and also for AI-assisted development against a local backend where the target script may not exist yet.
The examples below intentionally use JSON and multipart requests. Treat them as the backend reference format and as a guide for tool implementers. For regular test writing, prefer the YAML suite format handled by Obsidian YAML Test Studio.
YAML suites and fixture fragments
For everyday authoring, keep the test suite in a *.spel-test.yaml file and put larger reusable inputs into *.fixtures.yaml fragments. The YAML format is a convenience layer over the JSON API. The current Obsidian plugin and local tooling resolve the YAML files into the JSON or multipart request described later in this document.
Typical local layout:
Notification.Send3/
  Notification.Send3.spel
  Notification.Send3.spel-test.yaml
  fixtures/
    base.fixtures.yaml
    templates.fixtures.yaml
The main suite references fixture files with fixtureFiles:
script:
  code: Notification.Send3
  source: ./Notification.Send3.spel
  fallbackToPersistedScript: false
fixtureFiles:
  - ./fixtures/base.fixtures.yaml
  - ./fixtures/templates.fixtures.yaml
options:
  unknownBeanMode: FORBID
  unknownFunctionMode: FORBID
  unknownScriptBindingMode: FORBID
caseDefaults:
  context:
    dryRun: true
tests:
  - name: direct email is planned
    useFixturePacks: [orderDryRun]
    validations:
      - "#result.status == 'DRY_RUN'"
      - "#result.recipients[0].email == 'customer@example.test'"
A fixture file can expose reusable data through normal YAML anchors and named fixture packs:
fixtures:
  orders:
    install: &installOrder
      id: order-install-1
      key: ORD-1001
      subject: Fiber installation
  contacts:
    directEmail: &directEmail
      email: customer@example.test
fixturePacks:
  orderDryRun:
    context:
      order: *installOrder
      to: *directEmail
      dryRun: true
Fixture packs are applied in this order:
- suite-level useFixturePacks in caseDefaults
- test-level useFixturePacks
- the test body itself
Maps are deep-merged. Lists such as mocks, validations, and assertions are appended. Later scalar values override earlier values.
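For example, a test can pull in the orderDryRun pack from above and still override a single scalar. This is a minimal sketch; it assumes the script under test reports a status other than DRY_RUN when dryRun is false:

tests:
  - name: real send overrides the pack default
    useFixturePacks: [orderDryRun]
    context:
      dryRun: false   # later scalar value wins over the pack's dryRun: true
    validations:
      - "#result.status != 'DRY_RUN'"   # appended after any pack-level validations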
Keep fixture fragments simple. Use YAML anchors inside one file and named fixturePacks across files. Do not put process mocks, timers, jobs, user tasks, or end-to-end scenario behavior into SpEL unit fixtures; those belong to higher-level test tooling.
When to use unit tests
Use a SpEL unit test when you want to test a single script as a small, deterministic unit.
Typical examples:
- a mapping script that transforms connector responses
- a validation script used from a form, process, or event listener
- a script that decides which process path should be used
- a script that calls a configuration service and applies business rules
- a script generated or modified by AI before it is saved to tSM configuration
For multi-script flows, provide local script bindings and script overrides in the same unit-test request. Process-level and end-to-end checks should stay outside this lightweight runner and use the existing application-level test tooling.
Basic idea
A unit test request contains:
| Part | Purpose |
|---|---|
| Script under test | The SpEL script being developed. It can be passed inline, so it does not need to exist in tSM yet. |
| Script overrides | Optional working versions of scripts called by the script under test. |
| Script bindings | Optional local definitions for @script.some.binding(...) calls. |
| Test cases | Named examples that run the script with different input data. |
| Mocks and spies | Definitions of beans, methods, or functions used by the script. |
| Assertions | Expected result, expected exception, and expected service calls. |
| Options | Safety settings, such as blocking all unknown bean calls by default. |
The test runner evaluates the script in an isolated SpEL context. Bean calls are handled by the test runtime, not by the regular production resolver, unless the test explicitly allows a real call.
Unknown bean and function calls are blocked by default. A unit test should not create orders, update entities, send notifications, write audit records, or call external systems unless that behavior is explicitly allowed in the test.
Running a unit test
Unit tests can be executed from the SpEL Console or through the API.
The API endpoint accepts one script under test and one or more test cases:
POST /api/expression/script-test
Content-Type: application/json
{
  "scriptUnderTest": {
    "code": "Customer.CheckAllowed",
    "scriptType": "SPEL",
    "content": "#allowed = @configService.allowedStatuses(); #allowed.contains(#customer.status)"
  },
  "options": {
    "unknownBeanMode": "FORBID",
    "unknownFunctionMode": "FORBID",
    "unknownScriptBindingMode": "FORBID",
    "fallbackToPersistedScript": false
  },
  "tests": [
    {
      "name": "ACTIVE customer is allowed",
      "context": {
        "customer": {
          "status": "ACTIVE"
        }
      },
      "mocks": [
        {
          "type": "BEAN_METHOD",
          "bean": "configService",
          "method": "allowedStatuses",
          "args": [],
          "returns": ["ACTIVE", "VIP"]
        }
      ],
      "assertions": [
        {
          "type": "RESULT_EQUALS",
          "expected": true
        },
        {
          "type": "CALLED",
          "bean": "configService",
          "method": "allowedStatuses",
          "times": 1
        }
      ]
    }
  ]
}
The script body in scriptUnderTest.content is the source that is tested. The script does not need to be saved yet.
Running with script files
When the script body is large, send it as a multipart file instead of embedding it into JSON. This is useful for IDEs and AI agents that already have the current .spel file on disk.
POST /api/expression/script-test/multipart
Content-Type: multipart/form-data
Multipart parts:
| Part | Required | Description |
|---|---|---|
| request | Yes | JSON request with metadata, test cases, mocks, script bindings, and assertions. |
| script | No | Main script under test. Used when scriptUnderTest.content and scriptUnderTest.source are not present. |
| files | No | Additional script files referenced by scriptUnderTest.source or scriptOverrides[].source. |
Example:
curl -X POST "http://localhost:8080/tsm-ticket/api/expression/script-test/multipart" \
  -H "X-Microservice: tsm-ticketing" \
  -F 'request={
    "scriptUnderTest": {
      "code": "Order.Decision"
    },
    "scriptOverrides": [
      {
        "code": "Customer.Score",
        "source": "drafts/Customer.Score.spel"
      }
    ],
    "tests": [
      {
        "name": "VIP customer is accepted",
        "context": {
          "customer": {
            "segment": "VIP"
          }
        },
        "assertions": [
          "#result == true"
        ]
      }
    ]
  };type=application/json' \
  -F "script=@Order.Decision.spel;filename=Order.Decision.spel" \
  -F "files=@drafts/Customer.Score.spel;filename=drafts/Customer.Score.spel"
The backend reads only uploaded multipart files. It does not read arbitrary filesystem paths from JSON.
Inline scripts and persisted dependencies
The script under test should be supplied as inline content, source, or the dedicated multipart script part. This keeps bean mocks, script mocks, script overrides, and recorded calls inside the unit-test runtime.
Unit tests support these development modes:
| Mode | Configuration | Use-case |
|---|---|---|
| Inline only | content is provided and fallbackToPersistedScript is false. | Test-driven development of a new script. The test can run before the script exists in tSM. |
| Inline override | content is provided and fallbackToPersistedScript is true. | Test a modified version of an existing script while allowing calls to other saved scripts. |
For new script development, prefer Inline only. It keeps the test independent from configuration state and makes AI-assisted development repeatable.
fallbackToPersistedScript applies to scripts called by the script under test. It is not intended to make the script under test itself code-only, because that would execute the saved script outside the local draft body being tested.
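A minimal sketch of the inline-override mode, using only the request fields shown above: the local draft body is what runs, while scripts it calls may fall back to their saved versions.

{
  "scriptUnderTest": {
    "code": "Order.Decision",
    "content": "..."
  },
  "options": {
    "fallbackToPersistedScript": true
  }
}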
Testing scripts that call other scripts
A script under test can call other scripts. Unit tests can keep those calls local and deterministic by providing scriptOverrides.
Direct script-service calls are supported:
@script.evalByCode('Customer.Score', {'segment': #customer.segment})
@scriptService.evalByCodeNoTx('Customer.Score', {'segment': #customer.segment})
The target script can be supplied inline:
{
  "scriptUnderTest": {
    "code": "Order.Decision",
    "content": "@script.evalByCode('Customer.Score', {'segment': #customer.segment}) >= 80"
  },
  "scriptOverrides": [
    {
      "code": "Customer.Score",
      "content": "#segment == 'VIP' ? 95 : 30"
    }
  ],
  "tests": [
    {
      "name": "VIP customer is accepted",
      "context": {
        "customer": {
          "segment": "VIP"
        }
      },
      "assertions": [
        "#result == true",
        {
          "type": "SCRIPT_CALLED",
          "code": "Customer.Score",
          "times": 1
        }
      ]
    }
  ]
}
Or it can be supplied by source in the multipart endpoint:
{
  "scriptOverrides": [
    {
      "code": "Customer.Score",
      "source": "drafts/Customer.Score.spel"
    }
  ]
}
If a called script is not listed in scriptOverrides, the call fails by default. To call an already saved script, set fallbackToPersistedScript to true.
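If you want the call to stay local but do not need a full override body, a SCRIPT mock (described in the following sections) can stub it instead. A sketch, assuming args may be omitted to match any call, as with bean mocks:

{
  "mocks": [
    {
      "type": "SCRIPT",
      "code": "Customer.Score",
      "returns": 95
    }
  ]
}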
Testing script bindings
tSM also supports script-to-script binding syntax:
@script.customer.score(#customer.segment)
In production this does not call a script named customer.score directly. tSM first looks up the binding customer.score in the Scripts.Bindings.ScriptInvocations register, reads the configured script code, builds the argument context, and then executes that script.
For unit tests, define the binding locally when the binding is part of the behavior being developed:
{
  "scriptUnderTest": {
    "code": "Order.Decision",
    "content": "@script.customer.score(#customer.segment) >= 80"
  },
  "scriptBindings": [
    {
      "fullName": "customer.score",
      "scriptCode": "Customer.Score",
      "positionalArgs": true,
      "paramNames": ["segment"],
      "addCallerContext": true
    }
  ],
  "scriptOverrides": [
    {
      "code": "Customer.Score",
      "content": "#segment == 'VIP' ? 95 : 30"
    }
  ],
  "tests": [
    {
      "name": "VIP customer is accepted through binding",
      "context": {
        "customer": {
          "segment": "VIP"
        }
      },
      "assertions": [
        "#result == true",
        {
          "type": "SCRIPT_CALLED",
          "code": "Customer.Score",
          "bindingFullName": "customer.score",
          "times": 1
        }
      ]
    }
  ]
}
Binding fields:
| Field | Description |
|---|---|
| fullName | Binding name used after @script, for example customer.score. |
| scriptCode | Script code executed by the binding. |
| positionalArgs | When true, positional method arguments are mapped to names from paramNames. |
| paramNames | Names used for positional arguments. |
| addCallerContext | When true, variables from the caller script context are also passed to the called script. |
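Taken together, the binding fields above mean that the first call below behaves roughly like the second. This is a sketch of the mapping only; the production lookup described earlier adds the register resolution step, and addCallerContext additionally passes the caller's variables through.

@script.customer.score(#customer.segment)
@script.evalByCode('Customer.Score', {'segment': #customer.segment})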
To use production binding configuration, set:
{
  "options": {
    "unknownScriptBindingMode": "CALL_REAL"
  }
}
This resolves the binding from the production register. The target script can still be supplied by scriptOverrides, mocked as a SCRIPT mock, or executed from persisted configuration when fallbackToPersistedScript is true.
Example with production binding lookup and mocked target script:
{
  "scriptUnderTest": {
    "code": "Order.Decision",
    "content": "@script.customer.score(#customer.segment)"
  },
  "options": {
    "unknownScriptBindingMode": "CALL_REAL"
  },
  "tests": [
    {
      "name": "VIP score is mocked",
      "context": {
        "customer": {
          "segment": "VIP"
        }
      },
      "mocks": [
        {
          "type": "SCRIPT",
          "code": "Customer.Score",
          "args": [
            {
              "eq": "VIP"
            }
          ],
          "returns": 88
        }
      ],
      "assertions": [
        {
          "type": "RESULT_EQUALS",
          "expected": 88
        },
        {
          "type": "SCRIPT_CALLED_WITH",
          "code": "Customer.Score",
          "bindingFullName": "customer.score",
          "args": [
            {
              "eq": "VIP"
            }
          ],
          "times": 1
        }
      ]
    }
  ]
}
Prefer local scriptBindings while developing a new binding or script. Use production binding lookup for regression tests of existing configuration.
Test case structure
Each test case describes one expected behavior.
{
  "name": "BLOCKED customer is rejected",
  "context": {
    "customer": {
      "status": "BLOCKED"
    }
  },
  "mocks": [
    {
      "type": "BEAN_METHOD",
      "bean": "configService",
      "method": "allowedStatuses",
      "args": [],
      "returns": ["ACTIVE", "VIP"]
    }
  ],
  "assertions": [
    {
      "type": "RESULT_EQUALS",
      "expected": false
    }
  ]
}
The context object becomes the script input. Each key is available as a SpEL variable, for example #customer.
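For example, with the context above, the expression below evaluates to true inside the script:

#customer.status == 'BLOCKED'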
Mocking beans
Mocked beans replace Spring beans referenced with the @beanName syntax.
@configService.allowedStatuses()
A mock definition can return a fixed value:
{
  "type": "BEAN_METHOD",
  "bean": "configService",
  "method": "allowedStatuses",
  "returns": ["ACTIVE", "VIP"]
}
}
It can also throw an exception:
{
  "type": "BEAN_METHOD",
  "bean": "customerService",
  "method": "getCustomer",
  "args": ["CUST-404"],
  "throws": {
    "type": "ApiNotFoundException",
    "message": "Customer not found"
  }
}
Or return the first argument:
{
  "type": "BEAN_METHOD",
  "bean": "registerPublicService",
  "method": "createRegisterValue",
  "returnsArgument": 0
}
Returning an argument is useful for scripts that build records and pass them to a service. The test can verify the generated payload without writing anything to the database.
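A sketch of that pattern, assuming the tested script returns the value it receives back from createRegisterValue:

{
  "mocks": [
    {
      "type": "BEAN_METHOD",
      "bean": "registerPublicService",
      "method": "createRegisterValue",
      "returnsArgument": 0
    }
  ],
  "assertions": [
    "#result.key == #customer.key"
  ]
}

The echoed payload lands in #result, so the generated record can be checked field by field without writing anything to the database.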
Argument matching
The test runner supports exact values and matchers.
| Matcher | Meaning |
|---|---|
{"any": true} | Any value is accepted. |
{"eq": "ACTIVE"} | The value must be equal to ACTIVE. |
{"contains": "VIP"} | The string or collection must contain the value. |
{"jsonPath": "$.status", "eq": "ACTIVE"} | A nested field inside an object must match. |
Example:
{
  "type": "BEAN_METHOD",
  "bean": "customerService",
  "method": "find",
  "args": [
    {
      "jsonPath": "$.status__in",
      "contains": "ACTIVE"
    },
    {
      "any": true
    }
  ],
  "returns": [
    {
      "key": "CUST-1",
      "status": "ACTIVE"
    }
  ]
}
Mock, spy, and real calls
Every bean used by the tested script has a mode.
| Mode | Behavior |
|---|---|
| MOCK | The bean is replaced by test definitions. Calls without a matching mock fail. |
| RELAXED_MOCK | The bean records calls and returns safe default values for unstubbed methods. Use only for low-risk dependencies. |
| SPY | The real bean is called, but calls are recorded and can be verified. Individual methods can still be stubbed. |
| CALL_REAL | The real bean or method is called. Use only for explicitly safe read-only services. |
| FORBID | Any call fails the test. This is the default for unknown beans. |
Example:
{
  "beans": {
    "logger": {
      "mode": "SPY"
    },
    "configPublicService": {
      "mode": "CALL_REAL",
      "allowedMethods": ["getByCode", "getValue"]
    },
    "notificationService": {
      "mode": "FORBID"
    }
  }
}
Use CALL_REAL only when the method is known to be read-only and safe. For configuration lookup services this can be practical. For create, update, send, delete, or external integration methods, prefer MOCK or FORBID.
Assertions
Assertions verify the script result and the interactions that happened while the script was running.
Result assertions
{
  "type": "RESULT_EQUALS",
  "expected": {
    "resultCode": 2
  }
}
Common result assertions:
| Assertion | Description |
|---|---|
| RESULT_EQUALS | The whole result must equal the expected JSON value. |
| RESULT_CONTAINS | The result must contain the expected partial JSON value. |
| RESULT_JSON_PATH_EQUALS | A JSON path inside the result must equal the expected value. |
| RESULT_SPEL | A SpEL assertion expression must evaluate to true. The variable #result contains the script result. |
| RESULT_IS_NULL | The script must return null. |
| RESULT_NOT_NULL | The script must return a non-null value. |
| THROWS | The script must fail with a matching exception type or message. |
Example:
{
  "type": "RESULT_JSON_PATH_EQUALS",
  "path": "$.data.customerKey",
  "expected": "CUST-1"
}
For most test-driven script development, prefer RESULT_SPEL. It lets you write validations in the same expression language as the script:
{
  "type": "RESULT_SPEL",
  "expression": "#result.data.customerKey == #customer.key"
}
The assertion expression runs after the tested script finishes. Its context contains:
| Variable | Description |
|---|---|
| #result | The value returned by the tested script. |
| Other test context variables | The same variables that were passed in the test case context, for example #customer. |
When the assertion object does not contain type, it is treated as RESULT_SPEL:
{
  "expression": "#result.resultCode == 2"
}
For compact test templates, the assertion can also be written directly as a string:
{
  "assertions": [
    "#result.resultCode == 2",
    "#result.data.customerKey == #customer.key",
    "{'ACTIVE', 'VIP'}.contains(#result.data.status)"
  ]
}
This short form is useful when test cases are generated from YAML or another template format where validations are naturally written as a list.
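In the YAML suite format described earlier, the same checks appear as a plain validations list. A sketch; per the conversion rules above, each string is expanded into a RESULT_SPEL assertion:

tests:
  - name: result is mapped correctly
    validations:
      - "#result.resultCode == 2"
      - "#result.data.customerKey == #customer.key"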
Interaction assertions
{
  "type": "CALLED",
  "bean": "logger",
  "method": "info",
  "times": 1
}
Common interaction assertions:
| Assertion | Description |
|---|---|
| CALLED | A bean method or function must be called. |
| NOT_CALLED | A bean method or function must not be called. |
| CALLED_WITH | A call must happen with matching arguments. |
| NO_UNEXPECTED_CALLS | All calls must be covered by mocks, spies, or assertions. |
Example with arguments:
{
  "type": "CALLED_WITH",
  "bean": "logger",
  "method": "info",
  "args": [
    {
      "contains": "statusCode: 202"
    }
  ],
  "times": 1
}
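NO_UNEXPECTED_CALLS names no bean or method; it asserts that everything the script called was covered by a mock, spy, or assertion. A minimal sketch, assuming no additional fields are required:

{
  "type": "NO_UNEXPECTED_CALLS"
}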
Example: connector response mapping
The following script maps a connector response to a stable processing result.
#if(#result.statusCode == 202)
  .then(
    @logger.info("statusCode: 202, expect asynchronous response"),
    {
      "resultCode": 2
    }
  )
  .elseif(#result.statusCode == 200)
  .then(
    #isNullOrEmpty(#result.body)
      ? {
          "resultCode": 0,
          "data": null
        }
      : {
          "resultCode": 0,
          "data": #result.body
        }
  )
  .else(
    {
      "resultCode": -1,
      "data": #result.body
    }
  )
A unit test can describe the expected behavior before the script is saved:
{
  "scriptUnderTest": {
    "code": "Connector.Response.ToProcessingResult",
    "scriptType": "SPEL",
    "content": "#if(#result.statusCode == 202).then(@logger.info(\"statusCode: 202, expect asynchronous response\"), {\"resultCode\": 2}).else({\"resultCode\": -1})"
  },
  "options": {
    "unknownBeanMode": "FORBID",
    "unknownFunctionMode": "FORBID"
  },
  "tests": [
    {
      "name": "202 response is asynchronous",
      "context": {
        "result": {
          "statusCode": 202,
          "body": null
        }
      },
      "beans": {
        "logger": {
          "mode": "SPY"
        }
      },
      "assertions": [
        {
          "type": "RESULT_EQUALS",
          "expected": {
            "resultCode": 2
          }
        },
        {
          "type": "CALLED_WITH",
          "bean": "logger",
          "method": "info",
          "args": [
            {
              "eq": "statusCode: 202, expect asynchronous response"
            }
          ],
          "times": 1
        }
      ]
    },
    {
      "name": "500 response returns error result",
      "context": {
        "result": {
          "statusCode": 500,
          "body": "Internal error"
        }
      },
      "beans": {
        "logger": {
          "mode": "SPY"
        }
      },
      "assertions": [
        {
          "type": "RESULT_EQUALS",
          "expected": {
            "resultCode": -1,
            "data": "Internal error"
          }
        },
        {
          "type": "NOT_CALLED",
          "bean": "logger",
          "method": "info"
        }
      ]
    }
  ]
}
Test report
The response contains a summary and one result per test case.
{
  "success": false,
  "summary": {
    "total": 2,
    "passed": 1,
    "failed": 1,
    "errors": 0
  },
  "tests": [
    {
      "name": "202 response is asynchronous",
      "status": "PASSED",
      "result": {
        "resultCode": 2
      },
      "durationMs": 8
    },
    {
      "name": "500 response returns error result",
      "status": "FAILED",
      "message": "Expected {resultCode=-1, data=Internal error}, got {resultCode=-1}",
      "durationMs": 5
    }
  ]
}
Each test case can also include captured calls:
{
  "calls": [
    {
      "bean": "logger",
      "method": "info",
      "args": ["statusCode: 202, expect asynchronous response"],
      "result": null
    }
  ]
}
Captured calls are useful when a test fails. They show what the script actually did and help adjust either the test or the script.
Recommended workflow
- Create the unit test first.
- Define all input examples in tests.
- Mock every bean that can cause side effects.
- Use SPY for logging or harmless read-only helpers.
- Use CALL_REAL only for explicitly safe read-only methods.
- Generate or write the script body.
- Run the unit test.
- Fix the script until all tests pass.
- Save the script.
- Keep the unit test as a regression check.
This workflow works well with AI-assisted scripting: the test request describes the expected behavior, and the AI can repeatedly update the script body and run the same tests until the result is correct.
Best practices
- Keep one unit test file focused on one script.
- Prefer several small test cases over one large test case.
- Name test cases after business behavior, not implementation details.
- Block unknown beans with unknownBeanMode: "FORBID".
- Avoid RELAXED_MOCK for business-critical dependencies.
- Use CALL_REAL only for read-only configuration or lookup methods.
- Assert important interactions, especially calls that would otherwise create side effects in production.
- Keep expected results stable and explicit.
- Use RESULT_JSON_PATH_EQUALS for large outputs where only selected fields matter.
See also
- SpEL Scripts - developing and debugging scripts in the SpEL Console.
- Script-to-Script Bindings - calling scripts from other scripts.
- SpEL Examples - common scripting patterns.
- SpEL and Transactions - how script execution interacts with transactions.