Agentforce Dynamic Action turns Salesforce org metadata and a natural language goal into executable actions. It extends the prior Data-Aware Agent prototype by wiring a three-phase loop:
- Blueprint – Gather schema and intent, then ask an LLM (or local heuristic) to emit structured `ActionBlueprintJSON`.
- Synthesize – Convert the blueprint into Apex classes, guardrails, and unit tests by rendering code templates.
- Orchestrate – Register and execute the generated actions through a runtime dispatcher with FLS/sharing validation.
The repository is ready to run with a stubbed LLM client so you can experiment offline, but it is structured to drop in a production LLM provider without touching core logic.
You now have two cooperative agent capabilities living in one framework:
| Agent | Description | Core question |
|---|---|---|
| Data-Aware Agent | Understands Salesforce schema, metadata, and relationships. | What can I do in this org? |
| Dynamic Action Agent | Generates and executes Apex actions from a goal-aligned blueprint. | How do I do it right now? |
By exposing the Dynamic Action agent through an invocable surface (Flow/@InvocableMethod/REST), the two can chain together inside Einstein Copilot, Flow, or any Agentforce orchestration layer:
User prompt → Data-Aware Agent → recommends a blueprint → Dynamic Action agent (invocable) → generates & executes Apex → returns structured result
The new DynamicActionInvoke class is the bridge. It wraps recommendation, synthesis, and optional artifact offloading into a Flow-friendly response payload so other agents (LLMs or automation) can call it as a tool, inspect the plan/artifacts, and continue reasoning.
- Discover schema – Invoke `SchemaDiscoverInvoke.run` to retrieve a scoped snapshot of the org metadata (or limit fields per object for lean payloads).
- Generate & execute – Pass the same goal/object hints to `DynamicActionInvoke.run` so it can recommend a blueprint, synthesize Apex/tests, and optionally offload large artifacts.
- Orchestrate the result – Feed the structured response into Flow, Einstein Copilot, or another agent loop to deploy, request confirmation, or execute the generated action plan.
This flow mirrors the “two agents, one framework” vision: the Data-Aware agent answers what can I do? while the Dynamic Action agent answers how do I do it right now?
| Path | Purpose |
|---|---|
| `force-app/main/default/classes/` | Apex sources for schema discovery, blueprint generation, template rendering, orchestration, and tests. |
| `force-app/main/default/staticresources/` | Curated blueprint library (zip). |
| `blueprints/` | Raw JSON blueprints (dev-time). |
| `docs/` | Architecture notes, blueprint schema contract, guardrail catalogue, and integration guides. |
| `scripts/` | CLI helper files (bash, Apex, Node). |
| `config/` | Scratch org definition if you don't already have one. |
| `.github/workflows/` | CI workflows. |
| `sfdx-project.json` | Salesforce DX project descriptor. |
Key Apex entry points:
- `DynamicActionPipeline` – End-to-end driver that returns both the plan and generated code artifacts.
- `SchemaDiscoverInvoke` – Invocable wrapper that surfaces schema discovery to Flow, Copilot, or other agents.
- `BlueprintSynthesisService` – Handles prompt orchestration and response parsing.
- `CodeTemplateEngine` – Renders Apex/test classes with runtime guardrails baked in.
- `DynamicActionOrchestrator` – Executes generated actions with safety checks.
To enable GitHub Actions to create scratch orgs automatically:
1. Log in to your Dev Hub locally (if not already):

   ```bash
   sf org login web --set-default-dev-hub --alias DevHub
   ```

2. Export the auth URL:

   ```bash
   sf org display --target-org DevHub --verbose --json | jq -r '.result.sfdxAuthUrl' > sfdx_auth_url.txt
   ```

3. Add it to GitHub Secrets:
   - Go to your repo → Settings → Secrets and variables → Actions → New repository secret
   - Name: `SFDX_AUTH_URL`
   - Value: paste the contents of `sfdx_auth_url.txt`
This allows CI workflows to authenticate and create scratch orgs for testing.
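Inside the workflow, the secret can then be fed back to the CLI with `sf org login sfdx-url`. A hypothetical authentication step (the repository's actual workflow lives in .github/workflows/ and may differ) might look like:

```yaml
# Hypothetical CI step — adjust to match the repo's actual workflow file.
- name: Authenticate to Dev Hub
  run: |
    echo "${{ secrets.SFDX_AUTH_URL }}" > sfdx_auth_url.txt
    sf org login sfdx-url --sfdx-url-file sfdx_auth_url.txt --set-default-dev-hub --alias DevHub
    rm sfdx_auth_url.txt
```

Writing the URL to a temp file and deleting it immediately keeps the secret out of the job's working tree.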
```bash
./scripts/init.sh
```

This script spins up a scratch org, deploys source, assigns the runtime permset, generates the sample action, deploys the artifacts, and runs tests.
Requires Node.js 18+, the Salesforce sf CLI, and a configured Dev Hub (SFDX_AUTH_URL).
- Clone & authorize:

  ```bash
  git clone <repo>
  cd Agentforce-Dynamic-Action
  node scripts/build-blueprint-library.js
  sf org create scratch -f config/project-scratch-def.json -a dynamicAction -s
  sf project deploy start -o dynamicAction
  ```
- Assign runtime permissions:

  ```bash
  sf org assign permset -n DynamicAction_Permissions -o dynamicAction
  ```
- (Optional) Register an LLM client

  The repository includes an OpenAI client (`OpenAIClient.cls`) ready to use. First, configure the `LLM_Provider` Named Credential in Setup with your OpenAI API key as an `Authorization: Bearer YOUR_KEY` header. Then run `sf apex run -o dynamicAction -f scripts/register-llm.apex` to register the client. Until then, the stub heuristics will map common goals to example blueprints.

- Generate code artifacts:

  ```bash
  mkdir -p .tmp
  sf apex run -o dynamicAction -f scripts/generate.apex --json > .tmp/generate.json
  node scripts/deploy-artifacts.js .tmp/generate.json dynamicAction
  ```

- Run tests:

  ```bash
  sf apex run test -o dynamicAction -l RunLocalTests -r human
  ```
Use `SchemaIntentPipeline` to execute the full schema → recommendation → implementation flow in one call.

```apex
SchemaIntentPipeline.Options options = new SchemaIntentPipeline.Options();
options.schemaOptions.maxObjects = 5;
options.schemaOptions.maxFieldsPerObject = 10;

PlanModels.PipelineResult pipeline = SchemaIntentPipeline.run(
    'Recommend follow-up actions for high value opportunities',
    options
);

System.debug(pipeline.schema);
System.debug(pipeline.recommendations);
System.debug(pipeline.artifacts);
```

- `pipeline.schema` contains a trimmed org snapshot (objects, fields, relationships).
- `pipeline.recommendations` lists ranked action blueprints with rationale and scores.
- `pipeline.plan` mirrors the orchestrator plan used for execution.
- `pipeline.artifacts` delivers generated Apex classes, tests, and metadata.
Tie the result into `DynamicActionOrchestrator.run` once users confirm the checkpoint displayed in the plan.
Run the full loop with a single script. It will: recommend candidates, synthesize from the top result, deploy artifacts, and run tests.
```bash
./scripts/e2e.sh dynamicAction
```

Use the Step-2 recommendation API to get ranked blueprints, then feed the top result into Step-3 code generation.
Step 2: Get Recommendations
```apex
String narrative = 'For AAA insurance sales, when a deal is approved, set the opportunity to Closed Won.';
List<String> includeObjects = new List<String>{'Opportunity', 'Lead', 'Case'};
RecommendFunctionalities.Response r = RecommendFunctionalities.run(narrative, includeObjects, 3);
System.debug(JSON.serializePretty(r));
```

Sample narratives for quick testing (use `scripts/recommend.apex`):
- Case escalation: For P1 customer issues, escalate the case to Tier 2 and set priority to High.
- Lead qualification: When an SDR qualifies interest, move the lead to Qualified and set rating to Hot.
- Opportunity closed-lost: If negotiations fail, mark the opportunity Closed Lost and capture a short loss reason.
- Case follow-up task: After resolving the issue, create a follow-up task on the case for a satisfaction call.
If you already have an external schema snapshot (from a Data-Aware agent), use `SchemaIntentPipeline.run(goal, externalSchema, options)` to blend recommendations and synthesis in one call (see End-to-End Pipeline above).
Step 3: Generate from Top Recommendation
```apex
// Re-run recommendation for simplicity (or load from Step 2 results)
String narrative = 'For AAA insurance sales, when a deal is approved, set the opportunity to Closed Won.';
List<String> includeObjects = new List<String>{'Opportunity'};
RecommendFunctionalities.Response r = RecommendFunctionalities.run(narrative, includeObjects, 1);
PlanModels.ActionBlueprint bp = r.recommendations.isEmpty() ? null : r.recommendations[0].blueprint;

DynamicActionPipeline.Result result = DynamicActionPipeline.executeFromBlueprint(
    bp,
    null, // optional schema slice
    null  // optional constraints
);
System.debug(JSON.serialize(result));
```

| Mode | How to run | Notes |
|---|---|---|
| Goal text only | `SchemaIntentPipeline.run(goal, options)` with heuristics | Works offline using curated heuristics and guardrails. |
| Goal text + LLM | Register a client via `scripts/register-llm.apex`, then call `SchemaIntentPipeline.run(goal, options)` | Prompts include schema slice + goal; enable telemetry to capture prompts. |
| Curated blueprint JSON | `DynamicActionPipeline.executeWithBlueprint('oppty_closed_won', null, null)` or `BlueprintLibrary.getByName(...)` | Bypasses the LLM and uses the curated catalog in `/blueprints`. |
If providing external schema snapshots (instead of letting `SchemaSnapshot.buildSnapshot()` gather them), use this JSON structure:
```json
{
  "objects": {
    "Opportunity": {
      "apiName": "Opportunity",
      "fields": {
        "Id": {"apiName": "Id", "type": "Id", "nillable": false, "createable": false, "updateable": false},
        "StageName": {"apiName": "StageName", "type": "Picklist", "nillable": false, "createable": true, "updateable": true, "picklistValues": ["Prospecting", "Closed Won"]},
        "CloseDate": {"apiName": "CloseDate", "type": "Date", "nillable": true, "createable": true, "updateable": true}
      },
      "childRelationships": ["OpportunityLineItems.OpportunityId"]
    }
  }
}
```

Each object includes actionable fields (createable/updateable) with their metadata and picklist values where applicable.
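When assembling such a snapshot outside Apex (for example, in a Node script like those under scripts/), a quick validation pass helps catch shape errors before the payload reaches the pipeline. The sketch below is illustrative only — it is not part of the repository — and assumes just the documented keys:

```javascript
// Sketch: validate the external-schema JSON shape documented above.
function validateSnapshot(snapshot) {
  const errors = [];
  if (!snapshot || typeof snapshot.objects !== 'object' || snapshot.objects === null) {
    return ['missing top-level "objects" map'];
  }
  for (const [name, obj] of Object.entries(snapshot.objects)) {
    if (obj.apiName !== name) errors.push(`${name}: apiName mismatch`);
    if (!obj.fields || typeof obj.fields !== 'object') {
      errors.push(`${name}: missing "fields" map`);
      continue;
    }
    for (const [fieldName, field] of Object.entries(obj.fields)) {
      // The pipeline cares about actionable flags, so they must be booleans.
      for (const flag of ['nillable', 'createable', 'updateable']) {
        if (typeof field[flag] !== 'boolean') {
          errors.push(`${name}.${fieldName}: "${flag}" must be boolean`);
        }
      }
      if (field.type === 'Picklist' && !Array.isArray(field.picklistValues)) {
        errors.push(`${name}.${fieldName}: picklist needs picklistValues`);
      }
    }
  }
  return errors;
}

const snapshot = {
  objects: {
    Opportunity: {
      apiName: 'Opportunity',
      fields: {
        StageName: {
          apiName: 'StageName', type: 'Picklist', nillable: false,
          createable: true, updateable: true,
          picklistValues: ['Prospecting', 'Closed Won'],
        },
      },
      childRelationships: [],
    },
  },
};
console.log(validateSnapshot(snapshot)); // → []
```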
Recommendation ranking blends curated exemplars (from /blueprints, built into the BlueprintLibrary static resource) with LLM/heuristic suggestions. Tags in curated entries (e.g., object names, verbs like Update/Convert, guardrail hints) are used to score proximity to the narrative and schema.
Curated blueprint keys available out of the box:
- `account_create` – Create a new account with basic information
- `case_create` – Create a new case with subject, description, and priority
- `case_create_followup_task` – Create a follow-up task tied to a case
- `case_escalate` – Escalate a case with new priority and owner
- `case_escalate_tier2` – Escalate a case to Tier 2 with priority/status updates
- `campaign_create` – Create a new campaign with basic information
- `campaign_member_add` – Add a contact/lead to a campaign
- `contact_create` – Create a new contact with basic information
- `contract_create` – Create a new contract with account and dates
- `event_create` – Create a new event with scheduling details
- `lead_create` – Create a new lead with basic information
- `lead_mark_qualified` – Move a lead to the qualified state with rating
- `lead_qualify` – Qualify a lead and update status/rating
- `opportunity_close_lost` – Mark an opportunity as Closed Lost with reason
- `opportunity_closed_won` – Set an opportunity to Closed Won (legacy format)
- `opportunity_create` – Create a new opportunity with basic information
- `oppty_closed_won` – Set an opportunity to Closed Won with CloseDate
- `order_create` – Create a new order with account and effective date
- `pricebook_entry_create` – Create a pricebook entry with product and pricing
- `product_create` – Create a new product with basic information
- `quote_create` – Create a new quote with opportunity and expiration
- `quote_line_create` – Create a quote line item with product and quantity
- `quote_line_discount_set` – Set a discount on a quote line item
- `task_create` – Create a new task with basic information
Off by default. Enables offloading large JSON payloads (schema snapshots and artifacts) to reduce heap pressure. Falls back to inline if a store is unavailable.
- Per-run options: set `SchemaIntentPipeline.Options.offloadOptions` or use `DynamicActionPipeline.executeWithOptions` / `executeFromBlueprintWithOptions`.
Example:
```apex
OffloadModels.Options off = new OffloadModels.Options();
off.offloadArtifacts = true;  // only offload artifacts
off.sizeThresholdKB = 64;     // if > 64 KB
off.artifactsStore = 'ContentVersion';

// Combined pipeline
SchemaIntentPipeline.Options opts = new SchemaIntentPipeline.Options();
opts.offloadOptions = off;
PlanModels.PipelineResult pipe = SchemaIntentPipeline.run('...', opts);

// Generation-only
DynamicActionPipeline.Result r = DynamicActionPipeline.executeWithOptions('...', null, null, off);
```

See docs/memory-offloading.md for details and tradeoffs.
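The decision the offload layer makes can be sketched in a few lines. This is an illustrative model of the behavior described above (threshold-based offload with inline fallback), not the Apex implementation; `placePayload` and `storeAvailable` are hypothetical names:

```javascript
// Illustrative sketch of the offload decision, not the Apex implementation.
// sizeThresholdKB mirrors OffloadModels.Options; storeAvailable stands in for
// a runtime check against the configured store (e.g. ContentVersion).
function placePayload(payload, sizeThresholdKB, storeAvailable) {
  const sizeKB = Buffer.byteLength(JSON.stringify(payload), 'utf8') / 1024;
  if (sizeKB > sizeThresholdKB && storeAvailable) {
    // Large payload and a usable store: return a reference, not the body.
    return { mode: 'offloaded', store: 'ContentVersion' };
  }
  // Small payloads, or an unavailable store, stay inline.
  return { mode: 'inline', payload };
}

console.log(placePayload({ small: true }, 64, true).mode);                     // inline
console.log(placePayload({ big: 'x'.repeat(200 * 1024) }, 64, true).mode);     // offloaded
console.log(placePayload({ big: 'x'.repeat(200 * 1024) }, 64, false).mode);    // inline (fallback)
```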
Generated actions follow the `PlanModels.ActionBlueprint` schema defined in docs/blueprint-contract.md. Each blueprint lists:

- `inputs` (payload → SObject field bindings, types, required flags)
- `guardrails` (FLS, numeric ranges, enum constraints, sharing)
- `operation` + `targetSObject`
- `checkpoint` text for user confirmation
Blueprints may come from:
- The stub heuristics (`HeuristicBlueprintFactory`)
- A live LLM call via `LLMClientGateway`
- A stored library of curated JSON blueprints
`GuardrailEvaluator` centralizes validation of generated actions. Review docs/guardrails.md for the supported rule set (FLS, sharing, numeric checks, enums) and extension points for privacy or policy enforcement. The template engine automatically injects guardrail hooks into every generated Apex class.
- Template Tokens – Extend `CodeTemplateEngine` to add Flow or SOQL emitters and bundle them in `CodeGenService`.
- Telemetry – Capture prompt/response pairs through a custom `LLMClient` implementation and store them in an analytics object.
- Domain Libraries – Replace or augment the heuristics with curated blueprint libraries for your vertical (`/blueprints`).
- Deployment Automation – Add CI tasks that persist generated artifacts directly into the repo or packaging pipelines.
See docs/llm-integration.md, docs/code-synthesis.md, and docs/runtime.md for hands-on guides.
The repository includes a comprehensive evaluation harness for validating blueprint synthesis and artifact generation.
Run deterministic tests using curated blueprints (no LLM required):
```bash
./scripts/e2e-eval.sh
```

This creates a scratch org, deploys code, and validates artifact generation against golden test definitions in goldens/tests.json. Tests include fuzzy checks for required content and strict diffs against expected files when present.
- Expand Coverage: add more test cases to goldens/tests.json for new blueprints.
- Strict Snapshots: for stable templates, add expected files under goldens/<test_name>/expected/ to enable exact diffs.
- CI Integration: the `eval` workflow runs on every PR/push, failing builds on regressions.
For exercising LLM ranking and synthesis:
- Configure API Key: add `OPENAI_API_KEY` as a GitHub secret.
- Register Client: the `eval-llm` workflow registers an OpenAI client in the scratch org.
- Run Fuzzy Tests: uses the same harness but with live LLM calls (tests stay fuzzy because LLM output varies).
Review docs/evaluation.md for detailed setup and goldens/tests.json for test definitions.
- Run `GenerationBenchmark.summarize()` to compare current generation output with golden blueprints.
- Golden reference assets live under tests/generation/golden/ so the expected behavior stays visible in code review.
- FLS/Sharing Errors: generated actions include guardrails but may fail if your user lacks FLS on target fields. Assign `DynamicAction_Permissions` or ensure your profile has read/write access to Opportunity/Case fields.
- Missing Objects: if the Opportunity or Case objects aren't available, use config/project-scratch-def.json, which enables Sales Cloud features, or modify `includeObjects` in recommendation calls to use available objects.
- LLM Callouts Blocked: configure the `LLM_Provider` Named Credential with your OpenAI API key (add an `Authorization: Bearer YOUR_KEY` header), then run scripts/register-llm.apex to register the OpenAI client.
- Deployment writes no files: ensure scripts/generate.apex completed successfully and that Node.js is installed for scripts/deploy-artifacts.js.
- Permission errors: run `sf org assign permset -n DynamicAction_Permissions -o <alias>` after deploying metadata.
- Scripts not executable: run `chmod +x scripts/*.sh` on Unix systems if scripts fail with "permission denied".
- Ensure your permission set (e.g., `DynamicAction_Permissions`) grants Apex Class Access to `SchemaDiscoverInvoke`, `DynamicActionInvoke`, and supporting pipeline classes.
- Add field-level permissions for any objects/fields the generated actions will update, or rely on guardrail enforcement to block unsafe access.
- If an action is missing from Flow or Copilot, confirm the running user holds the permission set and that the class status is Active.
- After deploying metadata, assign the permission set to the automation user: `sf org assign permset -n DynamicAction_Permissions -o <alias>`
- Flow: drop the Data-Aware: Discover Schema action followed by Dynamic Action: Recommend+Generate into an autolaunched or screen flow; bind the outputs to screens or downstream invocations. A minimal canvas looks like:
  1. Start → add an Action element, pick Data-Aware: Discover Schema, and pass any CSV filters through input variables.
  2. Store `SchemaDiscoverInvoke`'s `snapshotJson` in a text variable (for logs, screens, or downstream parsing).
  3. Add a second Action element (Dynamic Action: Recommend+Generate) and map its `includeObjectsCsv` and `goal` inputs. Optionally set `offloadArtifacts = true` for large outputs.
  4. Route the response to a Screen or an Apex-defined subflow to present `message`, `planJson`, and artifact info.
  5. Optionally add a Decision element that checks `ok == true` before continuing to deployment or notifications.
- Apex/Agent: Call the two invocable classes back-to-back to simulate agent-to-agent collaboration or to seed golden scenarios.
- Einstein Copilot: register both invocable actions as tools with descriptive prompts and example inputs so Copilot can route intents appropriately:
  1. In Copilot Builder, create a Tool → Action and select the Apex action for `DynamicActionInvoke`.
  2. Set a description such as "Generate & deploy a Salesforce action from a business goal; returns Apex artifacts or an offload reference."
  3. Provide example input JSON (e.g., `{ "goal": "After a case closes, create a follow-up task", "includeObjectsCsv": "Case,Task" }`) so the intent classifier learns when to call it.
  4. Add a second tool for `SchemaDiscoverInvoke` with guidance like "Summarize schema for objects a dynamic action might need."
  5. Test in Preview: ask Copilot to "generate a closed-won automation" and confirm it invokes schema discovery before dynamic action generation.
- Check scratch org features: `sf org display -o <alias>` should show Sales Cloud enabled.
- Verify permissions: `sf org assign permset -n DynamicAction_Permissions -o <alias>`
Continuous Integration
- `e2e.yml` runs the full recommend → synthesize → deploy → test loop on PRs from this repo and on pushes to `main`.
- Secrets required (choose one auth method):
  - SFDX URL: add `SFDX_AUTH_URL` under GitHub → Settings → Secrets → Actions. Locally, retrieve it with `sf org display --verbose -o <DevHubAlias>` and copy "Sfdx Auth Url".
  - JWT: add `SF_CONSUMER_KEY`, `SF_JWT_KEY` (private key PEM), and `SF_USERNAME` (Dev Hub username).
- The workflow never exposes secrets to forked PRs; it runs on repo PRs, pushes to `main`, and `workflow_dispatch`.
CLI helper for Step‑2
- Run a single scenario: `npm run recommend --org=<alias> -- --scenario 2`
- Run all scenarios: `npm run recommend --org=<alias> -- --all`
Local npm aliases
- Generate artifacts: `npm run generate --org=<alias>` (writes .tmp/generate.json)
- Deploy generated artifacts: `npm run deploy:artifacts --org=<alias>`
- Run Apex tests: `npm run tests --org=<alias>`
- Full E2E loop: `npm run e2e --org=<alias>`
- Chain generate → deploy → tests: `npm run local:loop --org=<alias>`
- Test basic generation: `sf apex run -f scripts/generate.apex -o <alias>`
- Check logs: add `System.debug()` statements and monitor with `sf apex tail log -o <alias>`
- Fork and create a feature branch.
- Write Apex tests for new functionality and execute them in a scratch org.
- Run linting/formatting as needed (Apex code style matches Salesforce defaults).
- Open a pull request referencing the Jira or work item.
Issues and feature ideas are tracked in docs/roadmap.md. Feel free to suggest enhancements there before submitting a PR.
- Invoke `SchemaDiscoverInvoke.run` (Flow, anonymous Apex, or a Copilot tool) to capture a scoped schema snapshot.
- Invoke `DynamicActionInvoke.run` with a goal and optional object filters to generate Apex artifacts, tests, and metadata.
- Inspect the response for plan JSON, rationale, and artifact content/offload references; optionally persist the snapshot for downstream reasoning.
- Deploy generated classes (if not already) and execute the emitted tests to validate the action.
- Close with the narrative: “Two cooperating agents — one understands the data, the other takes dynamic action — communicating through a standard Invocable interface so Flow, Copilot, or custom agents can orchestrate new logic on demand.”
Released under the MIT License.
Sample pipeline result payload:

```json
{
  "plan": {
    "goal": "Update opportunity stage to Closed Won",
    "actions": [
      { "name": "UpdateOpportunityStage", "targetSObject": "Opportunity" }
    ]
  },
  "artifacts": {
    "force-app/main/default/classes/DynamicAction_UpdateOpportunityStage.cls": "// ... Apex implementation ...",
    "force-app/main/default/classes/DynamicAction_UpdateOpportunityStage.cls-meta.xml": "<?xml version=\"1.0\" ...>"
  }
}
```