Skip to content

[codex] M2 Save Lab: hardening + UX completion (P0-P2)#46

Merged
Prekzursil merged 3 commits intomainfrom
feature/m2-save-lab
Feb 18, 2026
Merged

[codex] M2 Save Lab: hardening + UX completion (P0-P2)#46
Prekzursil merged 3 commits intomainfrom
feature/m2-save-lab

Conversation

@Prekzursil
Copy link
Owner

@Prekzursil Prekzursil commented Feb 17, 2026

User description

Summary

This PR delivers the integrated M2 Save Lab hardening + UX completion wave (P0+P1+P2) on top of the existing Save Lab foundation.

Primary outcomes:

  • Typed schema-path patch packs are validated more strictly and load robustly from disk.
  • Preview/apply behavior is corrected for target-profile compatibility and stale selector resilience.
  • Save Editor UX now supports strict/non-strict apply mode directly in the UI.
  • Save editing type coverage is expanded (float, double, ascii) for deterministic offline workflows.
  • CLI wrappers are hardened and validated with end-to-end export/apply smoke evidence.

Refs #8

Problem and User Impact

Before this change set:

  • Contract validation gaps allowed malformed packs to fail late and opaquely.
  • Preview could validate against the wrong profile context.
  • Applies could fail if fieldPath drifted even when fieldId was correct.
  • UI forced strict hash mode with no operator control.
  • Save edit support for floating/text values was incomplete.
  • Script wrappers used nullable logger wiring that could fail depending on runtime assembly shape.

Effect on users: patch-pack workflows were less predictable and less recoverable, especially when moving artifacts across save variants.

Root Cause

  • Validation and schema intent were partially misaligned (newValue requirement and raw JSON checks).
  • Preview contract/call-site semantics did not enforce explicit target profile usage.
  • Apply selector routing had a single-point dependency on fieldPath freshness.
  • UI mode wiring hardcoded strict behavior.
  • Codec/edit parser coverage lagged model/schema capability.
  • Wrapper logger binding assumed a property-only NullLogger<T>.Instance shape.

What Changed

P0 correctness

  • Removed machine-specific save root override in app startup.
  • Strengthened patch-pack contract checks in SavePatchPackService:
    • metadata.schemaVersion == "1.0"
    • metadata.createdAtUtc required
    • operation kind must be SetValue
    • operation offset >= 0
    • operations[].newValue required (oldValue remains optional)
  • Added raw JSON contract validation and case-insensitive property handling for disk roundtrip robustness.
  • Updated tools/schemas/save-patch-pack.schema.json and validator script to require newValue.

P1 service/API behavior

  • Updated ISavePatchPackService.PreviewApplyAsync(...) to require targetProfileId.
  • Updated all call sites to pass selected target profile.
  • Added preview warning surfacing for FieldId/FieldPath drift.
  • Implemented apply fallback: try FieldPath, then retry FieldId on selector mismatch.

P1 UX completion

  • Added Save Editor checkbox: Strict (require exact source hash) (default ON).
  • Apply path now obeys IsStrictPatchApply.
  • Preview/compatibility panel now shows strict-gate implications and source hash state explicitly.

P2 type and tooling hardening

  • Expanded codec edit support for float, double, and ascii writes.
  • Expanded ViewModel primitive parsing for invariant float/double support.
  • Hardened wrapper scripts to resolve NullLogger<T>.Instance across field/property runtime variants.

Epic #8 / M2 Acceptance Mapping

  • M2/S1 Patch-pack contract + schema: implemented and validated.
  • M2/S2 Export/import + compatibility checks: strengthened; preview/profile semantics fixed.
  • M2/S3 Atomic apply + rollback: fallback selector resilience added while preserving rollback semantics.
  • M2/S4 Full Save Editor integration: strict toggle and improved diagnostics delivered.
  • M2/S5 Deterministic coverage: new regression tests for JSON roundtrip, selector fallback, strict-mode behavior, and expanded value types.

Verification Evidence

Build and deterministic tests

  • dotnet build SwfocTrainer.sln -c Release --no-restore
  • dotnet test tests/SwfocTrainer.Tests/SwfocTrainer.Tests.csproj -c Release --no-build --filter "FullyQualifiedName~SavePatch"
    • Result: Passed 17, Failed 0
  • dotnet test tests/SwfocTrainer.Tests/SwfocTrainer.Tests.csproj -c Release --no-build --filter "FullyQualifiedName!~SwfocTrainer.Tests.Profiles.Live&FullyQualifiedName!~RuntimeAttachSmokeTests"
    • Result: Passed 85, Failed 0

Schema/tooling checks

  • powershell.exe -NoProfile -ExecutionPolicy Bypass -Command "& '.\\tools\\validate-save-patch-pack.ps1' -PatchPackPath 'tools/fixtures/save_patch_pack_sample.json' -SchemaPath 'tools/schemas/save-patch-pack.schema.json' -Strict"
    • Result: validation passed

Wrapper smoke (offline synthetic saves)

  • tools/export-save-patch-pack.ps1 executed successfully with validation pass
  • tools/apply-save-patch-pack.ps1 executed with Classification=Applied on strict hash match
  • Backup and receipt artifacts emitted under TestResults/savepatch-smoke/

Notes

  • Scope remains offline/deterministic Save Lab workflow (no live game process required for this PR’s acceptance).
  • Existing broad XML-doc warnings in untouched surfaces are unchanged by this PR.

PR Type

Enhancement, Tests, Documentation


Description

M2 Save Lab hardening and UX completion wave (P0-P2)

Core patch-pack workflow implementation:

  • Strict schema-path patch-pack validation with contract enforcement (newValue required, metadata checks, operation kind validation)

  • Atomic patch apply with backup/receipt generation and rollback capability

  • Fallback selector strategy: try fieldPath, then retry fieldId on mismatch with drift warnings

  • Extended codec support for float, double, and ascii field types with invariant-culture parsing

UI and UX enhancements:

  • Save Editor patch-pack workflow integration with load/export/preview/apply/restore commands

  • Strict mode checkbox (default ON) for source-hash validation control

  • Operation preview and compatibility diagnostic grids in editor

  • Profile-aware preview and apply semantics with explicit target profile requirement

Service hardening:

  • ISavePatchPackService for export, load, compatibility validation, and preview

  • ISavePatchApplyService for atomic apply with strict/non-strict mode control

  • Raw JSON contract validation and case-insensitive property handling for disk roundtrip robustness

  • Hardened PowerShell wrappers with NullLogger<T> instance resolution across field/property variants

Tooling and governance:

  • New validation scripts: validate-save-patch-pack.ps1, validate-repro-bundle.ps1, validate-policy-contracts.ps1

  • New operational scripts: export-save-patch-pack.ps1, apply-save-patch-pack.ps1, collect-mod-repro-bundle.ps1, compare-visual-pack.ps1

  • JSON schemas for patch-pack and repro-bundle contracts

  • Evidence-first operating model with repro bundle classification and launch-context tracking

  • Policy contract enforcement workflow for PR reliability evidence

Documentation and testing:

  • Comprehensive test coverage for patch-pack export, validation, apply, and rollback workflows

  • New runbooks: SAVE_LAB_RUNBOOK.md, LIVE_VALIDATION_RUNBOOK.md, VISUAL_AUDIT_RUNBOOK.md

  • Architecture and profile format documentation for patch-pack pipeline

  • Agent operating contract (AGENTS.md) and AI-native team operating model

  • Windows batch launchers for Debug/Release app and deterministic/live test runners

  • CI workflows for schema validation, policy enforcement, code quality, and duplication detection

Verification:

  • 17 passing deterministic tests for patch-pack operations

  • End-to-end export/apply/restore smoke evidence

  • Schema validation for all patch-pack and repro-bundle artifacts

  • Removed machine-specific save root override for portability


Diagram Walkthrough

flowchart LR
  Load["Load Save File"]
  Edit["Edit Fields"]
  Export["Export Patch Pack"]
  Validate["Validate Schema"]
  Preview["Preview + Compatibility Check"]
  Apply["Apply Atomically"]
  Backup["Generate Backup/Receipt"]
  Restore["Restore from Backup"]
  
  Load --> Edit
  Edit --> Export
  Export --> Validate
  Validate --> Preview
  Preview --> Apply
  Apply --> Backup
  Backup --> Restore
Loading

File Walkthrough

Relevant files
Enhancement
25 files
MainViewModel.cs
Save Lab patch-pack UI integration and workflow commands 

src/SwfocTrainer.App/ViewModels/MainViewModel.cs

  • Added patch-pack workflow properties and commands (SavePatchPackPath,
    IsStrictPatchApply, SavePatchOperations, SavePatchCompatibility)
  • Implemented export, load, preview, apply, and restore backup
    operations for save patch packs
  • Enhanced primitive parsing to support invariant-culture float/double
    with suffix detection
  • Integrated ISavePatchPackService and ISavePatchApplyService
    dependencies
+307/-2 
SavePatchPackService.cs
Patch-pack export, load, and compatibility validation service

src/SwfocTrainer.Saves/Services/SavePatchPackService.cs

  • Implements schema-path patch-pack export/import with strict contract
    validation
  • Validates patch-pack metadata, operations, and raw JSON structure with
    case-insensitive property handling
  • Provides compatibility checking and non-mutating preview of apply
    operations
  • Supports field resolution fallback from fieldPath to fieldId with
    drift warnings
+440/-0 
SavePatchApplyService.cs
Atomic patch apply with backup and rollback pipeline         

src/SwfocTrainer.Saves/Services/SavePatchApplyService.cs

  • Implements atomic patch apply with backup/receipt generation and
    rollback capability
  • Enforces strict source-hash validation when requested; allows
    non-strict mode for drift tolerance
  • Applies field mutations with fallback selector strategy (fieldPath
    then fieldId)
  • Provides backup restoration from receipt metadata or file scan
+349/-0 
SavePatchFieldCodec.cs
Patch field codec with extended type support                         

src/SwfocTrainer.Saves/Internal/SavePatchFieldCodec.cs

  • Provides field value reading/writing for int32, uint32, int64, float,
    double, byte, bool, and ascii types
  • Normalizes patch values from JSON elements to typed scalars with
    invariant culture
  • Computes SHA256 hashes for source/target save validation
  • Handles endianness-aware binary serialization for floating-point types
+106/-0 
BinarySaveCodec.cs
Extended codec support for float, double, and ascii types

src/SwfocTrainer.Saves/Services/BinarySaveCodec.cs

  • Added double field type reading support in schema tree building
  • Expanded field edit support for float, double, and ascii value types
  • Implemented endianness-aware floating-point write helper
+50/-0   
SavePatchModels.cs
Core domain models for save patch-pack workflows                 

src/SwfocTrainer.Core/Models/SavePatchModels.cs

  • Defined SavePatchPack top-level record with metadata, compatibility,
    and operations
  • Defined SavePatchMetadata with schema version, profile, schema ID,
    source hash, and creation timestamp
  • Defined SavePatchOperation with kind, field path/ID, value type,
    old/new values, and offset
  • Defined result records for compatibility checks, previews, apply
    outcomes, and rollback operations
+89/-0   
Enums.cs
Enums for patch operation kinds and apply classifications

src/SwfocTrainer.Core/Models/Enums.cs

  • Added SavePatchOperationKind enum with SetValue variant
  • Added SavePatchApplyClassification enum with Applied,
    ValidationFailed, CompatibilityFailed, WriteFailed, and RolledBack
    states
+20/-0   
ISavePatchPackService.cs
Service contract for patch-pack operations                             

src/SwfocTrainer.Core/Contracts/ISavePatchPackService.cs

  • Defines contract for patch-pack export, load, compatibility
    validation, and preview operations
  • Requires target profile ID for all operations to enforce profile-aware
    semantics
+41/-0   
ISavePatchApplyService.cs
Service contract for patch apply and rollback                       

src/SwfocTrainer.Core/Contracts/ISavePatchApplyService.cs

  • Defines contract for atomic patch apply with strict/non-strict mode
    control
  • Defines contract for backup restoration
+24/-0   
SavePatchOperationViewItem.cs
View model for patch operation preview rows                           

src/SwfocTrainer.App/Models/SavePatchOperationViewItem.cs

  • UI row model for patch-pack operation preview with kind, field
    path/ID, value type, and old/new values
+12/-0   
SavePatchCompatibilityViewItem.cs
View model for compatibility diagnostic rows                         

src/SwfocTrainer.App/Models/SavePatchCompatibilityViewItem.cs

  • UI row model for compatibility diagnostics with severity, code, and
    message
+9/-0     
run-live-validation.ps1
Live validation script with scoped execution and repro-bundle support

tools/run-live-validation.ps1

  • Refactored test execution to support scoped runs (AOTR, ROE, TACTICAL,
    FULL) with run ID tracking
  • Added repro-bundle generation and validation with launch-context
    detection
  • Improved error handling with fatal error propagation and missing
    artifact detection
  • Enhanced summary reporting with structured JSON output and markdown
    templates
+220/-47
collect-mod-repro-bundle.ps1
Repro bundle collection and classification script               

tools/collect-mod-repro-bundle.ps1

  • New script to collect process snapshots, launch context, runtime mode,
    and live test results
  • Generates JSON bundle and markdown summary with classification and
    next-action guidance
  • Detects STEAMMOD IDs from process command lines and determines runtime
    mode from test outcomes
+428/-0 
validate-save-patch-pack.ps1
Patch-pack JSON schema validation script                                 

tools/validate-save-patch-pack.ps1

  • New script to validate patch-pack JSON against schema with strict
    contract checks
  • Validates metadata, compatibility, and operations with required field
    enforcement
  • Supports strict mode to require non-empty operations array
+127/-0 
apply-save-patch-pack.ps1
PowerShell wrapper for atomic save patch apply                     

tools/apply-save-patch-pack.ps1

  • New PowerShell script for applying patch packs to save files
    atomically
  • Loads .NET assemblies and instantiates codec/patch services
  • Resolves NullLogger instance across field/property runtime variants
    for robustness
  • Executes apply with strict mode toggle and outputs JSON result with
    backup/receipt paths
+111/-0 
export-save-patch-pack.ps1
PowerShell wrapper for save patch export                                 

tools/export-save-patch-pack.ps1

  • New PowerShell script for exporting patch packs from original/edited
    save pairs
  • Loads assemblies and instantiates codec/patch services with hardened
    logger resolution
  • Calls export service and validates output against schema before
    writing to disk
  • Integrates with validation script for contract enforcement
+104/-0 
validate-repro-bundle.ps1
Repro bundle schema and semantic validator                             

tools/validate-repro-bundle.ps1

  • New validation script for repro bundle JSON artifacts with
    strict/lenient modes
  • Enforces required fields, enum constraints, and semantic rules (e.g.,
    classification consistency)
  • Validates nested structures (liveTests, processSnapshot,
    launchContext, diagnostics)
  • Provides detailed error reporting for bundle contract violations
+133/-0 
compare-visual-pack.ps1
Visual pack baseline comparison and reporting                       

tools/compare-visual-pack.ps1

  • New script for comparing baseline vs candidate visual pack directories
  • Computes SHA256 hashes for image files and detects changes, new files,
    and missing files
  • Outputs structured JSON report with status (baseline_missing, no_diff,
    diff_detected)
  • Supports optional baseline absence for initial capture workflows
+110/-0 
validate-policy-contracts.ps1
Policy contract validation for governance                               

tools/validate-policy-contracts.ps1

  • New script enforcing repository policy contracts (AGENTS.md files, PR
    templates, issue templates)
  • Validates required headers and field presence in documentation and
    GitHub templates
  • Ensures consistency across runtime, tooling, and test agent contracts
  • Fails explicitly when policy violations are detected
+90/-0   
MainWindow.xaml
Save Lab patch-pack UI integration in editor                         

src/SwfocTrainer.App/MainWindow.xaml

  • Added new GroupBox "Save Lab Patch Packs" with comprehensive patch
    workflow UI
  • Integrated patch pack path input, load/export/preview/apply/restore
    buttons
  • Added strict mode checkbox for controlling source hash validation
    behavior
  • Added DataGrids for operation details and compatibility warnings
    display
+70/-1   
launch-app-debug.cmd
Windows Debug app launcher                                                             

launch-app-debug.cmd

  • New Windows batch launcher for Debug configuration
  • Builds Debug if needed, then starts SwfocTrainer.App.exe
  • Provides error handling for missing build artifacts and SDK issues
  • Complements release launcher for developer convenience
+32/-0   
launch-app-release.cmd
Windows Release app launcher                                                         

launch-app-release.cmd

  • New Windows batch launcher for Release configuration
  • Builds Release if needed, then starts SwfocTrainer.App.exe
  • Provides error handling for missing build artifacts and SDK issues
  • Enables quick app launch from repo root without manual build steps
+32/-0   
run-live-tests.cmd
Windows live tests runner                                                               

run-live-tests.cmd

  • New Windows batch script for running live profile tests
  • Filters to live test classes and handles expected skips when SWFOC not
    running
  • Provides clear messaging about prerequisites and exit codes
  • Complements deterministic test runner for full test coverage
+25/-0   
run-deterministic-tests.cmd
Windows deterministic tests runner                                             

run-deterministic-tests.cmd

  • New Windows batch script for running deterministic test suite
  • Excludes live profile tests and runtime attach smoke tests
  • Provides clear success/failure messaging and SDK requirement notes
  • Enables quick validation from repo root without manual filter syntax
+23/-0   
ci.yml
CI workflow schema validation additions                                   

.github/workflows/ci.yml

  • Added repro-bundle schema smoke test validating sample fixture
  • Added save patch-pack schema smoke test validating sample fixture
  • Both tests use strict mode and run after launch-context smoke test
  • Ensures schema contracts remain valid across CI runs
+10/-0   
Configuration changes
1 files
App.xaml.cs
Service registration and save root hardening                         

src/SwfocTrainer.App/App.xaml.cs

  • Removed machine-specific DefaultSaveRootPath override from SaveOptions
  • Registered ISavePatchPackService and ISavePatchApplyService in
    dependency injection container
+3/-2     
Tests
3 files
SavePatchPackServiceTests.cs
Comprehensive tests for patch-pack export and validation 

tests/SwfocTrainer.Tests/Saves/SavePatchPackServiceTests.cs

  • Tests deterministic export of changed fields with correct metadata and
    operation ordering
  • Tests compatibility validation for schema/profile mismatches and
    source-hash warnings
  • Tests contract validation rejection for missing newValue and invalid
    metadata
  • Tests JSON roundtrip with type normalization and field-path drift
    warnings
+256/-0 
SavePatchApplyServiceTests.cs
Comprehensive tests for patch apply and rollback workflows

tests/SwfocTrainer.Tests/Saves/SavePatchApplyServiceTests.cs

  • Tests atomic apply with backup/receipt generation and rollback
    restoration
  • Tests compatibility failure on profile mismatch and validation failure
    rollback
  • Tests strict source-hash blocking and non-strict mode allowance
  • Tests fallback selector strategy when fieldPath is stale
+213/-0 
SaveCodecTests.cs
Extended type support tests for codec                                       

tests/SwfocTrainer.Tests/Saves/SaveCodecTests.cs

  • Added test for float, double, and ascii field editing with roundtrip
    validation
  • Verifies codec correctly reads/writes extended types with invariant
    precision
+73/-0   
Documentation
11 files
TODO.md
M2 Save Lab completion and evidence tracking                         

TODO.md

  • Added evidence rule for bundle artifacts
    (TestResults/runs//repro-bundle.json)
  • Documented reliability rule requiring runId, classification, and
    launch reasonCode for runtime/mod tasks
  • Marked M2 Save Lab tasks complete with evidence links (tests, manual
    runs, schema validation)
  • Updated M1 carryover and added M2 hardening wave completion notes
+25/-10 
LIVE_VALIDATION_RUNBOOK.md
Live validation runbook for evidence collection                   

docs/LIVE_VALIDATION_RUNBOOK.md

  • Updated runbook title and scope to evidence-first methodology
  • Added scope-specific run examples (AOTR, ROE, TACTICAL, FULL)
  • Documented repro bundle validation command and artifact contract
  • Clarified closure criteria requiring valid repro-bundle.json linked in
    issue evidence
+52/-17 
TEST_PLAN.md
Test plan expansion for M2 Save Lab                                           

docs/TEST_PLAN.md

  • Added extended edit type coverage (float, double, ascii) to
    SaveCodecTests
  • Documented SavePatchPackServiceTests and SavePatchApplyServiceTests
    with specific coverage areas
  • Added tooling contract tests for launch-context, repro bundle, and
    save patch-pack schema validation
  • Expanded manual runtime checks to include patch pack
    export/apply/restore workflows
+39/-11 
SAVE_LAB_RUNBOOK.md
Save Lab operator workflow documentation                                 

docs/SAVE_LAB_RUNBOOK.md

  • New comprehensive runbook for end-to-end M2 Save Lab workflow
  • Covers in-app flow: load, edit, export, preview, apply with strict
    toggle, restore
  • Documents tooling commands for validation, export, and apply with
    parameter examples
  • Explains failure semantics and backup/receipt artifact locations
+83/-0   
README.md
README updates for M2 Save Lab features                                   

README.md

  • Added Save Lab patch-pack workflow to feature list with schema-path
    export/import, compatibility preview, strict toggle, atomic apply, and
    rollback
  • Documented quick Windows launchers (launch-app-release.cmd,
    launch-app-debug.cmd, test runners)
  • Added repro bundle and save patch-pack tooling command sections with
    examples
  • Updated roadmap reference to include SAVE_LAB_RUNBOOK.md
+31/-0   
AGENTS.md
Repository agent operating contract                                           

AGENTS.md

  • New repository-level agent operating contract defining evidence-first
    workflow
  • Establishes required evidence types: deterministic tests, repro
    bundles, profile IDs, reason codes
  • Defines reliability loop: intake → live validation → classification →
    fix → evidence → closure
  • Specifies safety rules: no blind addresses, no silent success,
    explicit profile compatibility
+38/-0   
ARCHITECTURE.md
Architecture documentation for M2 Save Lab                             

docs/ARCHITECTURE.md

  • Added Save Lab patch-pack pipeline section describing M2 workflow
    stages
  • Documents load → export → validate/preview → apply atomically →
    rollback sequence
  • Explains backup/receipt artifact generation and schema validation
    before write
  • Updated SwfocTrainer.Saves project description to include patch-pack
    operations
+15/-1   
AI_NATIVE_TEAM_OPERATING_MODEL.md
AI-native team operating model for reliability                     

docs/AI_NATIVE_TEAM_OPERATING_MODEL.md

  • New document defining AI-native team operating model for SWFOC
    runtime/mod debugging
  • Establishes evidence-first loop: intake → live validation →
    classification → fix → evidence → closure
  • Defines decision rules requiring deterministic and live evidence for
    code changes
  • Specifies PR readiness criteria: affected profiles, reason codes, test
    output, repro bundle
+40/-0   
PROFILE_FORMAT.md
Save patch-pack contract documentation                                     

docs/PROFILE_FORMAT.md

  • Added Save Patch-Pack Contract section documenting schema-path JSON
    artifact structure
  • Defines metadata, compatibility, and operations fields with v1 scope
    constraints
  • Specifies newValue requirement, optional oldValue, and typed field
    operations only
  • Documents strict-mode apply behavior and automatic backup/receipt
    generation
+33/-0   
VISUAL_AUDIT_RUNBOOK.md
Visual audit runbook for screenshot comparison                     

docs/VISUAL_AUDIT_RUNBOOK.md

  • New runbook for visual pack naming and comparison workflow
  • Defines visual pack contract: TestResults/runs//visual-pack//.png
  • Documents baseline layout under artifacts/visual-baselines/
  • Describes comparison flow and optional Applitools integration
+33/-0   
EXTERNAL_TOOLS_SETUP.md
External tools setup and configuration                                     

docs/EXTERNAL_TOOLS_SETUP.md

  • New document for configuring optional external quality tools
  • Documents SonarCloud setup with project key and token configuration
  • Documents optional Applitools visual audit integration
  • Documents jscpd duplication detection configuration and workflow
+28/-0   
Configuration
11 files
repro-bundle.schema.json
Repro bundle JSON schema definition                                           

tools/schemas/repro-bundle.schema.json

  • New JSON Schema v2020-12 for repro bundle contract validation
  • Defines required fields: schemaVersion, runId, scope, processSnapshot,
    launchContext, runtimeMode, liveTests, diagnostics, classification
  • Enforces enum constraints for scope (AOTR, ROE, TACTICAL, FULL) and
    classification values
  • Supports nested object validation for process snapshots, test
    outcomes, and diagnostics
+103/-0 
bug.yml
Bug template with repro bundle evidence fields                     

.github/ISSUE_TEMPLATE/bug.yml

  • Added required fields for runtime/mod issues: run_id,
    repro_bundle_json, repro_bundle_md, classification
  • Added dropdown for bundle classification (passed, skipped, failed,
    blocked_environment, blocked_profile_mismatch)
  • Added launch_context textarea for launchKind, reasonCode, confidence,
    and STEAMMOD/MODPATH data
  • Updated diagnostics field description to include runtimeMode and
    helper readiness details
+47/-2   
save-patch-pack.schema.json
Save patch-pack JSON schema definition                                     

tools/schemas/save-patch-pack.schema.json

  • New JSON Schema v2020-12 for save patch-pack contract validation
  • Defines metadata (schemaVersion 1.0, profileId, schemaId, sourceHash,
    createdAtUtc)
  • Defines compatibility (allowedProfileIds, requiredSchemaId,
    saveBuildHint)
  • Defines operations array with required newValue, optional oldValue,
    and strict field constraints
+53/-0   
visual-audit.yml
Visual audit workflow for baseline comparison                       

.github/workflows/visual-audit.yml

  • New workflow for visual pack comparison triggered on workflow_dispatch
  • Accepts run_id, baseline_dir, and candidate_dir inputs
  • Executes compare-visual-pack.ps1 and uploads visual-compare.json
    report
  • Includes note about optional Applitools integration when API key is
    configured
+72/-0   
policy-contract.yml
Policy contract enforcement workflow                                         

.github/workflows/policy-contract.yml

  • New workflow validating policy contracts on push/PR to main
  • Runs validate-policy-contracts.ps1 to enforce AGENTS.md and template
    presence
  • Enforces PR evidence policy for runtime/tooling/test changes (repro
    bundle + classification)
  • Fails PR if sensitive surfaces touched without required reliability
    evidence fields
+67/-0   
calibration.yml
Calibration template with bundle evidence fields                 

.github/ISSUE_TEMPLATE/calibration.yml

  • Added required fields: run_id, bundle_json, bundle_md for repro bundle
    evidence
  • Added runtime_mode textarea for
    runtimeModeHint/runtimeModeEffective/runtimeModeReasonCode
  • Made artifacts field required for calibration report submission
  • Aligned with evidence-first operating model for reliability tracking
+30/-0   
repro_bundle_sample.json
Sample repro bundle fixture                                                           

tools/fixtures/repro_bundle_sample.json

  • New sample repro bundle fixture for schema validation testing
  • Demonstrates complete bundle structure with AOTR profile, tactical
    mode, and passed test outcome
  • Includes process snapshot, launch context, runtime mode, and
    diagnostics sections
  • Serves as reference for tooling contract and CI smoke tests
+44/-0   
save_patch_pack_sample.json
Sample save patch-pack fixture                                                     

tools/fixtures/save_patch_pack_sample.json

  • New sample save patch-pack fixture for schema validation testing
  • Demonstrates metadata (schemaVersion 1.0, sourceHash, createdAtUtc),
    compatibility, and operations
  • Includes two SetValue operations with fieldPath, fieldId, valueType,
    oldValue, newValue, offset
  • Serves as reference for patch-pack contract and CI smoke tests
+36/-0   
sonarcloud.yml
SonarCloud integration workflow                                                   

.github/workflows/sonarcloud.yml

  • New workflow for SonarCloud code quality scanning on push/PR to main
  • Restores, builds Release configuration, then runs SonarCloud scan
  • Requires SONAR_TOKEN secret; skips if not configured
  • Provides continuous code quality metrics and duplication detection
+39/-0   
duplication-check.yml
Code duplication detection workflow                                           

.github/workflows/duplication-check.yml

  • New workflow for jscpd duplication detection on push/PR to main
  • Runs on ubuntu-latest with Node 20 and jscpd v4
  • Uses .jscpd.json configuration and uploads report artifact
  • Provides continuous code duplication metrics
+35/-0   
pull_request_template.md
PR template with reliability evidence fields                         

.github/pull_request_template.md

  • Added "Affected Profiles" section with checkboxes for base_sweaw,
    base_swfoc, AOTR, ROE, custom
  • Added "Reliability Evidence" section requiring repro bundle paths,
    reason codes, runtime mode, classification
  • Added repro bundle validation command to validation evidence checklist
  • Aligns PR template with evidence-first policy enforcement
+17/-0   
Additional files
6 files
CONTRIBUTING.md +1/-0     
ROADMAP_WORKFLOW.md +8/-0     
sonar-project.properties +10/-0   
AGENTS.md +18/-0   
AGENTS.md +17/-0   
AGENTS.md +17/-0   

Summary by CodeRabbit

  • New Features

    • Added Save Lab patch-pack system for exporting, previewing, applying, and rolling back save modifications
    • Added live validation with reproducible bundle collection and diagnostics
    • Added visual audit workflow for comparing candidate vs baseline visuals
    • Added Windows launcher scripts for quick app build/start and test execution
  • CI/Workflows

    • Added code duplication detection workflow
    • Added SonarCloud quality analysis integration
    • Added patch-pack and repro-bundle schema validation checks
  • Documentation

    • Added runbooks and configuration guides for Save Lab, live validation, and visual audits
    • Added tooling and external setup documentation
    • Updated contribution guidelines with reliability evidence requirements

Policy Evidence Tokens

Repro bundle JSON: tools/fixtures/repro_bundle_sample.json
Classification: justified skip (save/editor + tooling scope in this PR; live runtime evidence tracked separately in #19/#34)

@devloai
Copy link

devloai bot commented Feb 17, 2026

Unable to trigger custom agent "Code Reviewer". You have run out of credits 😔
Please upgrade your plan or buy additional credits from the subscription page.

@coderabbitai
Copy link

coderabbitai bot commented Feb 17, 2026

Warning

Rate limit exceeded

@Prekzursil has exceeded the limit for the number of commits that can be reviewed per hour. Please wait 20 minutes and 17 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📝 Walkthrough

Walkthrough

A comprehensive enhancement adding save patch-pack functionality (export, compatibility validation, atomic apply with backup/rollback), repro bundle collection and validation for runtime issues, GitHub Actions workflows for SonarCloud/duplication analysis/policy enforcement/visual audits, evidence-driven operating model documentation, Windows launch scripts, and schema definitions backing these features.

Changes

Cohort / File(s) Summary
Save Patch Pack Core Services
src/SwfocTrainer.Core/Contracts/ISavePatchPackService.cs, src/SwfocTrainer.Core/Contracts/ISavePatchApplyService.cs, src/SwfocTrainer.Core/Models/SavePatchModels.cs, src/SwfocTrainer.Core/Models/Enums.cs, src/SwfocTrainer.Saves/Services/SavePatchPackService.cs, src/SwfocTrainer.Saves/Services/SavePatchApplyService.cs, src/SwfocTrainer.Saves/Internal/SavePatchFieldCodec.cs
Introduces interfaces and implementations for exporting, loading, validating, and previewing save patch packs with atomic apply, backup/receipt generation, and rollback support; adds enums and immutable data models for operations, compatibility, and apply results.
Save Codec Enhancements
src/SwfocTrainer.Saves/Services/BinarySaveCodec.cs
Extends binary codec to support float, double, and ASCII field types; adds endian-aware floating-point write operations and ASCII encoding/decoding for editing operations.
UI Integration
src/SwfocTrainer.App/MainWindow.xaml, src/SwfocTrainer.App/ViewModels/MainViewModel.cs, src/SwfocTrainer.App/App.xaml.cs, src/SwfocTrainer.App/Models/SavePatchOperationViewItem.cs, src/SwfocTrainer.App/Models/SavePatchCompatibilityViewItem.cs
Adds Save Lab Patch Packs UI group with browse/load/export/preview/apply/restore buttons, DataGrids for operations and compatibility, command bindings, and observable collections; registers new services; introduces UI data models.
Deterministic & Patch Pack Tests
tests/SwfocTrainer.Tests/Saves/SaveCodecTests.cs, tests/SwfocTrainer.Tests/Saves/SavePatchPackServiceTests.cs, tests/SwfocTrainer.Tests/Saves/SavePatchApplyServiceTests.cs
Adds comprehensive unit tests for codec float/double/ASCII editing, patch pack export/compatibility/validation/loading, and atomic apply with backup/rollback/strict-mode/field-fallback scenarios; includes helper utilities for synthetic data generation and hash computation.
GitHub Actions Workflows
.github/workflows/sonarcloud.yml, .github/workflows/duplication-check.yml, .github/workflows/policy-contract.yml, .github/workflows/visual-audit.yml, .github/workflows/ci.yml
Introduces new workflows: SonarCloud analysis with .NET build, jscpd duplication detection, policy-contract validation (checking sensitive-area file touches and PR evidence requirements), manual visual-audit with baseline comparison, and CI enhancements for repro-bundle and patch-pack schema smoke tests.
GitHub Issue & PR Templates
.github/ISSUE_TEMPLATE/bug.yml, .github/ISSUE_TEMPLATE/calibration.yml, .github/pull_request_template.md
Extends bug report with run_id, repro_bundle_json/md, classification dropdown, launch_context, and diagnostics fields; adds calibration form inputs for run_id/bundle_json/md/runtime_mode; adds PR template sections for affected profiles and reliability evidence.
Evidence & Operating Model Documentation
docs/AI_NATIVE_TEAM_OPERATING_MODEL.md, AGENTS.md, src/SwfocTrainer.Runtime/AGENTS.md, tools/AGENTS.md, tests/AGENTS.md, docs/EXTERNAL_TOOLS_SETUP.md, docs/ROADMAP_WORKFLOW.md, CONTRIBUTING.md
Introduces AI-native operating contracts specifying required evidence (repro bundles, classification, launch reason codes), reliability rules, PR readiness criteria, and setup/guidance for external tooling (SonarCloud, Applitools, jscpd) across runtime, tools, and test agents.
Runbooks & Guides
docs/LIVE_VALIDATION_RUNBOOK.md, docs/SAVE_LAB_RUNBOOK.md, docs/VISUAL_AUDIT_RUNBOOK.md, docs/PROFILE_FORMAT.md, docs/ARCHITECTURE.md, docs/TEST_PLAN.md, README.md, TODO.md
Expands documentation with Save Lab patch-pack workflow, visual audit naming/comparison, updated live validation with repro-bundle contract, Save patch-pack and spawn preset schema details, M2 patch-pack pipeline in architecture, test plan updates for SavePatchPack/SelectedUnitTransaction/schema contract tests, and roadmap refocus toward M2 Save Lab work.
Repro Bundle Tooling
tools/collect-mod-repro-bundle.ps1, tools/run-live-validation.ps1, tools/schemas/repro-bundle.schema.json, tools/fixtures/repro_bundle_sample.json, tools/validate-repro-bundle.ps1
Adds comprehensive repro bundle collection (process snapshot, launch context detection, live test mapping, runtime mode hints, diagnostics classification), validation against schema, and integration into live validation script with scope-based test execution (AOTR/ROE/TACTICAL/FULL) and conditional emission.
Save Patch Pack Tooling
tools/export-save-patch-pack.ps1, tools/apply-save-patch-pack.ps1, tools/validate-save-patch-pack.ps1, tools/schemas/save-patch-pack.schema.json, tools/fixtures/save_patch_pack_sample.json
Introduces PowerShell utilities for exporting patch packs from original/edited save pairs, applying patches atomically with strict validation, and schema validation; includes JSON schema and sample fixture defining metadata, compatibility, and operation structure.
Visual Audit Tooling
tools/compare-visual-pack.ps1, .github/workflows/visual-audit.yml, docs/VISUAL_AUDIT_RUNBOOK.md
Adds PowerShell script to compare baseline vs. candidate visual packs with SHA256-based change detection and JSON report output; GitHub Actions workflow for manual triggering with baseline/candidate directory resolution; runbook documenting pack naming and comparison flow.
Configuration & Build Helpers
sonar-project.properties, .jscpd.json, launch-app-debug.cmd, launch-app-release.cmd, run-deterministic-tests.cmd, run-live-tests.cmd
Adds SonarQube project config, jscpd duplication detection config (C#, PowerShell, JSON, YAML, Markdown, JavaScript, Python formats), and Windows batch scripts for building/running app in debug/release modes and executing test suites with proper error handling and exit codes.

Sequence Diagram

sequenceDiagram
    participant UI as Save Editor UI
    participant VM as MainViewModel
    participant PPS as SavePatchPackService
    participant SPA as SavePatchApplyService
    participant SC as SaveCodec
    participant FS as File System

    UI->>VM: ExportPatchPack
    activate VM
    VM->>PPS: ExportAsync(original, edited, profile)
    activate PPS
    PPS->>PPS: Diff documents
    PPS->>PPS: Build operations for changed fields
    PPS->>PPS: Generate metadata & compatibility
    PPS-->>VM: SavePatchPack
    deactivate PPS
    VM->>FS: Write patch to JSON
    VM->>VM: SetLoadedPatchPack

    UI->>VM: PreviewPatchPack
    activate VM
    VM->>PPS: PreviewApplyAsync(pack, target, profile)
    activate PPS
    PPS->>PPS: Validate compatibility
    PPS->>PPS: Generate operations preview
    PPS-->>VM: SavePatchPreview (Errors, Warnings, Operations)
    deactivate PPS
    VM->>UI: Populate SavePatchOperations & Compatibility DataGrids
    deactivate VM

    UI->>VM: ApplyPatchPack(strict=true)
    activate VM
    VM->>SPA: ApplyAsync(target, pack, profile, strict)
    activate SPA
    SPA->>SC: Load target document
    activate SC
    SC-->>SPA: SaveDocument
    deactivate SC
    SPA->>PPS: ValidateCompatibilityAsync
    activate PPS
    PPS-->>SPA: CompatibilityResult
    deactivate PPS
    alt Strict Mode
        SPA->>SPA: Check source hash match
    end
    SPA->>SPA: Create backup snapshot
    SPA->>SPA: Apply each operation in order
    SPA->>SC: Validate edited document
    activate SC
    SC-->>SPA: Validation result
    deactivate SC
    SPA->>FS: Write backup
    SPA->>FS: Write temp file
    SPA->>FS: Move to target
    SPA->>FS: Write receipt JSON
    SPA-->>VM: SavePatchApplyResult (Applied, Paths)
    deactivate SPA
    VM->>UI: Update summary, reload save
    deactivate VM

    UI->>VM: RestoreBackup
    activate VM
    VM->>SPA: RestoreLastBackupAsync(target)
    activate SPA
    SPA->>FS: Resolve latest backup path
    SPA->>FS: Read backup bytes
    SPA->>FS: Restore to target
    SPA-->>VM: SaveRollbackResult (Restored)
    deactivate SPA
    VM->>UI: Reload save
    deactivate VM
Loading

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related issues

  • [Ops][Slice] Visual audit pack workflow #43: PR directly implements the visual audit pack workflow with tools/compare-visual-pack.ps1, .github/workflows/visual-audit.yml, and docs/VISUAL_AUDIT_RUNBOOK.md described in the issue.

Poem

🐰 A bunny's ode to patch-pack grace
With backups safe in every place,
And repro bundles holding fast,
We validate and make things last!
From export to restore with care—
Evidence-driven, all we share! 🐇✨

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

Check name Status Explanation Resolution
Docstring Coverage ⚠️ Warning Docstring coverage is 7.32% which is insufficient. The required threshold is 80.00%. Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (2 passed)
Check name Status Explanation
Title check ✅ Passed The PR title '[codex] M2 Save Lab: hardening + UX completion (P0-P2)' directly summarizes the main change—M2 Save Lab hardening and UX completion across priority levels—aligning with the comprehensive changes throughout the codebase.
Description check ✅ Passed The PR description is comprehensive and well-structured, covering summary, problem statement, root causes, changes across P0/P1/P2 priorities, and verification evidence with test results.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.

✨ Finishing touches
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch feature/m2-save-lab

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

❤️ Share

Comment @coderabbitai help to get the list of available commands and usage tips.

@github-actions github-actions bot added area:runtime Runtime attach, memory actions, symbol resolution area:app WPF app/UI/ViewModel area:saves Save codec and save editing area:ci CI/CD and automation area:tooling Scripts and developer tooling area:docs Documentation/process related work. labels Feb 17, 2026
@Prekzursil Prekzursil marked this pull request as ready for review February 17, 2026 21:49
@Prekzursil
Copy link
Owner Author

M2 Save Lab verification evidence (offline/deterministic):

  • Build:

    • dotnet build SwfocTrainer.sln -c Release --no-restore
    • Result: success (0 errors)
  • SavePatch-focused tests:

    • dotnet test tests/SwfocTrainer.Tests/SwfocTrainer.Tests.csproj -c Release --no-build --filter "FullyQualifiedName~SavePatch"
    • Result: Passed 17, Failed 0
  • Deterministic non-live suite:

    • dotnet test tests/SwfocTrainer.Tests/SwfocTrainer.Tests.csproj -c Release --no-build --filter "FullyQualifiedName!~SwfocTrainer.Tests.Profiles.Live&FullyQualifiedName!~RuntimeAttachSmokeTests"
    • Result: Passed 85, Failed 0
  • Patch-pack schema validation:

    • powershell.exe -NoProfile -ExecutionPolicy Bypass -Command "& '.\\tools\\validate-save-patch-pack.ps1' -PatchPackPath 'tools/fixtures/save_patch_pack_sample.json' -SchemaPath 'tools/schemas/save-patch-pack.schema.json' -Strict"
    • Result: save patch-pack validation passed
  • Wrapper smoke (synthetic saves, strict apply):

    • pwsh ./tools/export-save-patch-pack.ps1 ...
    • pwsh ./tools/apply-save-patch-pack.ps1 ... -Strict:$true
    • Result: Classification=Applied, Applied=true
    • Artifacts: TestResults/savepatch-smoke/target.sav.bak.<runId>.sav, TestResults/savepatch-smoke/target.sav.apply-receipt.<runId>.json

@codacy-production
Copy link

codacy-production bot commented Feb 17, 2026

Codacy's Analysis Summary

14 new issues (≤ 0 issue)
0 new security issue
211 complexity
14 duplications

Review Pull Request in Codacy →

AI Reviewer available: add the codacy-review label to get contextual insights without leaving GitHub.

@qodo-code-review
Copy link

qodo-code-review bot commented Feb 17, 2026

PR Compliance Guide 🔍

Below is a summary of compliance checks for this PR:

Security Compliance
🟢
No security concerns identified No security vulnerabilities detected by AI analysis. Human verification advised for critical code.
Ticket Compliance
🟡
🎫 #8
🟢 Full-schema save editing supports validation before write.
Diff/patch pack export-import works across compatible save targets.
Rollback snapshot exists before write.
Restore path is documented.
Corpus round-trip checks pass for supported schemas.
Codebase Duplication Compliance
Codebase context is not defined

Follow the guide to enable codebase context checks.

Custom Compliance
🟢
Generic: Meaningful Naming and Self-Documenting Code

Objective: Ensure all identifiers clearly express their purpose and intent, making code
self-documenting

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Security-First Input Validation and Data Handling

Objective: Ensure all data inputs are validated, sanitized, and handled securely to prevent
vulnerabilities

Status: Passed

Learn more about managing compliance generic rules or creating your own custom rules

🔴
Generic: Secure Error Handling

Objective: To prevent the leakage of sensitive system information through error messages while
providing sufficient detail for internal debugging.

Status:
Leaks internal errors: User-consumable result messages directly embed exception messages (and potentially file
paths/internal details) rather than providing a generic user message with details only in
secure logs.

Referred Code
SaveDocument targetDoc;
try
{
    targetDoc = await _saveCodec.LoadAsync(normalizedTargetPath, pack.Metadata.SchemaId, cancellationToken);
}
catch (Exception ex)
{
    return new SavePatchApplyResult(
        SavePatchApplyClassification.CompatibilityFailed,
        Applied: false,
        Message: $"Target load failed: {ex.Message}",
        Failure: new SavePatchApplyFailure("target_load_failed", ex.Message));
}

var compatibility = await _patchPackService.ValidateCompatibilityAsync(pack, targetDoc, targetProfileId, cancellationToken);
if (!compatibility.IsCompatible)
{
    return new SavePatchApplyResult(
        SavePatchApplyClassification.CompatibilityFailed,
        Applied: false,
        Message: $"Compatibility failed: {string.Join(" | ", compatibility.Errors)}",


 ... (clipped 53 lines)

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Comprehensive Audit Trails

Objective: To create a detailed and reliable record of critical system actions for security analysis
and compliance.

Status:
Missing audit context: The patch apply/restore pipeline persists a receipt and logs some failures but does not
record a consistent audit event with actor/user identity and outcome for critical write
operations, so audit-trail completeness cannot be verified from the diff alone.

Referred Code
public async Task<SavePatchApplyResult> ApplyAsync(
    string targetSavePath,
    SavePatchPack pack,
    string targetProfileId,
    bool strict = true,
    CancellationToken cancellationToken = default)
{
    var normalizedTargetPath = NormalizeTargetPath(targetSavePath);
    var runId = DateTimeOffset.UtcNow.ToString("yyyyMMddHHmmssfff");
    var backupPath = $"{normalizedTargetPath}.bak.{runId}.sav";
    var receiptPath = $"{normalizedTargetPath}.apply-receipt.{runId}.json";
    var tempOutputPath = $"{normalizedTargetPath}.tmp.{runId}.sav";

    SaveDocument targetDoc;
    try
    {
        targetDoc = await _saveCodec.LoadAsync(normalizedTargetPath, pack.Metadata.SchemaId, cancellationToken);
    }
    catch (Exception ex)
    {
        return new SavePatchApplyResult(


 ... (clipped 165 lines)

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Robust Error Handling and Edge Case Management

Objective: Ensure comprehensive error handling that provides meaningful context and graceful
degradation

Status:
Swallowed exceptions: Backup resolution silently swallows JSON parse/deserialize failures when reading the
latest receipt, which may hinder diagnosis unless higher-level logging/telemetry exists.

Referred Code
if (!string.IsNullOrWhiteSpace(latestReceiptPath))
{
    try
    {
        var json = await File.ReadAllTextAsync(latestReceiptPath, cancellationToken);
        var receipt = JsonSerializer.Deserialize<SavePatchApplyReceipt>(json, ReceiptJsonOptions);
        if (receipt is not null && !string.IsNullOrWhiteSpace(receipt.BackupPath))
        {
            return receipt.BackupPath;
        }
    }
    catch
    {
        // Fall back to backup file scan.
    }
}

Learn more about managing compliance generic rules or creating your own custom rules

Generic: Secure Logging Practices

Objective: To ensure logs are useful for debugging and auditing without exposing sensitive
information like PII, PHI, or cardholder data.

Status:
Path in logs: Error/debug logs include the full TargetSavePath, which may be sensitive depending on
environment and log collection/retention policies and cannot be assessed from the diff
alone.

Referred Code
_logger.LogError(ex, "Patch apply write path failed for {TargetSavePath}", normalizedTargetPath);
if (File.Exists(tempOutputPath))
{
    File.Delete(tempOutputPath);
}

try
{
    await File.WriteAllBytesAsync(normalizedTargetPath, preApplyBytes, cancellationToken);
    return new SavePatchApplyResult(
        SavePatchApplyClassification.RolledBack,
        Applied: false,
        Message: $"Write failed and rollback restored original bytes: {ex.Message}",
        BackupPath: File.Exists(backupPath) ? backupPath : null,
        ReceiptPath: File.Exists(receiptPath) ? receiptPath : null,
        Failure: new SavePatchApplyFailure("write_failed_rolled_back", ex.Message));
}
catch (Exception rollbackEx)
{
    _logger.LogError(rollbackEx, "Rollback failed after write failure for {TargetSavePath}", normalizedTargetPath);
    return new SavePatchApplyResult(


 ... (clipped 123 lines)

Learn more about managing compliance generic rules or creating your own custom rules

  • Update
Compliance status legend 🟢 - Fully Compliant
🟡 - Partial Compliant
🔴 - Not Compliant
⚪ - Requires Further Human Verification
🏷️ - Compliance label

@qodo-code-review
Copy link

qodo-code-review bot commented Feb 17, 2026

PR Code Suggestions ✨

Explore these optional code suggestions:

CategorySuggestion                                                                                                                                    Impact
Possible issue
Improve backup resolution logic

Improve the backup resolution logic by logging exceptions when a receipt file
fails to parse and verifying that the backup file from the receipt exists before
returning its path.

src/SwfocTrainer.Saves/Services/SavePatchApplyService.cs [249-288]

-private static async Task<string?> ResolveLatestBackupPathAsync(string targetPath, CancellationToken cancellationToken)
+private async Task<string?> ResolveLatestBackupPathAsync(string targetPath, CancellationToken cancellationToken)
 {
     var directory = Path.GetDirectoryName(targetPath);
     var fileName = Path.GetFileName(targetPath);
     if (string.IsNullOrWhiteSpace(directory) || string.IsNullOrWhiteSpace(fileName))
     {
         return null;
     }
 
     var receiptPattern = $"{fileName}.apply-receipt.*.json";
     var latestReceiptPath = Directory.EnumerateFiles(directory, receiptPattern)
         .Select(path => new FileInfo(path))
         .OrderByDescending(info => info.LastWriteTimeUtc)
         .Select(info => info.FullName)
         .FirstOrDefault();
 
     if (!string.IsNullOrWhiteSpace(latestReceiptPath))
     {
         try
         {
             var json = await File.ReadAllTextAsync(latestReceiptPath, cancellationToken);
             var receipt = JsonSerializer.Deserialize<SavePatchApplyReceipt>(json, ReceiptJsonOptions);
-            if (receipt is not null && !string.IsNullOrWhiteSpace(receipt.BackupPath))
+            if (receipt is not null && !string.IsNullOrWhiteSpace(receipt.BackupPath) && File.Exists(receipt.BackupPath))
             {
                 return receipt.BackupPath;
             }
         }
-        catch
+        catch (Exception ex)
         {
-            // Fall back to backup file scan.
+            _logger.LogWarning(ex, "Failed to read backup path from receipt file {ReceiptPath}. Falling back to file scan.", latestReceiptPath);
         }
     }
 
     var backupPattern = $"{fileName}.bak.*.sav";
     return Directory.EnumerateFiles(directory, backupPattern)
         .Select(path => new FileInfo(path))
         .OrderByDescending(info => info.LastWriteTimeUtc)
         .Select(info => info.FullName)
         .FirstOrDefault();
 }
  • Apply / Chat
Suggestion importance[1-10]: 6

__

Why: The suggestion correctly identifies that swallowing exceptions is poor practice and improves the method's robustness by adding logging and a file existence check.

Low
Improve dependency path resolution robustness

Replace hardcoded dependency paths in Resolve-LoggingAbstractionsPath with
dynamic discovery using commands like nuget locals to make the script more
portable.

tools/apply-save-patch-pack.ps1 [17-46]

 function Resolve-LoggingAbstractionsPath {
     param([string]$RepoRoot)
 
     $candidates = New-Object System.Collections.Generic.List[string]
     $candidates.Add((Join-Path $RepoRoot "src/SwfocTrainer.Saves/bin/Release/net8.0/Microsoft.Extensions.Logging.Abstractions.dll"))
     $candidates.Add((Join-Path $RepoRoot "src/SwfocTrainer.Core/bin/Release/net8.0/Microsoft.Extensions.Logging.Abstractions.dll"))
 
-    $sharedRoot = "C:\\Program Files\\dotnet\\shared\\Microsoft.AspNetCore.App"
-    if (Test-Path $sharedRoot) {
-        $sharedDll = Get-ChildItem -Path $sharedRoot -Directory |
-            Sort-Object Name -Descending |
-            ForEach-Object { Join-Path $_.FullName "Microsoft.Extensions.Logging.Abstractions.dll" } |
-            Where-Object { Test-Path $_ } |
-            Select-Object -First 1
-        if ($sharedDll) {
-            $candidates.Add($sharedDll)
+    try {
+        $nugetPath = (nuget locals global-packages -l).Split(':', 2)[1].Trim()
+        if (-not [string]::IsNullOrWhiteSpace($nugetPath)) {
+            $nugetPackage = Get-ChildItem -Path (Join-Path $nugetPath "microsoft.extensions.logging.abstractions") -Directory |
+                Sort-Object Name -Descending |
+                Select-Object -First 1
+            if ($nugetPackage) {
+                $candidates.Add((Join-Path $nugetPackage.FullName "lib/net8.0/Microsoft.Extensions.Logging.Abstractions.dll"))
+            }
         }
     }
-
-    $nugetDll = Join-Path $env:USERPROFILE ".nuget\\packages\\microsoft.extensions.logging.abstractions\\8.0.3\\lib\\net8.0\\Microsoft.Extensions.Logging.Abstractions.dll"
-    $candidates.Add($nugetDll)
+    catch {
+        Write-Warning "Could not resolve NuGet cache path. Continuing without it."
+    }
 
     foreach ($candidate in $candidates) {
         if (-not [string]::IsNullOrWhiteSpace($candidate) -and (Test-Path $candidate)) {
             return $candidate
         }
     }
 
     throw "Could not locate Microsoft.Extensions.Logging.Abstractions.dll."
 }
  • Apply / Chat
Suggestion importance[1-10]: 6

__

Why: The suggestion correctly identifies that hardcoded paths make the script brittle and proposes a more robust, dynamic approach to locate dependencies, improving portability.

Low
General
Avoid unnecessary memory allocations

Optimize ReadSingle and ReadDouble methods to avoid memory allocation by using
stackalloc for temporary buffers instead of ToArray().

src/SwfocTrainer.Saves/Internal/SavePatchFieldCodec.cs [85-105]

 private static float ReadSingle(ReadOnlySpan<byte> bytes, bool littleEndian)
 {
-    var buffer = bytes.ToArray();
-    if (BitConverter.IsLittleEndian != littleEndian)
+    if (BitConverter.IsLittleEndian == littleEndian)
     {
-        Array.Reverse(buffer);
+        return BitConverter.ToSingle(bytes);
     }
 
-    return BitConverter.ToSingle(buffer, 0);
+    Span<byte> buffer = stackalloc byte[bytes.Length];
+    bytes.CopyTo(buffer);
+    buffer.Reverse();
+    return BitConverter.ToSingle(buffer);
 }
 
 private static double ReadDouble(ReadOnlySpan<byte> bytes, bool littleEndian)
 {
-    var buffer = bytes.ToArray();
-    if (BitConverter.IsLittleEndian != littleEndian)
+    if (BitConverter.IsLittleEndian == littleEndian)
     {
-        Array.Reverse(buffer);
+        return BitConverter.ToDouble(bytes);
     }
 
-    return BitConverter.ToDouble(buffer, 0);
+    Span<byte> buffer = stackalloc byte[bytes.Length];
+    bytes.CopyTo(buffer);
+    buffer.Reverse();
+    return BitConverter.ToDouble(buffer);
 }
  • Apply / Chat
Suggestion importance[1-10]: 5

__

Why: The suggestion offers a valid performance optimization by using stackalloc to avoid heap allocations, which is a good practice for performance-sensitive code.

Low
Use regex for robust PR validation

Use case-insensitive regular expressions instead of simple string matching for
PR body validation to make the check more robust against formatting variations.

.github/workflows/policy-contract.yml [31-67]

 - name: Enforce runtime/tooling/test evidence policy
   uses: actions/github-script@v7
   with:
     script: |
       const pr = context.payload.pull_request;
       const owner = context.repo.owner;
       const repo = context.repo.repo;
       const number = pr.number;
 
       const files = await github.paginate(github.rest.pulls.listFiles, {
         owner,
         repo,
         pull_number: number,
         per_page: 100
       });
 
       const touchesSensitiveSurface = files.some(f =>
         f.filename.startsWith('src/SwfocTrainer.Runtime/') ||
         f.filename.startsWith('tools/') ||
         f.filename.startsWith('tests/')
       );
 
       if (!touchesSensitiveSurface) {
         core.info('Sensitive surfaces not touched; evidence policy not enforced for this PR.');
         return;
       }
 
-      const body = (pr.body || '').toLowerCase();
-      const hasBundleField = body.includes('repro bundle json:');
-      const hasClassification = body.includes('classification:');
-      const hasSkipJustification = body.includes('justified skip');
+      const body = pr.body || '';
+      const hasBundleField = /repro bundle json:/im.test(body);
+      const hasClassification = /classification:/im.test(body);
+      const hasSkipJustification = /justified skip/im.test(body);
 
       if (!hasBundleField || (!hasClassification && !hasSkipJustification)) {
         core.setFailed(
           'PR touches runtime/tooling/tests but is missing reliability evidence. Required: Repro bundle JSON field and either classification or justified skip rationale.'
         );
       }
  • Apply / Chat
Suggestion importance[1-10]: 5

__

Why: The suggestion correctly identifies that simple string matching is brittle for PR body validation and proposes using regular expressions for a more robust CI check.

Low
Support double suffix parsing

Add support for parsing double values with an explicit 'd' suffix (e.g.,
"1.23d") in the ParsePrimitive method.

src/SwfocTrainer.App/ViewModels/MainViewModel.cs [1472-1482]

 var trimmed = input.Trim();
 if (trimmed.EndsWith("f", StringComparison.OrdinalIgnoreCase) &&
     float.TryParse(trimmed[..^1], NumberStyles.Float | NumberStyles.AllowThousands, CultureInfo.InvariantCulture, out var floatValue))
 {
     return floatValue;
 }
 
+if (trimmed.EndsWith("d", StringComparison.OrdinalIgnoreCase) &&
+    double.TryParse(trimmed[..^1], NumberStyles.Float | NumberStyles.AllowThousands, CultureInfo.InvariantCulture, out var doubleWithSuffix))
+{
+    return doubleWithSuffix;
+}
+
 if (double.TryParse(trimmed, NumberStyles.Float | NumberStyles.AllowThousands, CultureInfo.InvariantCulture, out var doubleValue))
 {
     return doubleValue;
 }
  • Apply / Chat
Suggestion importance[1-10]: 3

__

Why: The suggestion adds a minor feature enhancement by supporting an explicit 'd' suffix for double values, improving consistency with the existing 'f' suffix for floats.

Low
  • Update

Copy link

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 356aa7c8d0

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines +302 to +303
await _saveCodec.EditAsync(targetDoc, operation.FieldPath, value, cancellationToken);
return;

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P1 Badge Prefer fieldId before fieldPath when applying operations

The apply path currently executes EditAsync with operation.FieldPath first and returns on success, so a pack whose fieldPath is stale-but-still-valid (points to a different existing field) will silently mutate the wrong field and never reach the fieldId fallback. This is especially risky because preview resolves by fieldId first and only warns on path mismatch, so previewed behavior can differ from the actual write behavior on the same pack.

Useful? React with 👍 / 👎.

.Select(x => x with
{
OldValue = SavePatchFieldCodec.NormalizePatchValue(x.OldValue, x.ValueType),
NewValue = SavePatchFieldCodec.NormalizePatchValue(x.NewValue, x.ValueType)

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2 Badge Validate null newValue before value normalization

LoadPackAsync normalizes NewValue before running ValidatePackContract, and for numeric/bool value types NormalizePatchValue uses Convert.*, which coerces null into default values (for example 0/false). That means an input operation like "newValue": null can bypass the later newValue is required check and be accepted as a real write, causing unintended default-value patches instead of a contract error.

Useful? React with 👍 / 👎.

Copy link

@coderabbitai coderabbitai bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Actionable comments posted: 12

Note

Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
tools/run-live-validation.ps1 (1)

116-129: ⚠️ Potential issue | 🟡 Minor

$args is a PowerShell automatic variable — rename to avoid shadowing it

Assigning to $args is flagged by PSAvoidAssignmentToAutomaticVariable. While local shadowing works in practice, it is confusing and could break if the function ever forwards positional arguments. Rename to $dotnetArgs (or similar).

🔧 Proposed fix
-    $args = @(
+    $dotnetArgs = @(
         "test",
         "tests/SwfocTrainer.Tests/SwfocTrainer.Tests.csproj",
         "-c", $Configuration,
         "--filter", $Filter,
         "--logger", "trx;LogFileName=$TrxName",
         "--results-directory", $runResultsDirectory
     )
 
     if ($NoBuild) {
-        $args += "--no-build"
+        $dotnetArgs += "--no-build"
     }
 
-    & $dotnetExe `@args`
+    & $dotnetExe `@dotnetArgs`
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/run-live-validation.ps1` around lines 116 - 129, The script assigns to
the PowerShell automatic variable $args which can shadow caller arguments;
rename that variable (e.g. to $dotnetArgs) wherever it is declared and
referenced in this block (the array literal currently assigned to $args, the
conditional that appends "--no-build" when $NoBuild is true, and the invocation
& $dotnetExe `@args`) and update the final invocation to use the new variable
(e.g. & $dotnetExe `@dotnetArgs`) so no automatic variable is overwritten.
🟡 Minor comments (19)
AGENTS.md-13-16 (1)

13-16: ⚠️ Potential issue | 🟡 Minor

Sub-bullets under item 2 are not indented — will render as a top-level list, not children of the numbered item.

In CommonMark/GitHub Flavored Markdown, continuation content under a numbered list item must be indented by the list's content column (typically 3 spaces for single-digit markers). Without indentation these - bullets break out of the ordered list and render as a separate unordered list.

📝 Proposed fix
 2. Mod/runtime bugfixes must include a reproducible bundle:
-- `TestResults/runs/<runId>/repro-bundle.json`
-- `TestResults/runs/<runId>/repro-bundle.md`
+   - `TestResults/runs/<runId>/repro-bundle.json`
+   - `TestResults/runs/<runId>/repro-bundle.md`
 3. PRs must include affected profile IDs...
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@AGENTS.md` around lines 13 - 16, The two dash-prefixed lines meant to be
sub-bullets of item "2. Mod/runtime bugfixes must include a reproducible
bundle:" (the lines starting with `- TestResults/runs/<runId>/repro-bundle.json`
and `- TestResults/runs/<runId>/repro-bundle.md`) are not indented and thus
render as a top-level list; fix by indenting those two `-` lines to the list
content column (e.g., prefix with three spaces or align under the item text) so
they are nested under item 2 and verify item "3. PRs must include..." remains
the next top-level numbered item.
launch-app-debug.cmd-9-9 (1)

9-9: ⚠️ Potential issue | 🟡 Minor

Hardcoded net8.0-windows TFM will silently break if the project target framework changes.

If the project is ever retargeted (e.g., net9.0-windows), the exe won't be found at this path and the script falls through to a dotnet build, which may also fail or produce the binary at a different path.

♻️ Suggested improvement — derive the path from the build output

One option is to let the build step always run and derive the exe path dynamically, or at minimum document the TFM assumption clearly with a comment:

+rem NOTE: Update net8.0-windows below if the TFM in SwfocTrainer.App.csproj changes.
 set "APP_EXE=src\SwfocTrainer.App\bin\Debug\net8.0-windows\SwfocTrainer.App.exe"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@launch-app-debug.cmd` at line 9, The script currently hardcodes APP_EXE to a
specific TFM ("net8.0-windows"); change launch-app-debug.cmd to always run the
build and then discover the actual output TFM instead of hardcoding it: run
dotnet build for the SwfocTrainer.App project (Debug), enumerate subfolders
under src\SwfocTrainer.App\bin\Debug to find the net*-windows folder, and set
APP_EXE to the discovered path (bin\Debug\<found-tfm>\SwfocTrainer.App.exe);
include a fallback error if no matching exe is found and a brief comment
documenting the intent.
.github/workflows/duplication-check.yml-29-35 (1)

29-35: ⚠️ Potential issue | 🟡 Minor

Correct the artifact path or configure jscpd output directory — currently they don't match.

The repository has no .jscpd.json file, so jscpd v4 will use its default output directory ./report. However, the workflow tries to upload from jscpd-report (lines 29-35), which will result in an empty artifact. Either:

  • Create .jscpd.json with "output": "jscpd-report" to match the workflow, or
  • Change the workflow path to report to match jscpd's default.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/duplication-check.yml around lines 29 - 35, The workflow
uploads an artifact from path "jscpd-report" but jscpd v4 defaults to output
"./report" and this repo lacks a .jscpd.json, so the artifact will be empty; fix
by either adding a .jscpd.json that sets "output": "jscpd-report" so jscpd
writes to the workflow's expected directory, or update the GitHub Actions upload
step to use path: report (and keep name: jscpd-report) so it uploads jscpd's
default output directory; adjust whichever of these matches your intended
behavior.
src/SwfocTrainer.Runtime/AGENTS.md-9-18 (1)

9-18: ⚠️ Potential issue | 🟡 Minor

Add blank lines around numbered lists for standard Markdown compliance.

Both list blocks (lines 10–12 and 15–18) are missing a blank line before and after them, which Codacy and several Markdown renderers flag as malformed.

📝 Proposed fix
 ## Required Evidence
+
 1. Include tests for attach/mode/resolution behavior changes.
 2. Include launch-context reason code impact in PR notes.
 3. Include repro bundle evidence for live-only fixes.
+
 ## Runtime Safety Rules
+
 1. Signature-first; never rely on blind fixed addresses.
 2. If runtime confidence is uncertain, block mutating actions.
 3. Record actionable reason codes in diagnostics.
 4. When both `swfoc.exe` and `StarWarsG.exe` exist for FoC, attach host selection must be deterministic.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/SwfocTrainer.Runtime/AGENTS.md` around lines 9 - 18, The numbered lists
under the "Required Evidence" and "Runtime Safety Rules" headings in AGENTS.md
lack blank lines before and after the list blocks; edit the file to add a single
blank line immediately above the "1." of each list and a single blank line
immediately after the final numbered item so both lists are separated from
surrounding paragraphs/headings (i.e., update the blocks under the "Required
Evidence" and "Runtime Safety Rules" sections to have blank lines before the
first numbered item and after the last numbered item).
docs/SAVE_LAB_RUNBOOK.md-46-65 (1)

46-65: ⚠️ Potential issue | 🟡 Minor

Backslash line continuations are invalid in PowerShell.

PowerShell uses the backtick (`) for line continuation, not \. Users copying these multi-line commands will get parsing errors or mangled arguments.

📝 Proposed fix
 pwsh ./tools/export-save-patch-pack.ps1 `
-  -OriginalSavePath <original.sav> \
-  -EditedSavePath <edited.sav> \
-  -ProfileId base_swfoc \
-  -SchemaId base_swfoc_steam_v1 \
-  -OutputPath TestResults/patches/example.patch.json \
+  -OriginalSavePath <original.sav> `
+  -EditedSavePath <edited.sav> `
+  -ProfileId base_swfoc `
+  -SchemaId base_swfoc_steam_v1 `
+  -OutputPath TestResults/patches/example.patch.json `
   -BuildIfNeeded
 pwsh ./tools/apply-save-patch-pack.ps1 `
-  -TargetSavePath <target.sav> \
-  -PatchPackPath <patch.json> \
-  -TargetProfileId base_swfoc \
-  -Strict $true \
+  -TargetSavePath <target.sav> `
+  -PatchPackPath <patch.json> `
+  -TargetProfileId base_swfoc `
+  -Strict $true `
   -BuildIfNeeded
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@docs/SAVE_LAB_RUNBOOK.md` around lines 46 - 65, The PowerShell examples use
backslash line continuations which are invalid; update the multi-line
invocations of ./tools/export-save-patch-pack.ps1 and
./tools/apply-save-patch-pack.ps1 to use PowerShell backtick (`) continuations
(or present them as single-line commands) so parameters like -OriginalSavePath,
-EditedSavePath, -ProfileId, -SchemaId, -OutputPath, -BuildIfNeeded,
-TargetSavePath, -PatchPackPath, -TargetProfileId and -Strict are passed
correctly; edit the two shown code blocks to replace each trailing backslash
with a backtick (or collapse into one-line examples) so the commands parse in
PowerShell.
.github/workflows/policy-contract.yml-58-66 (1)

58-66: ⚠️ Potential issue | 🟡 Minor

The justified skip bypass still requires repro bundle json: — verify this is intentional.

The condition at line 63 (!hasBundleField || (!hasClassification && !hasSkipJustification)) means a PR with "justified skip" in the body but no "repro bundle json:" field will still fail. If the intent of "justified skip" is to waive the evidence requirement entirely, the condition should be:

-            if (!hasBundleField || (!hasClassification && !hasSkipJustification)) {
+            if (!hasSkipJustification && (!hasBundleField || !hasClassification)) {

Also, the includes('repro bundle json:') check is case-folded but whitespace-sensitive — a contributor writing repro bundle json : or embedding it inside a markdown table cell will silently fail the gate.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/workflows/policy-contract.yml around lines 58 - 66, The current
logic requires hasBundleField even when hasSkipJustification is true and the
bundle string match is whitespace-sensitive; update hasBundleField to use a
case-insensitive regex that tolerates extra spaces (e.g.
/repro\s*bundle\s*json\s*:/i.test(body)) and change the validation condition to
require at least one of the three tokens (bundle, classification, or justified
skip) by replacing the iff with: if (!hasBundleField && !hasClassification &&
!hasSkipJustification) then call core.setFailed; this makes "justified skip"
waive the bundle requirement and makes bundle detection robust to
spacing/casing.
tools/validate-save-patch-pack.ps1-61-62 (1)

61-62: ⚠️ Potential issue | 🟡 Minor

$null should be on the left-hand side of equality comparisons

Using $pack.metadata -eq $null (and $pack.compatibility -eq $null) is flagged by PSScriptAnalyzer's PSPossibleIncorrectComparisonWithNull rule. When the operand is a collection, the right-hand $null form returns matching elements instead of a boolean.

🛡️ Proposed fix
-if ($pack.metadata -eq $null) {
+if ($null -eq $pack.metadata) {
-if ($pack.compatibility -eq $null) {
+if ($null -eq $pack.compatibility) {

Also applies to: 78-80

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/validate-save-patch-pack.ps1` around lines 61 - 62, The null-comparison
uses the variable on the left (e.g., $pack.metadata -eq $null and
$pack.compatibility -eq $null) which can misbehave for collections; change
comparisons to place $null on the left (use $null -eq $pack.metadata and $null
-eq $pack.compatibility) so PSScriptAnalyzer
PSPossibleIncorrectComparisonWithNull is satisfied and the checks return proper
booleans; update all occurrences around the checks that currently reference
$pack.metadata and the block at the $pack.compatibility check (lines referenced
in the review) to the left-$null form.
tools/apply-save-patch-pack.ps1-36-37 (1)

36-37: ⚠️ Potential issue | 🟡 Minor

Hardcoded NuGet package version 8.0.3 in fallback path

If Microsoft.Extensions.Logging.Abstractions is upgraded in the project, this fallback silently misses the new version and falls through to the throw. Consider resolving the version dynamically from the .csproj/packages.lock.json, or at minimum add a comment noting the version must be kept in sync.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/apply-save-patch-pack.ps1` around lines 36 - 37, The script currently
hardcodes the NuGet package version when building $nugetDll which will break
when Microsoft.Extensions.Logging.Abstractions is upgraded; change the logic
that constructs $nugetDll (and the $candidates.Add call) to resolve the package
version dynamically (e.g., read packages.lock.json or csproj to find the
installed version, or enumerate the
"%USERPROFILE%\.nuget\packages\microsoft.extensions.logging.abstractions\*"
directories with Get-ChildItem and pick the highest-semver folder) and then join
that folder with the relative path to
Microsoft.Extensions.Logging.Abstractions.dll; if you cannot implement dynamic
resolution now, at minimum replace the hardcoded "8.0.3" with a clearly marked
TODO comment explaining the requirement to keep it in sync.
tools/apply-save-patch-pack.ps1-107-107 (1)

107-107: ⚠️ Potential issue | 🟡 Minor

Write-Host prevents capturing the JSON result from the pipeline

Write-Host writes to the Information stream (stream 6), not the Success stream. Any caller attempting $result = pwsh ./tools/apply-save-patch-pack.ps1 ... will get an empty $result. Use Write-Output so the JSON is capturable.

🛡️ Proposed fix
-Write-Host $serialized
+Write-Output $serialized
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/apply-save-patch-pack.ps1` at line 107, The script uses Write-Host
$serialized which writes to the Information stream and prevents callers from
capturing the JSON output; change the call in the script (the line that invokes
Write-Host with the $serialized variable) to use Write-Output (or return the
string) so the JSON is written to the Success/standard output stream and can be
assigned by callers; also ensure no other Write-Host/Write-Information calls
emit extraneous text before this JSON output.
tools/validate-save-patch-pack.ps1-113-117 (1)

113-117: ⚠️ Potential issue | 🟡 Minor

Unguarded [int] cast on offset breaks the accumulated-error model

If offset is a non-numeric string (e.g. "abc"), the cast [int]$operation.offset throws a terminating exception under $ErrorActionPreference = "Stop" before the validation error can be added and aggregated. The surrounding validation logic accumulates errors rather than exiting early, so a type-safe check is needed here.

🛡️ Proposed fix
-    if ([int]$operation.offset -lt 0) {
-        Add-Error "operations[$index].offset must be >= 0"
-    }
+    $offsetVal = 0L
+    if (-not [long]::TryParse([string]$operation.offset, [ref]$offsetVal)) {
+        Add-Error "operations[$index].offset must be an integer"
+    } elseif ($offsetVal -lt 0) {
+        Add-Error "operations[$index].offset must be >= 0"
+    }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/validate-save-patch-pack.ps1` around lines 113 - 117, The direct cast
[int]$operation.offset can throw and abort validation; instead validate the type
safely using a numeric parse before comparing. Use [int]::TryParse to attempt
parsing $operation.offset into a temp variable and if parsing fails call
Add-Error with a clear message (e.g. operations[$index].offset must be an
integer >= 0); if it parses, then check the parsed value >= 0 and call Add-Error
as currently done. Update the code around $operation.offset, the parsing temp
variable, and the Add-Error call so errors are accumulated rather than throwing.
tools/apply-save-patch-pack.ps1-48-50 (1)

48-50: ⚠️ Potential issue | 🟡 Minor

Resolve-Path "." resolves to CWD, not the script root — breaks when called from any other directory

All DLL paths ($coreDll, $savesDll, $loggingDll) are computed relative to $repoRoot. If the script is invoked from outside the repo root (e.g., from a CI working directory that differs, or via absolute path), every path resolves incorrectly and the script fails with a misleading "assemblies not found" error.

🛡️ Proposed fix
-$repoRoot = Resolve-Path "."
+$repoRoot = Resolve-Path (Join-Path $PSScriptRoot "..")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/apply-save-patch-pack.ps1` around lines 48 - 50, The script uses
Resolve-Path "." to set $repoRoot which points to the current working directory
(CWD) instead of the script's directory, causing $coreDll, $savesDll (and
$loggingDll) to be built from the wrong base when invoked from elsewhere; change
the root calculation to derive the repository root from the script location (use
$PSScriptRoot or $MyInvocation.MyCommand.Path and Split-Path as appropriate) and
then recompute $coreDll, $savesDll, and $loggingDll with Join-Path against that
script-derived repo root so the DLL paths are always correct regardless of CWD.
tools/collect-mod-repro-bundle.ps1-1-11 (1)

1-11: ⚠️ Potential issue | 🟡 Minor

Missing PowerShell 7 guard unlike sibling scripts.

export-save-patch-pack.ps1 and other scripts in this PR check $PSVersionTable.PSEdition -ne "Core" and throw. This script uses PS7-specific syntax (e.g., ternary-like inline expressions on Line 374) but lacks the same guard, risking confusing failures on Windows PowerShell 5.1.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/collect-mod-repro-bundle.ps1` around lines 1 - 11, This script lacks
the PowerShell 7 runtime guard present in sibling scripts; add a check near the
top (after the param block/Set-StrictMode) that inspects
$PSVersionTable.PSEdition and throws a clear error when it is not "Core" so the
script fails fast on Windows PowerShell 5.1; ensure the message references this
script (collect-mod-repro-bundle.ps1) and why (it uses PS7-only syntax such as
the ternary-like inline expression referenced later) so callers get a
deterministic, informative failure.
src/SwfocTrainer.Saves/Services/SavePatchPackService.cs-187-191 (1)

187-191: ⚠️ Potential issue | 🟡 Minor

ToDictionary will throw on duplicate Path or Id values in FieldDefs.

If a schema accidentally defines two fields with the same Path (or Id), ToDictionary throws an unhandled ArgumentException. While this would indicate a schema defect, a more descriptive error (or using a grouped lookup) would improve debuggability.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/SwfocTrainer.Saves/Services/SavePatchPackService.cs` around lines 187 -
191, The current use of ToDictionary on schema.FieldDefs (for fieldByPath and
fieldById) will throw a generic ArgumentException on duplicate Path/Id; replace
this with code that first groups by Path and by Id (e.g., GroupBy(x => x.Path,
StringComparer.OrdinalIgnoreCase) and GroupBy(x => x.Id,
StringComparer.OrdinalIgnoreCase)), detect any groups with Count() > 1 and throw
a clearer exception (or return a ToLookup if duplicates should be tolerated)
that includes the duplicate key name and the involved field Ids/Paths; update
the initialization of fieldByPath and fieldById to use the grouped result (or
ToLookup) or to build a dictionary after validating no duplicates to provide a
descriptive error when duplicates are found.
tools/export-save-patch-pack.ps1-49-49 (1)

49-49: ⚠️ Potential issue | 🟡 Minor

Repo root resolution depends on current working directory.

Resolve-Path "." assumes the script is invoked from the repository root. If invoked from another directory, all derived paths ($coreDll, $savesDll, schema paths) will be wrong. Consider using $PSScriptRoot to resolve the repo root, similar to what collect-mod-repro-bundle.ps1 does on its Line 13.

Proposed fix
-$repoRoot = Resolve-Path "."
+$repoRoot = Resolve-Path (Join-Path $PSScriptRoot "..")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/export-save-patch-pack.ps1` at line 49, The repo root is being resolved
from the current working directory via $repoRoot = Resolve-Path ".", which
breaks if the script is invoked from elsewhere; change the assignment to derive
the repo root from the script location instead (use $PSScriptRoot), e.g. set
$repoRoot = Resolve-Path $PSScriptRoot (or Resolve-Path (Join-Path $PSScriptRoot
'..') if this script lives in a subfolder) so all downstream variables like
$coreDll and $savesDll compute correctly; mirror the approach used with
$PSScriptRoot in the other script and update any path joins that assume
$repoRoot accordingly.
src/SwfocTrainer.Saves/Services/SavePatchApplyService.cs-167-170 (1)

167-170: ⚠️ Potential issue | 🟡 Minor

File.Delete in catch block could throw and mask the original exception.

If the temp file exists but cannot be deleted (e.g., locked by antivirus), the resulting exception will mask the original write failure. Wrap in a try/catch to ensure the original error is preserved.

Proposed fix
-            if (File.Exists(tempOutputPath))
-            {
-                File.Delete(tempOutputPath);
-            }
+            try
+            {
+                if (File.Exists(tempOutputPath))
+                {
+                    File.Delete(tempOutputPath);
+                }
+            }
+            catch (Exception cleanupEx)
+            {
+                _logger.LogWarning(cleanupEx, "Failed to clean up temp file {TempPath}", tempOutputPath);
+            }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/SwfocTrainer.Saves/Services/SavePatchApplyService.cs` around lines 167 -
170, The catch block currently calls File.Exists(tempOutputPath) and
File.Delete(tempOutputPath) directly, which can throw and mask the original
exception; update the cleanup to wrap the delete in its own try/catch so any
IOException/UnauthorizedAccessException from File.Delete is swallowed or logged
but not rethrown, thereby preserving the original exception. Locate the cleanup
code in SavePatchApplyService (the block referencing tempOutputPath, File.Exists
and File.Delete) and change it to check File.Exists then attempt File.Delete
inside a nested try/catch that catches deletion-specific exceptions and does not
throw, optionally logging the deletion failure.
tools/collect-mod-repro-bundle.ps1-42-42 (1)

42-42: ⚠️ Potential issue | 🟡 Minor

$matches shadows PowerShell's automatic variable.

$matches is a built-in automatic variable populated by the -match operator. Although this code uses [regex]::Matches() and the shadowing is scoped to the function, it can confuse readers and tooling. Rename to $regexMatches or $steamMatches.

Proposed fix
-    $matches = [regex]::Matches($CommandLine, "STEAMMOD\s*=\s*([0-9]{4,})", [System.Text.RegularExpressions.RegexOptions]::IgnoreCase)
+    $regexMatches = [regex]::Matches($CommandLine, "STEAMMOD\s*=\s*([0-9]{4,})", [System.Text.RegularExpressions.RegexOptions]::IgnoreCase)
     $ids = New-Object System.Collections.Generic.HashSet[string]([StringComparer]::OrdinalIgnoreCase)
-    foreach ($match in $matches) {
+    foreach ($match in $regexMatches) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/collect-mod-repro-bundle.ps1` at line 42, The code uses the automatic
PowerShell variable $matches for the result of [regex]::Matches($CommandLine,
"STEAMMOD\s*=\s*([0-9]{4,})", ...), which shadows the built-in and can confuse
readers and tools; rename the variable (for example to $regexMatches or
$steamMatches) wherever it's declared and used (the occurrence calling
[regex]::Matches in the same function) so the automatic $matches is not
shadowed, updating any subsequent references to the new name.
tools/collect-mod-repro-bundle.ps1-145-156 (1)

145-156: ⚠️ Potential issue | 🟡 Minor

$args shadows PowerShell's automatic variable — can cause subtle bugs.

$args is a built-in automatic variable that holds unbound arguments passed to a function. Overwriting it here works but is fragile and flagged by PSScriptAnalyzer. Rename to $pythonArgs or $launchArgs.

Proposed fix
-    $args = @()
+    $pythonArgs = @()
     if ($pythonCmd.Count -gt 1) {
-        $args += $pythonCmd[1..($pythonCmd.Count - 1)]
+        $pythonArgs += $pythonCmd[1..($pythonCmd.Count - 1)]
     }
-    $args += @(
+    $pythonArgs += @(
         "tools/detect-launch-context.py",
         "--command-line", $Process.commandLine,
         "--process-name", $Process.name,
         "--process-path", $Process.path,
         "--profile-root", $ProfileRootPath,
         "--pretty"
     )
 
     try {
-        $output = & $pythonCmd[0] `@args` 2>&1
+        $output = & $pythonCmd[0] `@pythonArgs` 2>&1
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/collect-mod-repro-bundle.ps1` around lines 145 - 156, The script is
overwriting PowerShell's automatic $args variable; rename the local variable to
something like $pythonArgs to avoid shadowing: replace the $args declaration and
all subsequent uses in this block (where $pythonCmd, $Process, $ProfileRootPath
are referenced and where the argument array is built for
"tools/detect-launch-context.py") with $pythonArgs (or $launchArgs), preserving
the same contents and concatenation logic; ensure any later invocation that
relied on $args is updated to use the new variable name.
tools/run-live-validation.ps1-5-5 (1)

5-5: ⚠️ Potential issue | 🟡 Minor

[switch]$NoBuild = $true should be typed as [bool] for consistent caller ergonomics

A [switch] defaulting to $true means callers can only opt out via -NoBuild:$false, which is non-obvious and violates the expected switch semantics. $EmitReproBundle on line 8 already uses [bool] for this pattern; $NoBuild should match.

🔧 Proposed fix
-    [switch]$NoBuild = $true,
+    [bool]$NoBuild = $true,
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/run-live-validation.ps1` at line 5, Change the parameter type for
NoBuild from a PowerShell switch to a boolean so callers can pass
-NoBuild:$false naturally; replace the declaration "[switch]$NoBuild = $true"
with "[bool]$NoBuild = $true" and update any callsites or conditional checks
that expect a switch (look for references to $NoBuild and compare to how
$EmitReproBundle is declared/used) to treat it as a bool.
src/SwfocTrainer.App/ViewModels/MainViewModel.cs-1479-1482 (1)

1479-1482: ⚠️ Potential issue | 🟡 Minor

NumberStyles.AllowThousands in double.TryParse causes unexpected type coercion for comma-formatted input

With NumberStyles.Float | NumberStyles.AllowThousands, the input "1,000" (locale-formatted integer) skips the int/long branches (which use NumberStyles.Integer, disallowing the comma separator) and lands in the double branch as 1000.0. If the target save field expects an int32, the codec will receive a double instead of failing early with a descriptive error.

Remove NumberStyles.AllowThousands unless grouped thousands notation is a deliberate requirement for save-field values:

🔧 Proposed fix
-        if (double.TryParse(trimmed, NumberStyles.Float | NumberStyles.AllowThousands, CultureInfo.InvariantCulture, out var doubleValue))
+        if (double.TryParse(trimmed, NumberStyles.Float, CultureInfo.InvariantCulture, out var doubleValue))
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/SwfocTrainer.App/ViewModels/MainViewModel.cs` around lines 1479 - 1482,
The double parsing branch uses NumberStyles.Float | NumberStyles.AllowThousands
which lets comma-grouped integers like "1,000" bypass the int/long parsing and
be coerced to a double; update the double.TryParse call (the trimmed variable
and the double.TryParse(...) branch) to remove NumberStyles.AllowThousands (use
NumberStyles.Float only) so comma-formatted integers don't get parsed as double
and instead fall back to int/long error handling (or explicitly validate
thousands grouping where intended).

id: repro_bundle_json
attributes:
label: Repro bundle JSON path/link
description: `TestResults/runs/<runId>/repro-bundle.json` or uploaded artifact URL
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

YAML syntax error: unquoted backtick in description will break template rendering.

YAMLlint rejects a backtick as the first character of an unquoted scalar. This can prevent the issue template from loading on GitHub.

🔧 Proposed fix
-      description: `TestResults/runs/<runId>/repro-bundle.json` or uploaded artifact URL
+      description: "`TestResults/runs/<runId>/repro-bundle.json` or uploaded artifact URL"
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
description: `TestResults/runs/<runId>/repro-bundle.json` or uploaded artifact URL
description: "`TestResults/runs/<runId>/repro-bundle.json` or uploaded artifact URL"
🧰 Tools
🪛 YAMLlint (1.38.0)

[error] 29-29: syntax error: found character '`' that cannot start any token

(syntax)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In @.github/ISSUE_TEMPLATE/bug.yml at line 29, The YAML description scalar
contains an unquoted leading backtick which breaks parsing; update the
description field (the `description` key) to use a quoted string (single or
double quotes) and remove the surrounding backticks so the value becomes a
proper quoted scalar (e.g., replace the backtick-wrapped
`TestResults/runs/<runId>/repro-bundle.json` with a quoted string like
"TestResults/runs/<runId>/repro-bundle.json").

Comment on lines +1 to +32
@echo off
setlocal

set "REPO_ROOT=%~dp0"
if "%REPO_ROOT:~-1%"=="\" set "REPO_ROOT=%REPO_ROOT:~0,-1%"
set "DOTNET_NOLOGO=1"
set "SOLUTION_PATH=%REPO_ROOT%\SwfocTrainer.sln"

set "APP_EXE=src\SwfocTrainer.App\bin\Debug\net8.0-windows\SwfocTrainer.App.exe"

cd /d "%REPO_ROOT%"
if exist "%APP_EXE%" goto :start_app

echo [launch-app-debug] Debug EXE not found. Building Debug...
dotnet build "%SOLUTION_PATH%" -c Debug --nologo
set "BUILD_EXIT=%ERRORLEVEL%"
if not "%BUILD_EXIT%"=="0" goto :build_failed
if not exist "%APP_EXE%" goto :missing_exe

:start_app
echo [launch-app-debug] Starting %APP_EXE%
start "" "%APP_EXE%"
exit /b 0

:build_failed
echo [launch-app-debug] Build failed with exit code %BUILD_EXIT%.
echo [launch-app-debug] Ensure the .NET SDK required by global.json is installed.
exit /b %BUILD_EXIT%

:missing_exe
echo [launch-app-debug] Could not locate: %APP_EXE%
exit /b 1
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Unix line endings (LF-only) — highest risk of the three batch scripts, with three GOTO labels in use.

32 LF sequences detected; :start_app, :build_failed, and :missing_exe are all reachable GOTO targets. Apply the same *.cmd text eol=crlf .gitattributes rule.

🧰 Tools
🪛 Blinter (1.0.112)

[error] 1-1: Unix line endings detected. Explanation: Batch file uses Unix line endings (LF-only) which can cause GOTO/CALL label parsing failures and script malfunction due to Windows batch parser 512-byte boundary bugs. Recommendation: Convert file to Windows line endings (CRLF). Use tools like dos2unix, notepad++, or configure git with 'git config core.autocrlf true'. Context: File uses Unix line endings (LF-only) - 32 LF sequences found

(E018)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@launch-app-debug.cmd` around lines 1 - 32, Add a .gitattributes rule to force
CRLF for this batch script and convert the file to CRLF to avoid GOTO label
issues: add either a global entry "*.cmd text eol=crlf" (preferred) or
"launch-app-debug.cmd text eol=crlf" to .gitattributes, then normalize/convert
launch-app-debug.cmd in Git (checkout/commit or core.autocrlf tools) and
recommit so the labels (:start_app, :build_failed, :missing_exe) remain
reachable with CRLF endings.

Comment on lines +1 to +32
@echo off
setlocal

set "REPO_ROOT=%~dp0"
if "%REPO_ROOT:~-1%"=="\" set "REPO_ROOT=%REPO_ROOT:~0,-1%"
set "DOTNET_NOLOGO=1"
set "SOLUTION_PATH=%REPO_ROOT%\SwfocTrainer.sln"

set "APP_EXE=src\SwfocTrainer.App\bin\Release\net8.0-windows\SwfocTrainer.App.exe"

cd /d "%REPO_ROOT%"
if exist "%APP_EXE%" goto :start_app

echo [launch-app-release] Release EXE not found. Building Release...
dotnet build "%SOLUTION_PATH%" -c Release --nologo
set "BUILD_EXIT=%ERRORLEVEL%"
if not "%BUILD_EXIT%"=="0" goto :build_failed
if not exist "%APP_EXE%" goto :missing_exe

:start_app
echo [launch-app-release] Starting %APP_EXE%
start "" "%APP_EXE%"
exit /b 0

:build_failed
echo [launch-app-release] Build failed with exit code %BUILD_EXIT%.
echo [launch-app-release] Ensure the .NET SDK required by global.json is installed.
exit /b %BUILD_EXIT%

:missing_exe
echo [launch-app-release] Could not locate: %APP_EXE%
exit /b 1
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Batch file must use CRLF line endings — LF-only will break GOTO label resolution on Windows.

Blinter confirms 32 LF sequences with no CR. The Windows CMD parser uses 512-byte boundary alignment when scanning for labels; LF-only files cause GOTO :build_failed and GOTO :missing_exe to silently fail or jump to the wrong location. Convert the file to CRLF before merging (e.g., with dos2unix -D reversed, Notepad++, or git config core.autocrlf true).

🧰 Tools
🪛 Blinter (1.0.112)

[error] 1-1: Unix line endings detected. Explanation: Batch file uses Unix line endings (LF-only) which can cause GOTO/CALL label parsing failures and script malfunction due to Windows batch parser 512-byte boundary bugs. Recommendation: Convert file to Windows line endings (CRLF). Use tools like dos2unix, notepad++, or configure git with 'git config core.autocrlf true'. Context: File uses Unix line endings (LF-only) - 32 LF sequences found

(E018)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@launch-app-release.cmd` around lines 1 - 32, The batch file uses LF-only line
endings which breaks CMD label resolution for labels like :start_app,
:build_failed and :missing_exe; convert the file to CRLF line endings before
merging (e.g., update the file’s line endings in your editor or run a conversion
tool) and ensure future commits preserve CRLF by adding a rule (e.g.,
.gitattributes or set core.autocrlf) so launch-app-release.cmd is committed with
CRLF line endings.

sonar.sources=src,tools
sonar.tests=tests
sonar.exclusions=**/bin/**,**/obj/**,1397421866(original mod)/**,3447786229(submod)/**,3661482670(cheat_mode_example)/**
sonar.cs.vstest.reportsPaths=TestResults/*.trx
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

sonar.cs.vstest.reportsPaths will always be empty — missing dotnet test step in the workflow.

The SonarCloud workflow (sonarcloud.yml) restores and builds but never runs dotnet test, so TestResults/*.trx is never written. SonarCloud will silently find no test results or coverage data.

Add a test step between Build and SonarCloud Scan in sonarcloud.yml:

🔧 Proposed addition to sonarcloud.yml
       - name: Build
         run: dotnet build SwfocTrainer.sln -c Release --no-restore
+
+      - name: Test
+        run: |
+          dotnet test SwfocTrainer.sln -c Release --no-build \
+            --filter "FullyQualifiedName!~SwfocTrainer.Tests.Profiles.Live&FullyQualifiedName!~RuntimeAttachSmokeTests" \
+            --logger trx --results-directory TestResults
+
       - name: SonarCloud Scan
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@sonar-project.properties` at line 9, The SonarCloud config sets
sonar.cs.vstest.reportsPaths=TestResults/*.trx but no tests are run, so
TestResults/*.trx is never produced; update the CI workflow (sonarcloud.yml) to
add a dotnet test step between the Build and SonarCloud Scan stages that runs
tests and outputs TRX results (e.g., run dotnet test with --logger
"trx;LogFileName=TestResults/test_results.trx" or the equivalent) so the
sonar.cs.vstest.reportsPaths path is populated for the scan.

Comment on lines +1296 to +1315
var expectedOperationCount = _loadedPatchPreview?.OperationsToApply.Count ?? _loadedPatchPack.Operations.Count;
var result = await _savePatchApplyService.ApplyAsync(SavePath, _loadedPatchPack, SelectedProfileId, strict: IsStrictPatchApply);
SavePatchApplySummary = $"{result.Classification}: {result.Message}";
if (result.Applied)
{
await LoadSaveAsync();
if (!string.IsNullOrWhiteSpace(result.BackupPath))
{
SavePatchCompatibility.Add(new SavePatchCompatibilityViewItem("info", "backup_path", result.BackupPath));
}

if (!string.IsNullOrWhiteSpace(result.ReceiptPath))
{
SavePatchCompatibility.Add(new SavePatchCompatibilityViewItem("info", "receipt_path", result.ReceiptPath));
}
}

Status = result.Applied
? $"Patch applied successfully ({result.Classification}, ops={expectedOperationCount})."
: $"Patch apply failed ({result.Classification}): {result.Message}";
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

SavePatchApplySummary is silently cleared by LoadSaveAsync on the success path

LoadSaveAsync (line 1114) always sets SavePatchApplySummary = string.Empty. Because line 1298 sets the summary before the await LoadSaveAsync() call at line 1301, the user-visible apply summary is wiped out on every successful apply. Only Status is updated afterwards; the dedicated summary panel shows blank.

The same ordering bug exists in RestoreBackupAsync: line 1321 sets SavePatchApplySummary, then line 1324 calls LoadSaveAsync() which clears it again.

Fix: move both assignments to after LoadSaveAsync() returns.

🐛 Proposed fix for `ApplyPatchPackAsync`
     var result = await _savePatchApplyService.ApplyAsync(SavePath, _loadedPatchPack, SelectedProfileId, strict: IsStrictPatchApply);
-    SavePatchApplySummary = $"{result.Classification}: {result.Message}";
     if (result.Applied)
     {
         await LoadSaveAsync();
         if (!string.IsNullOrWhiteSpace(result.BackupPath))
         {
             SavePatchCompatibility.Add(new SavePatchCompatibilityViewItem("info", "backup_path", result.BackupPath));
         }

         if (!string.IsNullOrWhiteSpace(result.ReceiptPath))
         {
             SavePatchCompatibility.Add(new SavePatchCompatibilityViewItem("info", "receipt_path", result.ReceiptPath));
         }
     }

+    SavePatchApplySummary = $"{result.Classification}: {result.Message}";
     Status = result.Applied
         ? $"Patch applied successfully ({result.Classification}, ops={expectedOperationCount})."
         : $"Patch apply failed ({result.Classification}): {result.Message}";
🐛 Proposed fix for `RestoreBackupAsync`
     var result = await _savePatchApplyService.RestoreLastBackupAsync(SavePath);
-    SavePatchApplySummary = result.Message;
     if (result.Restored && !string.IsNullOrWhiteSpace(SelectedProfileId))
     {
         await LoadSaveAsync();
     }

+    SavePatchApplySummary = result.Message;
     Status = result.Restored
         ? $"Backup restored: {result.BackupPath}"
         : $"Backup restore skipped: {result.Message}";
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
var expectedOperationCount = _loadedPatchPreview?.OperationsToApply.Count ?? _loadedPatchPack.Operations.Count;
var result = await _savePatchApplyService.ApplyAsync(SavePath, _loadedPatchPack, SelectedProfileId, strict: IsStrictPatchApply);
SavePatchApplySummary = $"{result.Classification}: {result.Message}";
if (result.Applied)
{
await LoadSaveAsync();
if (!string.IsNullOrWhiteSpace(result.BackupPath))
{
SavePatchCompatibility.Add(new SavePatchCompatibilityViewItem("info", "backup_path", result.BackupPath));
}
if (!string.IsNullOrWhiteSpace(result.ReceiptPath))
{
SavePatchCompatibility.Add(new SavePatchCompatibilityViewItem("info", "receipt_path", result.ReceiptPath));
}
}
Status = result.Applied
? $"Patch applied successfully ({result.Classification}, ops={expectedOperationCount})."
: $"Patch apply failed ({result.Classification}): {result.Message}";
var expectedOperationCount = _loadedPatchPreview?.OperationsToApply.Count ?? _loadedPatchPack.Operations.Count;
var result = await _savePatchApplyService.ApplyAsync(SavePath, _loadedPatchPack, SelectedProfileId, strict: IsStrictPatchApply);
if (result.Applied)
{
await LoadSaveAsync();
if (!string.IsNullOrWhiteSpace(result.BackupPath))
{
SavePatchCompatibility.Add(new SavePatchCompatibilityViewItem("info", "backup_path", result.BackupPath));
}
if (!string.IsNullOrWhiteSpace(result.ReceiptPath))
{
SavePatchCompatibility.Add(new SavePatchCompatibilityViewItem("info", "receipt_path", result.ReceiptPath));
}
}
SavePatchApplySummary = $"{result.Classification}: {result.Message}";
Status = result.Applied
? $"Patch applied successfully ({result.Classification}, ops={expectedOperationCount})."
: $"Patch apply failed ({result.Classification}): {result.Message}";
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/SwfocTrainer.App/ViewModels/MainViewModel.cs` around lines 1296 - 1315,
The apply/restore flow sets SavePatchApplySummary before awaiting LoadSaveAsync,
but LoadSaveAsync always clears SavePatchApplySummary so the UI summary is lost;
in both ApplyPatchPackAsync and RestoreBackupAsync move the assignment
SavePatchApplySummary = $"{result.Classification}: {result.Message}"; to after
the await LoadSaveAsync() returns, and also move any
SavePatchCompatibility.Add(...) calls that depend on
result.BackupPath/result.ReceiptPath to after LoadSaveAsync so they are not
discarded; keep the Status update after these operations. Ensure you update the
code paths that reference SavePatchApplySummary, LoadSaveAsync,
SavePatchCompatibility.Add, Status, and the result variable in both
ApplyPatchPackAsync and RestoreBackupAsync.

Comment on lines 82 to 87
if ($errors.Count -gt 0) {
Write-Host "policy contract validation failed:" -ForegroundColor Red
foreach ($error in $errors) {
Write-Host " - $error" -ForegroundColor Red
}
exit 1
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🔴 Critical

$error is a reserved readonly automatic variable — the error-reporting block will throw under strict mode.

Under Set-StrictMode -Version Latest (line 1) + $ErrorActionPreference = "Stop" (line 2), foreach ($error in $errors) immediately throws "The Variable 'error' cannot be assigned since it is a readonly automatic variable", terminating the script before exit 1 is reached. Individual policy violations are never printed, making CI failures impossible to triage from the log.

🐛 Proposed fix — rename the loop variable
 if ($errors.Count -gt 0) {
     Write-Host "policy contract validation failed:" -ForegroundColor Red
-    foreach ($error in $errors) {
-        Write-Host " - $error" -ForegroundColor Red
+    foreach ($msg in $errors) {
+        Write-Host " - $msg" -ForegroundColor Red
     }
     exit 1
 }
🧰 Tools
🪛 PSScriptAnalyzer (1.24.0)

[error] 84-84: The Variable 'error' cannot be assigned since it is a readonly automatic variable that is built into PowerShell, please use a different name.

(PSAvoidAssignmentToAutomaticVariable)


[warning] 83-83: File 'validate-policy-contracts.ps1' uses Write-Host. Avoid using Write-Host because it might not work in all hosts, does not work when there is no host, and (prior to PS 5.0) cannot be suppressed, captured, or redirected. Instead, use Write-Output, Write-Verbose, or Write-Information.

(PSAvoidUsingWriteHost)


[warning] 85-85: File 'validate-policy-contracts.ps1' uses Write-Host. Avoid using Write-Host because it might not work in all hosts, does not work when there is no host, and (prior to PS 5.0) cannot be suppressed, captured, or redirected. Instead, use Write-Output, Write-Verbose, or Write-Information.

(PSAvoidUsingWriteHost)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/validate-policy-contracts.ps1` around lines 82 - 87, The foreach loop
uses the reserved automatic variable $error which is readonly under
Set-StrictMode and $ErrorActionPreference="Stop", causing the loop to throw
before printing violations; rename the loop variable (e.g., change foreach
($error in $errors) to foreach ($err in $errors)) and update the body to
Write-Host " - $err" -ForegroundColor Red so each policy violation is printed
and the subsequent exit 1 is reached.

Comment on lines 71 to 74
$liveTests = @($bundle.liveTests)
if ($liveTests.Count -eq 0) {
Add-Error "liveTests must contain at least one entry"
}
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

@($bundle.liveTests) produces a single-element array containing $null when liveTests is missing — validation bypass.

In PowerShell, @($null) produces an array with one $null element, so $liveTests.Count will be 1 (not 0), and the "must contain at least one entry" check on Line 72 will pass incorrectly. The same issue applies to @($bundle.processSnapshot) on Line 95.

Proposed fix
-$liveTests = @($bundle.liveTests)
-if ($liveTests.Count -eq 0) {
+$liveTests = if ($null -ne $bundle.liveTests) { @($bundle.liveTests) } else { @() }
+if ($liveTests.Count -eq 0) {

Apply the same pattern for $processSnapshot:

-$processSnapshot = @($bundle.processSnapshot)
+$processSnapshot = if ($null -ne $bundle.processSnapshot) { @($bundle.processSnapshot) } else { @() }
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
$liveTests = @($bundle.liveTests)
if ($liveTests.Count -eq 0) {
Add-Error "liveTests must contain at least one entry"
}
$liveTests = if ($null -ne $bundle.liveTests) { @($bundle.liveTests) } else { @() }
if ($liveTests.Count -eq 0) {
Add-Error "liveTests must contain at least one entry"
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/validate-repro-bundle.ps1` around lines 71 - 74, The current validation
uses $liveTests = @($bundle.liveTests) which yields an array with a single $null
when liveTests is missing, allowing the count check to pass; fix by building
$liveTests as an array that filters out null/empty entries (e.g. pipe
$bundle.liveTests through a Where-Object that removes null/empty values) then
validate Count -eq 0 against that filtered array, and apply the identical change
for $processSnapshot (replace the @($bundle.processSnapshot) creation with a
filtered array) so both $liveTests and $processSnapshot accurately reflect
presence/absence of entries.

Comment on lines 102 to 104
foreach ($required in @("profileId", "reasonCode", "confidence", "launchKind")) {
Require-Field -Object $bundle.launchContext -Field $required -AllowNull
}
Copy link

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

⚠️ Potential issue | 🟠 Major

Null-dereference risk if $bundle.launchContext is missing.

If the bundle JSON has no launchContext property, $bundle.launchContext is $null, and calling $null.PSObject.Properties[...] inside Require-Field will throw. The same applies to $bundle.runtimeMode (Line 107) and $bundle.diagnostics (Line 111). Guard with a null check or verify the field exists on $bundle first.

Proposed fix
+if ($null -eq $bundle.launchContext) {
+    Add-Error "missing required field: launchContext"
+} else {
     foreach ($required in @("profileId", "reasonCode", "confidence", "launchKind")) {
         Require-Field -Object $bundle.launchContext -Field $required -AllowNull
     }
+}

Apply a similar guard for runtimeMode and diagnostics.

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
foreach ($required in @("profileId", "reasonCode", "confidence", "launchKind")) {
Require-Field -Object $bundle.launchContext -Field $required -AllowNull
}
if ($null -eq $bundle.launchContext) {
Add-Error "missing required field: launchContext"
} else {
foreach ($required in @("profileId", "reasonCode", "confidence", "launchKind")) {
Require-Field -Object $bundle.launchContext -Field $required -AllowNull
}
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tools/validate-repro-bundle.ps1` around lines 102 - 104, The code calls
Require-Field with $bundle.launchContext, $bundle.runtimeMode and
$bundle.diagnostics which can be $null and cause a null-dereference inside
Require-Field; add guards that verify those properties exist on $bundle before
invoking Require-Field (e.g. if ($null -ne $bundle.launchContext) { foreach
($required in @(...)) { Require-Field -Object $bundle.launchContext -Field
$required -AllowNull } } else { Throw "Missing launchContext in bundle" } ), and
apply the same pattern for $bundle.runtimeMode and $bundle.diagnostics so
Require-Field always receives a non-null -Object.

@Prekzursil Prekzursil added the needs-reviewer No eligible non-author reviewer available label Feb 17, 2026
@Prekzursil
Copy link
Owner Author

Reviewer automation could not request a non-author reviewer for this PR.

  • PR author: @Prekzursil
  • configured users: Prekzursil
  • configured teams:
  • fallback label applied: needs-reviewer

Next steps:

  1. Add at least one non-author collaborator/team to config/reviewer-roster.json.
  2. Re-run reviewer automation (or request review manually).

Prekzursil added a commit that referenced this pull request Feb 18, 2026
@Prekzursil Prekzursil merged commit ac29fe8 into main Feb 18, 2026
13 of 14 checks passed
@Prekzursil Prekzursil deleted the feature/m2-save-lab branch February 18, 2026 00:14
Prekzursil added a commit that referenced this pull request Feb 18, 2026
Copy link

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: ea1b83e9fa

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".


var allowedProfiles = pack.Compatibility.AllowedProfileIds ?? Array.Empty<string>();
var wildcardAllowed = allowedProfiles.Any(x => x == "*");
var profileAllowed = wildcardAllowed || allowedProfiles.Any(x => x.Equals(targetProfileId, StringComparison.OrdinalIgnoreCase));

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2 Badge Reject null allowedProfileIds entries before compatibility checks

LoadPackAsync currently accepts compatibility.allowedProfileIds entries that are null because contract validation only checks list presence/count, but ValidateCompatibilityAsync then calls x.Equals(...) on each entry and will throw NullReferenceException when it hits a null item. A patch pack containing "allowedProfileIds": [null] will crash preview/apply instead of returning a deterministic compatibility error, so null/blank profile IDs need to be rejected (or compared null-safely) during validation.

Useful? React with 👍 / 👎.

bool strict = true,
CancellationToken cancellationToken = default)
{
var normalizedTargetPath = NormalizeTargetPath(targetSavePath);

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2 Badge Return classified load failure for missing target save path

ApplyAsync validates the target path before entering its try/catch around load, so a missing save file or invalid extension throws immediately and bypasses the structured SavePatchApplyResult (target_load_failed) path. This makes callers fall back to exception handling instead of reason-code diagnostics for a common failure mode; normalize/validate the path inside the guarded load flow or convert that exception into a classified result.

Useful? React with 👍 / 👎.


public async Task<SaveRollbackResult> RestoreLastBackupAsync(string targetSavePath, CancellationToken cancellationToken = default)
{
var normalizedTargetPath = NormalizeTargetPath(targetSavePath);

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

P2 Badge Allow backup restore when target save file is missing

RestoreLastBackupAsync requires targetSavePath to already exist via NormalizeTargetPath, which means restore fails before backup discovery if the primary .sav was deleted or lost. In that recovery scenario, backups may still be valid, but the method throws instead of restoring them, undermining the rollback mechanism.

Useful? React with 👍 / 👎.

Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

area:app WPF app/UI/ViewModel area:ci CI/CD and automation area:docs Documentation/process related work. area:runtime Runtime attach, memory actions, symbol resolution area:saves Save codec and save editing area:tooling Scripts and developer tooling needs-reviewer No eligible non-author reviewer available Review effort 4/5

Projects

None yet

Development

Successfully merging this pull request may close these issues.

1 participant

Comments