[codex] M2 Save Lab: hardening + UX completion (P0-P2) #46
Conversation
📝 Walkthrough

A comprehensive enhancement adding save patch-pack functionality (export, compatibility validation, atomic apply with backup/rollback), repro bundle collection and validation for runtime issues, GitHub Actions workflows for SonarCloud/duplication analysis/policy enforcement/visual audits, evidence-driven operating model documentation, Windows launch scripts, and schema definitions backing these features.
Sequence Diagram

```mermaid
sequenceDiagram
participant UI as Save Editor UI
participant VM as MainViewModel
participant PPS as SavePatchPackService
participant SPA as SavePatchApplyService
participant SC as SaveCodec
participant FS as File System
UI->>VM: ExportPatchPack
activate VM
VM->>PPS: ExportAsync(original, edited, profile)
activate PPS
PPS->>PPS: Diff documents
PPS->>PPS: Build operations for changed fields
PPS->>PPS: Generate metadata & compatibility
PPS-->>VM: SavePatchPack
deactivate PPS
VM->>FS: Write patch to JSON
VM->>VM: SetLoadedPatchPack
UI->>VM: PreviewPatchPack
activate VM
VM->>PPS: PreviewApplyAsync(pack, target, profile)
activate PPS
PPS->>PPS: Validate compatibility
PPS->>PPS: Generate operations preview
PPS-->>VM: SavePatchPreview (Errors, Warnings, Operations)
deactivate PPS
VM->>UI: Populate SavePatchOperations & Compatibility DataGrids
deactivate VM
UI->>VM: ApplyPatchPack(strict=true)
activate VM
VM->>SPA: ApplyAsync(target, pack, profile, strict)
activate SPA
SPA->>SC: Load target document
activate SC
SC-->>SPA: SaveDocument
deactivate SC
SPA->>PPS: ValidateCompatibilityAsync
activate PPS
PPS-->>SPA: CompatibilityResult
deactivate PPS
alt Strict Mode
SPA->>SPA: Check source hash match
end
SPA->>SPA: Create backup snapshot
SPA->>SPA: Apply each operation in order
SPA->>SC: Validate edited document
activate SC
SC-->>SPA: Validation result
deactivate SC
SPA->>FS: Write backup
SPA->>FS: Write temp file
SPA->>FS: Move to target
SPA->>FS: Write receipt JSON
SPA-->>VM: SavePatchApplyResult (Applied, Paths)
deactivate SPA
VM->>UI: Update summary, reload save
deactivate VM
UI->>VM: RestoreBackup
activate VM
VM->>SPA: RestoreLastBackupAsync(target)
activate SPA
SPA->>FS: Resolve latest backup path
SPA->>FS: Read backup bytes
SPA->>FS: Restore to target
SPA-->>VM: SaveRollbackResult (Restored)
deactivate SPA
VM->>UI: Reload save
deactivate VM
```

Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
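The apply phase in the diagram above writes a backup, stages the edited bytes in a temp file, moves it over the target, and then emits a receipt. A minimal sketch of that write pattern is shown below; the helper name, result shape, and receipt format are assumptions for illustration, not the actual `SavePatchApplyService` API.

```csharp
// Hedged sketch of the backup -> temp write -> move -> receipt sequence shown above.
// Paths, naming, and the receipt payload are illustrative only.
using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;

static async Task ApplyAtomicallyAsync(string targetPath, byte[] editedBytes, CancellationToken ct)
{
    var backupPath = targetPath + $".backup-{DateTime.UtcNow:yyyyMMddHHmmss}";
    var tempPath = targetPath + ".tmp";

    // 1. Snapshot the current save before touching it.
    File.Copy(targetPath, backupPath, overwrite: false);

    // 2. Stage the edited document in a temp file next to the target.
    await File.WriteAllBytesAsync(tempPath, editedBytes, ct);

    // 3. Replace the target with the staged file (rename on the same volume).
    File.Move(tempPath, targetPath, overwrite: true);

    // 4. Record what was applied for later audit/rollback.
    await File.WriteAllTextAsync(targetPath + ".receipt.json",
        $"{{\"backupPath\":\"{backupPath.Replace("\\", "\\\\")}\"}}", ct);
}
```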
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
M2 Save Lab verification evidence (offline/deterministic):
Codacy's Analysis Summary: 14 new issues (gate: ≤ 0 issues). Review Pull Request in Codacy →
PR Compliance Guide 🔍

Below is a summary of compliance checks for this PR:

Compliance status legend: 🟢 Fully Compliant · 🟡 Partially Compliant · 🔴 Not Compliant · ⚪ Requires Further Human Verification · 🏷️ Compliance label

PR Code Suggestions ✨
Explore these optional code suggestions:
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 356aa7c8d0
```csharp
await _saveCodec.EditAsync(targetDoc, operation.FieldPath, value, cancellationToken);
return;
```
Prefer fieldId before fieldPath when applying operations
The apply path currently executes EditAsync with operation.FieldPath first and returns on success, so a pack whose fieldPath is stale-but-still-valid (points to a different existing field) will silently mutate the wrong field and never reach the fieldId fallback. This is especially risky because preview resolves by fieldId first and only warns on path mismatch, so previewed behavior can differ from the actual write behavior on the same pack.
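One hedged way to express that ordering is sketched below. Member names (`FieldId`, `FieldPath`, `EditAsync`) follow the snippets quoted in this review; `ResolveFieldPathById` is a hypothetical helper and the real service surface may differ.

```csharp
// Hedged sketch: resolve by fieldId first and only fall back to the recorded fieldPath.
private async Task ApplyOperationAsync(
    SaveDocument targetDoc,
    SavePatchOperation operation,
    object? value,
    CancellationToken cancellationToken)
{
    if (!string.IsNullOrWhiteSpace(operation.FieldId))
    {
        var pathFromId = ResolveFieldPathById(targetDoc, operation.FieldId); // hypothetical helper
        if (pathFromId is not null)
        {
            await _saveCodec.EditAsync(targetDoc, pathFromId, value, cancellationToken);
            return;
        }
    }

    // Fall back to the stored fieldPath only when the fieldId cannot be resolved.
    await _saveCodec.EditAsync(targetDoc, operation.FieldPath, value, cancellationToken);
}
```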
```csharp
.Select(x => x with
{
    OldValue = SavePatchFieldCodec.NormalizePatchValue(x.OldValue, x.ValueType),
    NewValue = SavePatchFieldCodec.NormalizePatchValue(x.NewValue, x.ValueType)
```
Validate null newValue before value normalization
LoadPackAsync normalizes NewValue before running ValidatePackContract, and for numeric/bool value types NormalizePatchValue uses Convert.*, which coerces null into default values (for example 0/false). That means an input operation like "newValue": null can bypass the later newValue is required check and be accepted as a real write, causing unintended default-value patches instead of a contract error.
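A hedged sketch of rejecting the null before normalization follows. The record `with` syntax mirrors the quoted snippet; the `errors` list and `index` variable are assumptions about the surrounding load flow.

```csharp
// Hedged sketch: fail the contract check before Convert-based normalization can
// coerce a null newValue into 0/false.
if (operation.NewValue is null)
{
    errors.Add($"operations[{index}].newValue is required");
}
else
{
    operation = operation with
    {
        OldValue = SavePatchFieldCodec.NormalizePatchValue(operation.OldValue, operation.ValueType),
        NewValue = SavePatchFieldCodec.NormalizePatchValue(operation.NewValue, operation.ValueType)
    };
}
```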
Actionable comments posted: 12
Note
Due to the large number of review comments, Critical, Major severity comments were prioritized as inline comments.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
tools/run-live-validation.ps1 (1)
116-129: ⚠️ Potential issue | 🟡 Minor

`$args` is a PowerShell automatic variable — rename to avoid shadowing it.

Assigning to `$args` is flagged by `PSAvoidAssignmentToAutomaticVariable`. While local shadowing works in practice, it is confusing and could break if the function ever forwards positional arguments. Rename to `$dotnetArgs` (or similar).

🔧 Proposed fix

```diff
-    $args = @(
+    $dotnetArgs = @(
         "test",
         "tests/SwfocTrainer.Tests/SwfocTrainer.Tests.csproj",
         "-c", $Configuration,
         "--filter", $Filter,
         "--logger", "trx;LogFileName=$TrxName",
         "--results-directory", $runResultsDirectory
     )
     if ($NoBuild) {
-        $args += "--no-build"
+        $dotnetArgs += "--no-build"
     }
-    & $dotnetExe @args
+    & $dotnetExe @dotnetArgs
```
🟡 Minor comments (19)
AGENTS.md (1)

13-16: ⚠️ Potential issue | 🟡 Minor

Sub-bullets under item 2 are not indented — they will render as a top-level list, not as children of the numbered item.

In CommonMark/GitHub Flavored Markdown, continuation content under a numbered list item must be indented to the list's content column (typically 3 spaces for single-digit markers). Without indentation these `-` bullets break out of the ordered list and render as a separate unordered list.

📝 Proposed fix

```diff
 2. Mod/runtime bugfixes must include a reproducible bundle:
-- `TestResults/runs/<runId>/repro-bundle.json`
-- `TestResults/runs/<runId>/repro-bundle.md`
+   - `TestResults/runs/<runId>/repro-bundle.json`
+   - `TestResults/runs/<runId>/repro-bundle.md`
 3. PRs must include affected profile IDs...
```
launch-app-debug.cmd (1)
9: ⚠️ Potential issue | 🟡 Minor

Hardcoded `net8.0-windows` TFM will silently break if the project target framework changes.

If the project is ever retargeted (e.g., `net9.0-windows`), the exe won't be found at this path and the script falls through to a `dotnet build`, which may also fail or produce the binary at a different path.

♻️ Suggested improvement — derive the path from the build output

One option is to let the build step always run and derive the exe path dynamically, or at minimum document the TFM assumption clearly with a comment:

```diff
+rem NOTE: Update net8.0-windows below if the TFM in SwfocTrainer.App.csproj changes.
 set "APP_EXE=src\SwfocTrainer.App\bin\Debug\net8.0-windows\SwfocTrainer.App.exe"
```
.github/workflows/duplication-check.yml (1)
29-35: ⚠️ Potential issue | 🟡 Minor

Correct the artifact path or configure the jscpd output directory — currently they don't match.

The repository has no `.jscpd.json` file, so jscpd v4 will use its default output directory `./report`. However, the workflow tries to upload from `jscpd-report` (lines 29-35), which will result in an empty artifact. Either:

- Create `.jscpd.json` with `"output": "jscpd-report"` to match the workflow, or
- Change the workflow `path` to `report` to match jscpd's default.
src/SwfocTrainer.Runtime/AGENTS.md (1)
9-18: ⚠️ Potential issue | 🟡 Minor

Add blank lines around numbered lists for standard Markdown compliance.

Both list blocks (lines 10–12 and 15–18) are missing a blank line before and after them, which Codacy and several Markdown renderers flag as malformed.

📝 Proposed fix

```diff
 ## Required Evidence
+
 1. Include tests for attach/mode/resolution behavior changes.
 2. Include launch-context reason code impact in PR notes.
 3. Include repro bundle evidence for live-only fixes.
+
 ## Runtime Safety Rules
+
 1. Signature-first; never rely on blind fixed addresses.
 2. If runtime confidence is uncertain, block mutating actions.
 3. Record actionable reason codes in diagnostics.
 4. When both `swfoc.exe` and `StarWarsG.exe` exist for FoC, attach host selection must be deterministic.
```
docs/SAVE_LAB_RUNBOOK.md (1)
46-65: ⚠️ Potential issue | 🟡 Minor

Backslash line continuations are invalid in PowerShell.

PowerShell uses the backtick (`` ` ``) for line continuation, not `\`. Users copying these multi-line commands will get parsing errors or mangled arguments.

📝 Proposed fix

```diff
 pwsh ./tools/export-save-patch-pack.ps1 `
-  -OriginalSavePath <original.sav> \
-  -EditedSavePath <edited.sav> \
-  -ProfileId base_swfoc \
-  -SchemaId base_swfoc_steam_v1 \
-  -OutputPath TestResults/patches/example.patch.json \
+  -OriginalSavePath <original.sav> `
+  -EditedSavePath <edited.sav> `
+  -ProfileId base_swfoc `
+  -SchemaId base_swfoc_steam_v1 `
+  -OutputPath TestResults/patches/example.patch.json `
   -BuildIfNeeded

 pwsh ./tools/apply-save-patch-pack.ps1 `
-  -TargetSavePath <target.sav> \
-  -PatchPackPath <patch.json> \
-  -TargetProfileId base_swfoc \
-  -Strict $true \
+  -TargetSavePath <target.sav> `
+  -PatchPackPath <patch.json> `
+  -TargetProfileId base_swfoc `
+  -Strict $true `
   -BuildIfNeeded
```
.github/workflows/policy-contract.yml (1)
58-66: ⚠️ Potential issue | 🟡 Minor

The "justified skip" bypass still requires `repro bundle json:` — verify this is intentional.

The condition at line 63 (`!hasBundleField || (!hasClassification && !hasSkipJustification)`) means a PR with "justified skip" in the body but no "repro bundle json:" field will still fail. If the intent of "justified skip" is to waive the evidence requirement entirely, the condition should be:

```diff
-    if (!hasBundleField || (!hasClassification && !hasSkipJustification)) {
+    if (!hasSkipJustification && (!hasBundleField || !hasClassification)) {
```

Also, the `includes('repro bundle json:')` check is case-folded but whitespace-sensitive — a contributor writing `repro bundle json :` or embedding it inside a markdown table cell will silently fail the gate.
tools/validate-save-patch-pack.ps1 (1)
61-62: ⚠️ Potential issue | 🟡 Minor

`$null` should be on the left-hand side of equality comparisons.

Using `$pack.metadata -eq $null` (and `$pack.compatibility -eq $null`) is flagged by PSScriptAnalyzer's `PSPossibleIncorrectComparisonWithNull` rule. When the operand is a collection, the right-hand `$null` form returns matching elements instead of a boolean.

🛡️ Proposed fix

```diff
-if ($pack.metadata -eq $null) {
+if ($null -eq $pack.metadata) {

-if ($pack.compatibility -eq $null) {
+if ($null -eq $pack.compatibility) {
```

Also applies to: 78-80
tools/apply-save-patch-pack.ps1 (1)
36-37: ⚠️ Potential issue | 🟡 Minor

Hardcoded NuGet package version `8.0.3` in fallback path.

If `Microsoft.Extensions.Logging.Abstractions` is upgraded in the project, this fallback silently misses the new version and falls through to the `throw`. Consider resolving the version dynamically from the `.csproj`/`packages.lock.json`, or at minimum add a comment noting the version must be kept in sync.
tools/apply-save-patch-pack.ps1 (1)
107: ⚠️ Potential issue | 🟡 Minor

`Write-Host` prevents capturing the JSON result from the pipeline.

`Write-Host` writes to the Information stream (stream 6), not the Success stream. Any caller attempting `$result = pwsh ./tools/apply-save-patch-pack.ps1 ...` will get an empty `$result`. Use `Write-Output` so the JSON is capturable.

🛡️ Proposed fix

```diff
-Write-Host $serialized
+Write-Output $serialized
```
tools/validate-save-patch-pack.ps1 (1)
113-117: ⚠️ Potential issue | 🟡 Minor

Unguarded `[int]` cast on `offset` breaks the accumulated-error model.

If `offset` is a non-numeric string (e.g. `"abc"`), the cast `[int]$operation.offset` throws a terminating exception under `$ErrorActionPreference = "Stop"` before the validation error can be added and aggregated. The surrounding validation logic accumulates errors rather than exiting early, so a type-safe check is needed here.

🛡️ Proposed fix

```diff
-    if ([int]$operation.offset -lt 0) {
-        Add-Error "operations[$index].offset must be >= 0"
-    }
+    $offsetVal = 0L
+    if (-not [long]::TryParse([string]$operation.offset, [ref]$offsetVal)) {
+        Add-Error "operations[$index].offset must be an integer"
+    } elseif ($offsetVal -lt 0) {
+        Add-Error "operations[$index].offset must be >= 0"
+    }
```
tools/apply-save-patch-pack.ps1 (1)
48-50: ⚠️ Potential issue | 🟡 Minor

`Resolve-Path "."` resolves to the CWD, not the script root — breaks when called from any other directory.

All DLL paths (`$coreDll`, `$savesDll`, `$loggingDll`) are computed relative to `$repoRoot`. If the script is invoked from outside the repo root (e.g., from a CI working directory that differs, or via absolute path), every path resolves incorrectly and the script fails with a misleading "assemblies not found" error.

🛡️ Proposed fix

```diff
-$repoRoot = Resolve-Path "."
+$repoRoot = Resolve-Path (Join-Path $PSScriptRoot "..")
```
tools/collect-mod-repro-bundle.ps1 (1)
1-11: ⚠️ Potential issue | 🟡 Minor

Missing PowerShell 7 guard, unlike sibling scripts.

`export-save-patch-pack.ps1` and other scripts in this PR check `$PSVersionTable.PSEdition -ne "Core"` and throw. This script uses PS7-specific syntax (e.g., ternary-like inline expressions on Line 374) but lacks the same guard, risking confusing failures on Windows PowerShell 5.1.
src/SwfocTrainer.Saves/Services/SavePatchPackService.cs (1)
187-191: ⚠️ Potential issue | 🟡 Minor

`ToDictionary` will throw on duplicate `Path` or `Id` values in `FieldDefs`.

If a schema accidentally defines two fields with the same `Path` (or `Id`), `ToDictionary` throws an unhandled `ArgumentException`. While this would indicate a schema defect, a more descriptive error (or using a grouped lookup) would improve debuggability.
tools/export-save-patch-pack.ps1 (1)
49: ⚠️ Potential issue | 🟡 Minor

Repo root resolution depends on the current working directory.

`Resolve-Path "."` assumes the script is invoked from the repository root. If invoked from another directory, all derived paths (`$coreDll`, `$savesDll`, schema paths) will be wrong. Consider using `$PSScriptRoot` to resolve the repo root, similar to what `collect-mod-repro-bundle.ps1` does on its Line 13.

Proposed fix

```diff
-$repoRoot = Resolve-Path "."
+$repoRoot = Resolve-Path (Join-Path $PSScriptRoot "..")
```
src/SwfocTrainer.Saves/Services/SavePatchApplyService.cs (1)
167-170:⚠️ Potential issue | 🟡 Minor
`File.Delete` in the catch block could throw and mask the original exception.
Proposed fix
- if (File.Exists(tempOutputPath)) - { - File.Delete(tempOutputPath); - } + try + { + if (File.Exists(tempOutputPath)) + { + File.Delete(tempOutputPath); + } + } + catch (Exception cleanupEx) + { + _logger.LogWarning(cleanupEx, "Failed to clean up temp file {TempPath}", tempOutputPath); + }🤖 Prompt for AI Agents
tools/collect-mod-repro-bundle.ps1 (1)
42-42:⚠️ Potential issue | 🟡 Minor
`$matches` shadows PowerShell's automatic variable.
`$matches` is a built-in automatic variable populated by the `-match` operator. Although this code uses `[regex]::Matches()` and the shadowing is scoped to the function, it can confuse readers and tooling. Rename to `$regexMatches` or `$steamMatches`.
- $matches = [regex]::Matches($CommandLine, "STEAMMOD\s*=\s*([0-9]{4,})", [System.Text.RegularExpressions.RegexOptions]::IgnoreCase) + $regexMatches = [regex]::Matches($CommandLine, "STEAMMOD\s*=\s*([0-9]{4,})", [System.Text.RegularExpressions.RegexOptions]::IgnoreCase) $ids = New-Object System.Collections.Generic.HashSet[string]([StringComparer]::OrdinalIgnoreCase) - foreach ($match in $matches) { + foreach ($match in $regexMatches) {🤖 Prompt for AI Agents
tools/collect-mod-repro-bundle.ps1 (1)
145-156:⚠️ Potential issue | 🟡 Minor
`$args` shadows PowerShell's automatic variable — can cause subtle bugs.
`$args` is a built-in automatic variable that holds unbound arguments passed to a function. Overwriting it here works but is fragile and flagged by PSScriptAnalyzer. Rename to `$pythonArgs` or `$launchArgs`.
- $args = @() + $pythonArgs = @() if ($pythonCmd.Count -gt 1) { - $args += $pythonCmd[1..($pythonCmd.Count - 1)] + $pythonArgs += $pythonCmd[1..($pythonCmd.Count - 1)] } - $args += @( + $pythonArgs += @( "tools/detect-launch-context.py", "--command-line", $Process.commandLine, "--process-name", $Process.name, "--process-path", $Process.path, "--profile-root", $ProfileRootPath, "--pretty" ) try { - $output = & $pythonCmd[0] `@args` 2>&1 + $output = & $pythonCmd[0] `@pythonArgs` 2>&1🤖 Prompt for AI Agents
tools/run-live-validation.ps1 (1)
5-5:⚠️ Potential issue | 🟡 Minor
`[switch]$NoBuild = $true` should be typed as `[bool]` for consistent caller ergonomics.
[switch]defaulting to$truemeans callers can only opt out via-NoBuild:$false, which is non-obvious and violates the expected switch semantics.$EmitReproBundleon line 8 already uses[bool]for this pattern;$NoBuildshould match.🔧 Proposed fix
```diff
-    [switch]$NoBuild = $true,
+    [bool]$NoBuild = $true,
```
src/SwfocTrainer.App/ViewModels/MainViewModel.cs (1)
1479-1482:⚠️ Potential issue | 🟡 Minor
`NumberStyles.AllowThousands` in `double.TryParse` causes unexpected type coercion for comma-formatted input.
NumberStyles.Float | NumberStyles.AllowThousands, the input"1,000"(locale-formatted integer) skips theint/longbranches (which useNumberStyles.Integer, disallowing the comma separator) and lands in thedoublebranch as1000.0. If the target save field expects anint32, the codec will receive adoubleinstead of failing early with a descriptive error.Remove
NumberStyles.AllowThousandsunless grouped thousands notation is a deliberate requirement for save-field values:🔧 Proposed fix
```diff
-    if (double.TryParse(trimmed, NumberStyles.Float | NumberStyles.AllowThousands, CultureInfo.InvariantCulture, out var doubleValue))
+    if (double.TryParse(trimmed, NumberStyles.Float, CultureInfo.InvariantCulture, out var doubleValue))
```
```yaml
      id: repro_bundle_json
      attributes:
        label: Repro bundle JSON path/link
        description: `TestResults/runs/<runId>/repro-bundle.json` or uploaded artifact URL
```
YAML syntax error: unquoted backtick in description will break template rendering.
YAMLlint rejects a backtick as the first character of an unquoted scalar. This can prevent the issue template from loading on GitHub.
🔧 Proposed fix
```diff
-      description: `TestResults/runs/<runId>/repro-bundle.json` or uploaded artifact URL
+      description: "`TestResults/runs/<runId>/repro-bundle.json` or uploaded artifact URL"
```
🧰 Tools
🪛 YAMLlint (1.38.0)
[error] 29-29: syntax error: found character '`' that cannot start any token
(syntax)
```bat
@echo off
setlocal

set "REPO_ROOT=%~dp0"
if "%REPO_ROOT:~-1%"=="\" set "REPO_ROOT=%REPO_ROOT:~0,-1%"
set "DOTNET_NOLOGO=1"
set "SOLUTION_PATH=%REPO_ROOT%\SwfocTrainer.sln"

set "APP_EXE=src\SwfocTrainer.App\bin\Debug\net8.0-windows\SwfocTrainer.App.exe"

cd /d "%REPO_ROOT%"
if exist "%APP_EXE%" goto :start_app

echo [launch-app-debug] Debug EXE not found. Building Debug...
dotnet build "%SOLUTION_PATH%" -c Debug --nologo
set "BUILD_EXIT=%ERRORLEVEL%"
if not "%BUILD_EXIT%"=="0" goto :build_failed
if not exist "%APP_EXE%" goto :missing_exe

:start_app
echo [launch-app-debug] Starting %APP_EXE%
start "" "%APP_EXE%"
exit /b 0

:build_failed
echo [launch-app-debug] Build failed with exit code %BUILD_EXIT%.
echo [launch-app-debug] Ensure the .NET SDK required by global.json is installed.
exit /b %BUILD_EXIT%

:missing_exe
echo [launch-app-debug] Could not locate: %APP_EXE%
exit /b 1
```
Unix line endings (LF-only) — highest risk of the three batch scripts, with three GOTO labels in use.
32 LF sequences detected; :start_app, :build_failed, and :missing_exe are all reachable GOTO targets. Apply the same *.cmd text eol=crlf .gitattributes rule.
🧰 Tools
🪛 Blinter (1.0.112)
[error] 1-1: Unix line endings detected. Explanation: Batch file uses Unix line endings (LF-only) which can cause GOTO/CALL label parsing failures and script malfunction due to Windows batch parser 512-byte boundary bugs. Recommendation: Convert file to Windows line endings (CRLF). Use tools like dos2unix, notepad++, or configure git with 'git config core.autocrlf true'. Context: File uses Unix line endings (LF-only) - 32 LF sequences found
(E018)
```bat
@echo off
setlocal

set "REPO_ROOT=%~dp0"
if "%REPO_ROOT:~-1%"=="\" set "REPO_ROOT=%REPO_ROOT:~0,-1%"
set "DOTNET_NOLOGO=1"
set "SOLUTION_PATH=%REPO_ROOT%\SwfocTrainer.sln"

set "APP_EXE=src\SwfocTrainer.App\bin\Release\net8.0-windows\SwfocTrainer.App.exe"

cd /d "%REPO_ROOT%"
if exist "%APP_EXE%" goto :start_app

echo [launch-app-release] Release EXE not found. Building Release...
dotnet build "%SOLUTION_PATH%" -c Release --nologo
set "BUILD_EXIT=%ERRORLEVEL%"
if not "%BUILD_EXIT%"=="0" goto :build_failed
if not exist "%APP_EXE%" goto :missing_exe

:start_app
echo [launch-app-release] Starting %APP_EXE%
start "" "%APP_EXE%"
exit /b 0

:build_failed
echo [launch-app-release] Build failed with exit code %BUILD_EXIT%.
echo [launch-app-release] Ensure the .NET SDK required by global.json is installed.
exit /b %BUILD_EXIT%

:missing_exe
echo [launch-app-release] Could not locate: %APP_EXE%
exit /b 1
```
Batch file must use CRLF line endings — LF-only will break GOTO label resolution on Windows.
Blinter confirms 32 LF sequences with no CR. The Windows CMD parser uses 512-byte boundary alignment when scanning for labels; LF-only files cause GOTO :build_failed and GOTO :missing_exe to silently fail or jump to the wrong location. Convert the file to CRLF before merging (e.g., with dos2unix -D reversed, Notepad++, or git config core.autocrlf true).
🧰 Tools
🪛 Blinter (1.0.112)
[error] 1-1: Unix line endings detected. Explanation: Batch file uses Unix line endings (LF-only) which can cause GOTO/CALL label parsing failures and script malfunction due to Windows batch parser 512-byte boundary bugs. Recommendation: Convert file to Windows line endings (CRLF). Use tools like dos2unix, notepad++, or configure git with 'git config core.autocrlf true'. Context: File uses Unix line endings (LF-only) - 32 LF sequences found
(E018)
```properties
sonar.sources=src,tools
sonar.tests=tests
sonar.exclusions=**/bin/**,**/obj/**,1397421866(original mod)/**,3447786229(submod)/**,3661482670(cheat_mode_example)/**
sonar.cs.vstest.reportsPaths=TestResults/*.trx
```
sonar.cs.vstest.reportsPaths will always be empty — missing dotnet test step in the workflow.
The SonarCloud workflow (sonarcloud.yml) restores and builds but never runs dotnet test, so TestResults/*.trx is never written. SonarCloud will silently find no test results or coverage data.
Add a test step between Build and SonarCloud Scan in sonarcloud.yml:
🔧 Proposed addition to sonarcloud.yml
```diff
       - name: Build
         run: dotnet build SwfocTrainer.sln -c Release --no-restore
+
+      - name: Test
+        run: |
+          dotnet test SwfocTrainer.sln -c Release --no-build \
+            --filter "FullyQualifiedName!~SwfocTrainer.Tests.Profiles.Live&FullyQualifiedName!~RuntimeAttachSmokeTests" \
+            --logger trx --results-directory TestResults
+
       - name: SonarCloud Scan
```
```csharp
var expectedOperationCount = _loadedPatchPreview?.OperationsToApply.Count ?? _loadedPatchPack.Operations.Count;
var result = await _savePatchApplyService.ApplyAsync(SavePath, _loadedPatchPack, SelectedProfileId, strict: IsStrictPatchApply);
SavePatchApplySummary = $"{result.Classification}: {result.Message}";
if (result.Applied)
{
    await LoadSaveAsync();
    if (!string.IsNullOrWhiteSpace(result.BackupPath))
    {
        SavePatchCompatibility.Add(new SavePatchCompatibilityViewItem("info", "backup_path", result.BackupPath));
    }

    if (!string.IsNullOrWhiteSpace(result.ReceiptPath))
    {
        SavePatchCompatibility.Add(new SavePatchCompatibilityViewItem("info", "receipt_path", result.ReceiptPath));
    }
}

Status = result.Applied
    ? $"Patch applied successfully ({result.Classification}, ops={expectedOperationCount})."
    : $"Patch apply failed ({result.Classification}): {result.Message}";
```
SavePatchApplySummary is silently cleared by LoadSaveAsync on the success path
LoadSaveAsync (line 1114) always sets SavePatchApplySummary = string.Empty. Because line 1298 sets the summary before the await LoadSaveAsync() call at line 1301, the user-visible apply summary is wiped out on every successful apply. Only Status is updated afterwards; the dedicated summary panel shows blank.
The same ordering bug exists in RestoreBackupAsync: line 1321 sets SavePatchApplySummary, then line 1324 calls LoadSaveAsync() which clears it again.
Fix: move both assignments to after LoadSaveAsync() returns.
🐛 Proposed fix for `ApplyPatchPackAsync`

```diff
 var result = await _savePatchApplyService.ApplyAsync(SavePath, _loadedPatchPack, SelectedProfileId, strict: IsStrictPatchApply);
-SavePatchApplySummary = $"{result.Classification}: {result.Message}";
 if (result.Applied)
 {
     await LoadSaveAsync();
     if (!string.IsNullOrWhiteSpace(result.BackupPath))
     {
         SavePatchCompatibility.Add(new SavePatchCompatibilityViewItem("info", "backup_path", result.BackupPath));
     }
     if (!string.IsNullOrWhiteSpace(result.ReceiptPath))
     {
         SavePatchCompatibility.Add(new SavePatchCompatibilityViewItem("info", "receipt_path", result.ReceiptPath));
     }
 }
+SavePatchApplySummary = $"{result.Classification}: {result.Message}";
 Status = result.Applied
     ? $"Patch applied successfully ({result.Classification}, ops={expectedOperationCount})."
     : $"Patch apply failed ({result.Classification}): {result.Message}";
```

🐛 Proposed fix for `RestoreBackupAsync`
```diff
 var result = await _savePatchApplyService.RestoreLastBackupAsync(SavePath);
-SavePatchApplySummary = result.Message;
 if (result.Restored && !string.IsNullOrWhiteSpace(SelectedProfileId))
 {
     await LoadSaveAsync();
 }
+SavePatchApplySummary = result.Message;
 Status = result.Restored
     ? $"Backup restored: {result.BackupPath}"
     : $"Backup restore skipped: {result.Message}";
```
```powershell
if ($errors.Count -gt 0) {
    Write-Host "policy contract validation failed:" -ForegroundColor Red
    foreach ($error in $errors) {
        Write-Host " - $error" -ForegroundColor Red
    }
    exit 1
```
$error is a reserved readonly automatic variable — the error-reporting block will throw under strict mode.
Under Set-StrictMode -Version Latest (line 1) + $ErrorActionPreference = "Stop" (line 2), foreach ($error in $errors) immediately throws "The Variable 'error' cannot be assigned since it is a readonly automatic variable", terminating the script before exit 1 is reached. Individual policy violations are never printed, making CI failures impossible to triage from the log.
🐛 Proposed fix — rename the loop variable
```diff
 if ($errors.Count -gt 0) {
     Write-Host "policy contract validation failed:" -ForegroundColor Red
-    foreach ($error in $errors) {
-        Write-Host " - $error" -ForegroundColor Red
+    foreach ($msg in $errors) {
+        Write-Host " - $msg" -ForegroundColor Red
     }
     exit 1
 }
```

🧰 Tools
🪛 PSScriptAnalyzer (1.24.0)
[error] 84-84: The Variable 'error' cannot be assigned since it is a readonly automatic variable that is built into PowerShell, please use a different name.
(PSAvoidAssignmentToAutomaticVariable)
[warning] 83-83: File 'validate-policy-contracts.ps1' uses Write-Host. Avoid using Write-Host because it might not work in all hosts, does not work when there is no host, and (prior to PS 5.0) cannot be suppressed, captured, or redirected. Instead, use Write-Output, Write-Verbose, or Write-Information.
(PSAvoidUsingWriteHost)
[warning] 85-85: File 'validate-policy-contracts.ps1' uses Write-Host. Avoid using Write-Host because it might not work in all hosts, does not work when there is no host, and (prior to PS 5.0) cannot be suppressed, captured, or redirected. Instead, use Write-Output, Write-Verbose, or Write-Information.
(PSAvoidUsingWriteHost)
```powershell
$liveTests = @($bundle.liveTests)
if ($liveTests.Count -eq 0) {
    Add-Error "liveTests must contain at least one entry"
}
```
@($bundle.liveTests) produces a single-element array containing $null when liveTests is missing — validation bypass.
In PowerShell, @($null) produces an array with one $null element, so $liveTests.Count will be 1 (not 0), and the "must contain at least one entry" check on Line 72 will pass incorrectly. The same issue applies to @($bundle.processSnapshot) on Line 95.
Proposed fix

```diff
-$liveTests = @($bundle.liveTests)
-if ($liveTests.Count -eq 0) {
+$liveTests = if ($null -ne $bundle.liveTests) { @($bundle.liveTests) } else { @() }
+if ($liveTests.Count -eq 0) {
```

Apply the same pattern for `$processSnapshot`:

```diff
-$processSnapshot = @($bundle.processSnapshot)
+$processSnapshot = if ($null -ne $bundle.processSnapshot) { @($bundle.processSnapshot) } else { @() }
```
```powershell
foreach ($required in @("profileId", "reasonCode", "confidence", "launchKind")) {
    Require-Field -Object $bundle.launchContext -Field $required -AllowNull
}
```
Null-dereference risk if $bundle.launchContext is missing.
If the bundle JSON has no launchContext property, $bundle.launchContext is $null, and calling $null.PSObject.Properties[...] inside Require-Field will throw. The same applies to $bundle.runtimeMode (Line 107) and $bundle.diagnostics (Line 111). Guard with a null check or verify the field exists on $bundle first.
Proposed fix

```diff
+if ($null -eq $bundle.launchContext) {
+    Add-Error "missing required field: launchContext"
+} else {
 foreach ($required in @("profileId", "reasonCode", "confidence", "launchKind")) {
     Require-Field -Object $bundle.launchContext -Field $required -AllowNull
 }
+}
```

Apply a similar guard for `runtimeMode` and `diagnostics`.
Reviewer automation could not request a non-author reviewer for this PR.
Next steps:
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: ea1b83e9fa
```csharp
var allowedProfiles = pack.Compatibility.AllowedProfileIds ?? Array.Empty<string>();
var wildcardAllowed = allowedProfiles.Any(x => x == "*");
var profileAllowed = wildcardAllowed || allowedProfiles.Any(x => x.Equals(targetProfileId, StringComparison.OrdinalIgnoreCase));
```
Reject null allowedProfileIds entries before compatibility checks
LoadPackAsync currently accepts compatibility.allowedProfileIds entries that are null because contract validation only checks list presence/count, but ValidateCompatibilityAsync then calls x.Equals(...) on each entry and will throw NullReferenceException when it hits a null item. A patch pack containing "allowedProfileIds": [null] will crash preview/apply instead of returning a deterministic compatibility error, so null/blank profile IDs need to be rejected (or compared null-safely) during validation.
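A hedged null-safe variant of the quoted comparison is sketched below; it keeps the same property names and only changes the predicate, so a malformed `"allowedProfileIds": [null]` yields a compatibility error rather than a crash.

```csharp
// Hedged sketch: skip null/blank entries so they fail compatibility instead of throwing.
var allowedProfiles = pack.Compatibility.AllowedProfileIds ?? Array.Empty<string>();
var wildcardAllowed = allowedProfiles.Any(x => x == "*");
var profileAllowed = wildcardAllowed || allowedProfiles.Any(x =>
    !string.IsNullOrWhiteSpace(x) &&
    x.Equals(targetProfileId, StringComparison.OrdinalIgnoreCase));
```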
    bool strict = true,
    CancellationToken cancellationToken = default)
{
    var normalizedTargetPath = NormalizeTargetPath(targetSavePath);
Return classified load failure for missing target save path
ApplyAsync validates the target path before entering its try/catch around load, so a missing save file or invalid extension throws immediately and bypasses the structured SavePatchApplyResult (target_load_failed) path. This makes callers fall back to exception handling instead of reason-code diagnostics for a common failure mode; normalize/validate the path inside the guarded load flow or convert that exception into a classified result.
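One possible shape for folding the path check into the classified flow, sketched with hypothetical result types (the real SavePatchApplyResult record and reason codes differ):

```csharp
#nullable enable
using System;
using System.IO;

// Hypothetical shapes for illustration only.
enum ApplyClassification { Applied, TargetLoadFailed }
record ApplyResult(ApplyClassification Classification, string? Message = null);

static class ApplyFlowSketch
{
    // Sketch: keep path validation inside the guarded flow so a missing or
    // misnamed save becomes a classified result instead of an unhandled exception.
    public static ApplyResult Apply(string targetSavePath)
    {
        try
        {
            if (!File.Exists(targetSavePath) ||
                !string.Equals(Path.GetExtension(targetSavePath), ".sav", StringComparison.OrdinalIgnoreCase))
            {
                return new ApplyResult(ApplyClassification.TargetLoadFailed,
                    $"target save missing or not a .sav file: {targetSavePath}");
            }

            // ... load document, validate compatibility, apply operations, atomic write ...
            return new ApplyResult(ApplyClassification.Applied);
        }
        catch (Exception ex) when (ex is IOException or UnauthorizedAccessException)
        {
            // I/O failures during load are also folded into the classified result.
            return new ApplyResult(ApplyClassification.TargetLoadFailed, ex.Message);
        }
    }
}
```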
public async Task<SaveRollbackResult> RestoreLastBackupAsync(string targetSavePath, CancellationToken cancellationToken = default)
{
    var normalizedTargetPath = NormalizeTargetPath(targetSavePath);
Allow backup restore when target save file is missing
RestoreLastBackupAsync requires targetSavePath to already exist via NormalizeTargetPath, which means restore fails before backup discovery if the primary .sav was deleted or lost. In that recovery scenario, backups may still be valid, but the method throws instead of restoring them, undermining the rollback mechanism.
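A sketch of backup discovery that does not require the primary file to exist; the backup naming pattern here is an assumption for illustration, not the service's actual convention:

```csharp
#nullable enable
using System.IO;
using System.Linq;

// Illustrative only: locate the newest backup even if the primary .sav is gone.
static class RestoreSketch
{
    public static string? ResolveLatestBackup(string targetSavePath)
    {
        var fullPath = Path.GetFullPath(targetSavePath);
        var directory = Path.GetDirectoryName(fullPath);
        var stem = Path.GetFileNameWithoutExtension(fullPath);

        if (directory is null || !Directory.Exists(directory))
        {
            return null; // nothing to restore from
        }

        // Assumed pattern: <save>.sav.bak-<timestamp>
        return Directory.EnumerateFiles(directory, stem + ".sav.bak-*")
            .OrderByDescending(File.GetLastWriteTimeUtc)
            .FirstOrDefault();
    }
}
```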
User description
Summary
This PR delivers the integrated M2 Save Lab hardening + UX completion wave (P0+P1+P2) on top of the existing Save Lab foundation.
Primary outcomes:
Extended codec value types (float, double, ascii) for deterministic offline workflows.
Refs #8
Problem and User Impact
Before this change set:
Patch apply could fail when fieldPath drifted even when fieldId was correct.
Effect on users: patch-pack workflows were less predictable and less recoverable, especially when moving artifacts across save variants.
Root Cause
Contract validation was too permissive (missing newValue requirement and raw JSON checks).
Apply did not verify fieldPath freshness.
Wrapper logger resolution assumed a single NullLogger<T>.Instance shape.
What Changed
P0 correctness
SavePatchPackService contract enforcement:
- metadata.schemaVersion == "1.0"
- metadata.createdAtUtc required
- SetValue offset >= 0
- operations[].newValue required (oldValue remains optional)
Updated tools/schemas/save-patch-pack.schema.json and the validator script to require newValue.
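For illustration, a minimal C# sketch of these contract checks; the record shapes and method are hypothetical stand-ins, not the actual SavePatchPackService code:

```csharp
#nullable enable
using System;
using System.Collections.Generic;

// Illustrative models only; the real records live in SavePatchModels.cs.
record PatchMetadata(string? SchemaVersion, DateTimeOffset? CreatedAtUtc);
record PatchOperation(string Kind, long Offset, string? NewValue, string? OldValue);

static class ContractSketch
{
    public static IReadOnlyList<string> Validate(PatchMetadata metadata, IReadOnlyList<PatchOperation> operations)
    {
        var errors = new List<string>();

        if (metadata.SchemaVersion != "1.0")
            errors.Add("metadata.schemaVersion must be \"1.0\"");
        if (metadata.CreatedAtUtc is null)
            errors.Add("metadata.createdAtUtc is required");

        foreach (var op in operations)
        {
            if (op.Kind == "SetValue" && op.Offset < 0)
                errors.Add("SetValue offset must be >= 0");
            if (string.IsNullOrEmpty(op.NewValue))
                errors.Add("operations[].newValue is required (oldValue stays optional)");
        }

        return errors;
    }
}
```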
P1 service/API behavior
Changed ISavePatchPackService.PreviewApplyAsync(...) to require targetProfileId.
Preview reports FieldId/FieldPath drift.
Apply resolves by FieldPath, then retries FieldId on selector mismatch.
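A rough sketch of that FieldPath-first, FieldId-fallback resolution; the lookup structures and helper are hypothetical, and only the ordering and drift warning mirror the described behavior:

```csharp
#nullable enable
using System.Collections.Generic;

static class SelectorSketch
{
    // Resolve an operation target: prefer fieldPath, fall back to the stable fieldId.
    public static (long Offset, string? Warning) Resolve(
        string fieldPath,
        string fieldId,
        IReadOnlyDictionary<string, long> byPath,
        IReadOnlyDictionary<string, long> byId)
    {
        if (byPath.TryGetValue(fieldPath, out var offset))
        {
            return (offset, null);
        }

        if (byId.TryGetValue(fieldId, out offset))
        {
            // Selector drift: the path moved but the fieldId still matches.
            return (offset, $"fieldPath '{fieldPath}' drifted; resolved via fieldId '{fieldId}'");
        }

        throw new KeyNotFoundException($"selector mismatch for {fieldPath} / {fieldId}");
    }
}
```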
P1 UX completion
Strict-apply checkbox in the Save Editor bound to IsStrictPatchApply.
P2 type and tooling hardening
Codec support for float, double, and ascii writes.
Wrapper resolution of NullLogger<T>.Instance across field/property runtime variants.
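As a reference point, a self-contained sketch of invariant-culture handling for these value types; the method names are illustrative rather than the BinarySaveCodec API:

```csharp
using System;
using System.Globalization;
using System.Text;

// Sketch of invariant-culture float/double/ascii encoding and decoding.
static class ValueCodecSketch
{
    public static byte[] EncodeFloat(string text) =>
        BitConverter.GetBytes(float.Parse(text, CultureInfo.InvariantCulture));

    public static byte[] EncodeDouble(string text) =>
        BitConverter.GetBytes(double.Parse(text, CultureInfo.InvariantCulture));

    public static byte[] EncodeAscii(string text) =>
        Encoding.ASCII.GetBytes(text);

    public static string DecodeFloat(ReadOnlySpan<byte> bytes) =>
        BitConverter.ToSingle(bytes).ToString(CultureInfo.InvariantCulture);
}
```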
Epic #8 / M2 Acceptance Mapping
Verification Evidence
Build and deterministic tests
dotnet build SwfocTrainer.sln -c Release --no-restore
dotnet test tests/SwfocTrainer.Tests/SwfocTrainer.Tests.csproj -c Release --no-build --filter "FullyQualifiedName~SavePatch"
dotnet test tests/SwfocTrainer.Tests/SwfocTrainer.Tests.csproj -c Release --no-build --filter "FullyQualifiedName!~SwfocTrainer.Tests.Profiles.Live&FullyQualifiedName!~RuntimeAttachSmokeTests"
powershell.exe -NoProfile -ExecutionPolicy Bypass -Command "& '.\\tools\\validate-save-patch-pack.ps1' -PatchPackPath 'tools/fixtures/save_patch_pack_sample.json' -SchemaPath 'tools/schemas/save-patch-pack.schema.json' -Strict"Wrapper smoke (offline synthetic saves)
tools/export-save-patch-pack.ps1 executed successfully with validation pass
tools/apply-save-patch-pack.ps1 executed with Classification=Applied on strict hash match
Artifacts written under TestResults/savepatch-smoke/
Notes
PR Type
Enhancement, Tests, Documentation
Description
M2 Save Lab hardening and UX completion wave (P0-P2)
Core patch-pack workflow implementation:
Strict schema-path patch-pack validation with contract enforcement (newValue required, metadata checks, operation kind validation)
Atomic patch apply with backup/receipt generation and rollback capability (a minimal write sketch follows this list)
Fallback selector strategy: try fieldPath, then retry fieldId on mismatch with drift warnings
Extended codec support for float, double, and ascii field types with invariant-culture parsing
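As referenced above, a minimal sketch of a backup-then-temp-then-swap write; paths and the helper name are illustrative, and the real SavePatchApplyService also validates the edited document and writes a receipt before completing:

```csharp
using System.IO;

// Sketch of an atomic write with a backup snapshot taken first.
static class AtomicWriteSketch
{
    public static void WriteWithBackup(string targetPath, byte[] patchedBytes, string backupPath)
    {
        File.Copy(targetPath, backupPath, overwrite: false); // snapshot before mutation

        var tempPath = targetPath + ".tmp";
        File.WriteAllBytes(tempPath, patchedBytes);          // write to a temp file first

        File.Move(tempPath, targetPath, overwrite: true);    // single-step swap into place
    }
}
```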
UI and UX enhancements:
Save Editor patch-pack workflow integration with load/export/preview/apply/restore commands
Strict mode checkbox (default ON) for source-hash validation control
Operation preview and compatibility diagnostic grids in editor
Profile-aware preview and apply semantics with explicit target profile requirement
Service hardening:
ISavePatchPackService for export, load, compatibility validation, and preview
ISavePatchApplyService for atomic apply with strict/non-strict mode control
Raw JSON contract validation and case-insensitive property handling for disk roundtrip robustness
Hardened PowerShell wrappers with NullLogger<T> instance resolution across field/property variants
Tooling and governance:
New validation scripts:
validate-save-patch-pack.ps1, validate-repro-bundle.ps1, validate-policy-contracts.ps1
New operational scripts: export-save-patch-pack.ps1, apply-save-patch-pack.ps1, collect-mod-repro-bundle.ps1, compare-visual-pack.ps1
JSON schemas for patch-pack and repro-bundle contracts
Evidence-first operating model with repro bundle classification and launch-context tracking
Policy contract enforcement workflow for PR reliability evidence
Documentation and testing:
Comprehensive test coverage for patch-pack export, validation, apply, and rollback workflows
New runbooks: SAVE_LAB_RUNBOOK.md, LIVE_VALIDATION_RUNBOOK.md, VISUAL_AUDIT_RUNBOOK.md
Architecture and profile format documentation for patch-pack pipeline
Agent operating contract (AGENTS.md) and AI-native team operating model
Windows batch launchers for Debug/Release app and deterministic/live test runners
CI workflows for schema validation, policy enforcement, code quality, and duplication detection
Verification:
17 passing deterministic tests for patch-pack operations
End-to-end export/apply/restore smoke evidence
Schema validation for all patch-pack and repro-bundle artifacts
Removed machine-specific save root override for portability
Diagram Walkthrough
File Walkthrough
25 files
MainViewModel.cs
Save Lab patch-pack UI integration and workflow commands
src/SwfocTrainer.App/ViewModels/MainViewModel.cs
(SavePatchPackPath, IsStrictPatchApply, SavePatchOperations, SavePatchCompatibility)
operations for save patch packs
with suffix detection
ISavePatchPackService and ISavePatchApplyService dependencies
SavePatchPackService.cs
Patch-pack export, load, and compatibility validation service
src/SwfocTrainer.Saves/Services/SavePatchPackService.cs
validation
case-insensitive property handling
operations
fieldPath to fieldId with drift warnings
SavePatchApplyService.cs
Atomic patch apply with backup and rollback pipeline
src/SwfocTrainer.Saves/Services/SavePatchApplyService.cs
rollback capability
non-strict mode for drift tolerance
then fieldId)
SavePatchFieldCodec.cs
Patch field codec with extended type support
src/SwfocTrainer.Saves/Internal/SavePatchFieldCodec.cs
double, byte, bool, and ascii types
invariant culture
BinarySaveCodec.cs
Extended codec support for float, double, and ascii types
src/SwfocTrainer.Saves/Services/BinarySaveCodec.cs
double field type reading support in schema tree building
float, double, and ascii value types
SavePatchModels.cs
Core domain models for save patch-pack workflows
src/SwfocTrainer.Core/Models/SavePatchModels.cs
SavePatchPack top-level record with metadata, compatibility, and operations
SavePatchMetadata with schema version, profile, schema ID, source hash, and creation timestamp
SavePatchOperation with kind, field path/ID, value type, old/new values, and offset
outcomes, and rollback operations
Enums.cs
Enums for patch operation kinds and apply classifications
src/SwfocTrainer.Core/Models/Enums.cs
SavePatchOperationKind enum with SetValue variant
SavePatchApplyClassification enum with Applied, ValidationFailed, CompatibilityFailed, WriteFailed, and RolledBack states
ISavePatchPackService.cs
Service contract for patch-pack operations
src/SwfocTrainer.Core/Contracts/ISavePatchPackService.cs
validation, and preview operations
semantics
ISavePatchApplyService.cs
Service contract for patch apply and rollback
src/SwfocTrainer.Core/Contracts/ISavePatchApplyService.cs
control
SavePatchOperationViewItem.cs
View model for patch operation preview rows
src/SwfocTrainer.App/Models/SavePatchOperationViewItem.cs
path/ID, value type, and old/new values
SavePatchCompatibilityViewItem.cs
View model for compatibility diagnostic rows
src/SwfocTrainer.App/Models/SavePatchCompatibilityViewItem.cs
message
run-live-validation.ps1
Live validation script with scoped execution and repro-bundle support
tools/run-live-validation.ps1
FULL) with run ID tracking
detection
artifact detection
templates
collect-mod-repro-bundle.ps1
Repro bundle collection and classification script
tools/collect-mod-repro-bundle.ps1
and live test results
next-action guidance
mode from test outcomes
validate-save-patch-pack.ps1
Patch-pack JSON schema validation script
tools/validate-save-patch-pack.ps1
contract checks
enforcement
apply-save-patch-pack.ps1
PowerShell wrapper for atomic save patch apply
tools/apply-save-patch-pack.ps1
atomically
NullLogger instance across field/property runtime variants for robustness
backup/receipt paths
export-save-patch-pack.ps1
PowerShell wrapper for save patch export
tools/export-save-patch-pack.ps1
save pairs
logger resolution
writing to disk
validate-repro-bundle.ps1
Repro bundle schema and semantic validator
tools/validate-repro-bundle.ps1
strict/lenient modes
classification consistency)
launchContext, diagnostics)
compare-visual-pack.ps1
Visual pack baseline comparison and reporting
tools/compare-visual-pack.ps1
and missing files
diff_detected)
validate-policy-contracts.ps1
Policy contract validation for governance
tools/validate-policy-contracts.ps1
templates, issue templates)
GitHub templates
MainWindow.xaml
Save Lab patch-pack UI integration in editor
src/SwfocTrainer.App/MainWindow.xaml
workflow UI
buttons
behavior
display
launch-app-debug.cmd
Windows Debug app launcher
launch-app-debug.cmd
launch-app-release.cmd
Windows Release app launcher
launch-app-release.cmd
run-live-tests.cmd
Windows live tests runner
run-live-tests.cmd
running
run-deterministic-tests.cmd
Windows deterministic tests runner
run-deterministic-tests.cmd
ci.yml
CI workflow schema validation additions
.github/workflows/ci.yml
1 file
App.xaml.cs
Service registration and save root hardening
src/SwfocTrainer.App/App.xaml.cs
DefaultSaveRootPath override removed from SaveOptions
ISavePatchPackService and ISavePatchApplyService in dependency injection container
3 files
SavePatchPackServiceTests.cs
Comprehensive tests for patch-pack export and validation
tests/SwfocTrainer.Tests/Saves/SavePatchPackServiceTests.cs
operation ordering
source-hash warnings
newValue and invalid metadata
warnings
SavePatchApplyServiceTests.cs
Comprehensive tests for patch apply and rollback workflows
tests/SwfocTrainer.Tests/Saves/SavePatchApplyServiceTests.cs
restoration
rollback
SaveCodecTests.cs
Extended type support tests for codec
tests/SwfocTrainer.Tests/Saves/SaveCodecTests.cs
validation
precision
11 files
TODO.md
M2 Save Lab completion and evidence tracking
TODO.md
(TestResults/runs//repro-bundle.json)
launch reasonCode for runtime/mod tasks
runs, schema validation)
LIVE_VALIDATION_RUNBOOK.md
Live validation runbook for evidence collection
docs/LIVE_VALIDATION_RUNBOOK.md
repro-bundle.json linked in issue evidence
TEST_PLAN.md
Test plan expansion for M2 Save Lab
docs/TEST_PLAN.md
(float, double, ascii) to SaveCodecTests
with specific coverage areas
save patch-pack schema validation
export/apply/restore workflows
SAVE_LAB_RUNBOOK.md
Save Lab operator workflow documentation
docs/SAVE_LAB_RUNBOOK.md
toggle, restore
parameter examples
README.md
README updates for M2 Save Lab features
README.md
export/import, compatibility preview, strict toggle, atomic apply, and
rollback
launch-app-debug.cmd, test runners)
examples
AGENTS.md
Repository agent operating contract
AGENTS.md
workflow
bundles, profile IDs, reason codes
fix → evidence → closure
explicit profile compatibility
ARCHITECTURE.md
Architecture documentation for M2 Save Lab
docs/ARCHITECTURE.md
stages
rollback sequence
before write
operations
AI_NATIVE_TEAM_OPERATING_MODEL.md
AI-native team operating model for reliability
docs/AI_NATIVE_TEAM_OPERATING_MODEL.md
runtime/mod debugging
classification → fix → evidence → closure
code changes
output, repro bundle
PROFILE_FORMAT.md
Save patch-pack contract documentation
docs/PROFILE_FORMAT.md
artifact structure
constraints
newValue requirement, optional oldValue, and typed field operations only
generation
VISUAL_AUDIT_RUNBOOK.md
Visual audit runbook for screenshot comparison
docs/VISUAL_AUDIT_RUNBOOK.md
TestResults/runs//visual-pack//.png
artifacts/visual-baselines/
EXTERNAL_TOOLS_SETUP.md
External tools setup and configuration
docs/EXTERNAL_TOOLS_SETUP.md
11 files
repro-bundle.schema.json
Repro bundle JSON schema definition
tools/schemas/repro-bundle.schema.json
launchContext, runtimeMode, liveTests, diagnostics, classification
classification values
outcomes, and diagnostics
bug.yml
Bug template with repro bundle evidence fields
.github/ISSUE_TEMPLATE/bug.yml
run_id, repro_bundle_json, repro_bundle_md, classification
blocked_environment, blocked_profile_mismatch)
and STEAMMOD/MODPATH data
helper readiness details
save-patch-pack.schema.json
Save patch-pack JSON schema definition
tools/schemas/save-patch-pack.schema.json
createdAtUtc)
saveBuildHint)
newValue, optional oldValue, and strict field constraints
visual-audit.yml
Visual audit workflow for baseline comparison
.github/workflows/visual-audit.yml
report
configured
policy-contract.yml
Policy contract enforcement workflow
.github/workflows/policy-contract.yml
presence
bundle + classification)
evidence fields
calibration.yml
Calibration template with bundle evidence fields
.github/ISSUE_TEMPLATE/calibration.yml
run_id, bundle_json, bundle_md for repro bundle evidence
runtimeModeHint/runtimeModeEffective/runtimeModeReasonCode
repro_bundle_sample.json
Sample repro bundle fixture
tools/fixtures/repro_bundle_sample.json
mode, and passed test outcome
diagnostics sections
save_patch_pack_sample.json
Sample save patch-pack fixture
tools/fixtures/save_patch_pack_sample.json
compatibility, and operations
oldValue, newValue, offset
sonarcloud.yml
SonarCloud integration workflow
.github/workflows/sonarcloud.yml
duplication-check.yml
Code duplication detection workflow
.github/workflows/duplication-check.yml
pull_request_template.md
PR template with reliability evidence fields
.github/pull_request_template.md
base_swfoc, AOTR, ROE, custom
reason codes, runtime mode, classification
6 files
Summary by CodeRabbit
New Features
CI/Workflows
Documentation
Policy Evidence Tokens
Repro bundle JSON:
tools/fixtures/repro_bundle_sample.json
Classification:
justified skip (save/editor + tooling scope in this PR; live runtime evidence tracked separately in #19/#34)