diff --git a/AGENTS.md b/AGENTS.md
index 43e474c2a..d97738fe2 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -10,6 +10,10 @@ This project uses Nix with direnv. You should already be in the Nix shell automa
nix develop
```
+Watch out for these Nix quirks:
+- If Nix tries to fetch from git during a build, it is likely that `spago.yaml` files were changed but the lock file was not updated; if so, update the lock file with `spago build`.
+- If a Nix build appears to be stale, it is likely that files were added or modified but are not tracked by Git; if so, add them with `git add` and retry (see the example below).
+
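+A minimal recovery sketch for both cases (the file path is a placeholder):
+
+```sh
+# Lock file drift after editing a spago.yaml: regenerate spago.lock
+spago build
+
+# Stale build: new files must be tracked by Git before Nix can see them
+git add src/NewModule.purs
+nix build
+```
+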
### Build and Test
The registry is implemented in PureScript. Use spago to build it and run PureScript tests. These are cheap and fast and should be used when working on the registry packages.
@@ -19,17 +23,27 @@ spago build # Build all PureScript code
spago test # Run unit tests
```
-Integration tests require two terminals (or the use of test-env in detached mode). The integration tests are only necessary to run if working on the server (app).
+#### End-to-End Tests
+
+The end-to-end (integration) tests are in `app-e2e`. They can be run via Nix on Linux:
+
+```sh
+nix build .#checks.x86_64-linux.integration
+```
+
+Alternatively, on macOS or for more iterative test development, you can run them using two terminals: one to start the test environment and one to execute the tests.
+
```sh
# Terminal 1: Start test environment (wiremock mocks + registry server on port 9000)
nix run .#test-env
# Terminal 2: Run E2E tests once server is ready
-spago run -p registry-app-e2e
+spago-test-e2e
```
-Options: `nix run .#test-env -- --tui` for interactive TUI, `-- --detached` for background mode.
+Options: `nix run .#test-env -- --tui` for an interactive TUI, `-- --detached` for background mode (useful when only a single terminal is available).
+
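+For a single terminal, a detached-mode run looks roughly like this (a sketch combining commands documented here and in CONTRIBUTING.md):
+
+```sh
+nix run .#test-env -- --detached   # start wiremock mocks + registry server in the background
+spago-test-e2e                     # run the E2E tests once the server is ready
+process-compose down               # stop all services when finished
+```
+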
+State is stored in `/tmp/registry-test-env` and cleaned up on each `nix run .#test-env`. To examine state after a test run (for debugging), stop the test-env but don't restart it.
#### Smoke Test (Linux only)
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 92f5f9dcf..ebe38a0dd 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -72,20 +72,29 @@ nix build .#checks.x86_64-linux.smoke -L
### Integration Test
+On Linux, you can run the integration tests with:
+
+```sh
+nix build .#checks.x86_64-linux.integration -L
+```
+
+On macOS or for iterative development, you can instead start the test environment and run the tests separately.
+
```sh
# Terminal 1: Start the test environment (wiremock mocks + registry server)
nix run .#test-env
-# Terminal 2: Once the server is ready, run the E2E tests
-spago run -p registry-app-e2e
+# Terminal 2: Run E2E tests once server is ready
+spago-test-e2e
```
The test environment:
- Starts wiremock services mocking GitHub, S3, Pursuit, etc.
-- Starts the registry server on port 9000 with a temporary SQLite database
+- Starts the registry server with a temporary SQLite database
- Uses fixture data from `app/fixtures/`
+- Stores state in `/tmp/registry-test-env`, cleaned up on each `nix run .#test-env`
-Press `Ctrl+C` in Terminal 1 to stop all services. State is cleaned up automatically.
+Press `Ctrl+C` in Terminal 1 to stop all services.
All arguments after `--` are passed directly to process-compose:
@@ -101,7 +110,11 @@ process-compose attach # Attach TUI
process-compose down # Stop all services
```
-You can also set `STATE_DIR` to use a persistent state directory instead of a temp dir.
+To examine state after a test run (e.g., for debugging), stop the test-env but don't restart it. The state remains in `/tmp/registry-test-env`:
+- `db/registry.sqlite3` — SQLite database
+- `scratch/registry/` — Local registry clone with metadata
+- `scratch/registry-index/` — Local manifest index clone
+- `repo-fixtures/` — Git fixture repositories
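+
+For example, to poke at this leftover state (assuming `sqlite3` and `git` are on your PATH; the commands are illustrative only):
+
+```sh
+# List the tables in the test database
+sqlite3 /tmp/registry-test-env/db/registry.sqlite3 '.tables'
+
+# Inspect recent commits in the local registry clone
+git -C /tmp/registry-test-env/scratch/registry log --oneline -5
+```
+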
## Available Nix Commands
diff --git a/SPEC.md b/SPEC.md
index 423d0d80d..54c627d05 100644
--- a/SPEC.md
+++ b/SPEC.md
@@ -197,6 +197,7 @@ All packages in the registry contain a `purs.json` manifest file in their root d
- `version`: a valid [`Version`](#version)
- `license`: a valid [`License`](#license)
- `location`: a valid [`Location`](#location)
+- `ref`: a `string` representing the reference (e.g., a Git commit or Git tag) at the `location` that was used to fetch this version's source code
- `owners` (optional): a non-empty array of [`Owner`](#owner)
- `description` (optional): a description of your library as a plain text string, not markdown, up to 300 characters
- `includeFiles` (optional): a non-empty array of globs, where globs are used to match file paths (in addition to the `src` directory and other [always-included files](#always-included-files)) that you want included in your package tarball
@@ -221,6 +222,7 @@ For example:
"githubOwner": "purescript",
"githubRepo": "purescript-control"
},
+ "ref": "v4.2.0",
"include": ["test/**/*.purs"],
"exclude": ["test/graphs"],
"dependencies": { "newtype": ">=3.0.0 <4.0.0", "prelude": ">=4.0.0 <5.0.0" }
diff --git a/app-e2e/spago.yaml b/app-e2e/spago.yaml
index c19e78c42..fb3804b90 100644
--- a/app-e2e/spago.yaml
+++ b/app-e2e/spago.yaml
@@ -8,21 +8,24 @@ package:
- codec-json
- console
- datetime
- - effect
- - either
- - foldable-traversable
+ - exceptions
+ - fetch
+ - integers
- json
- - maybe
+ - node-child-process
+ - node-execa
- node-fs
- node-path
- node-process
- - prelude
+ - ordered-collections
- registry-app
- registry-foreign
- registry-lib
- registry-test-utils
+ - routing-duplex
- spec
- spec-node
- strings
+ - transformers
run:
main: Test.E2E.Main
diff --git a/app-e2e/src/Test/E2E/Endpoint/Jobs.purs b/app-e2e/src/Test/E2E/Endpoint/Jobs.purs
new file mode 100644
index 000000000..2aa157673
--- /dev/null
+++ b/app-e2e/src/Test/E2E/Endpoint/Jobs.purs
@@ -0,0 +1,76 @@
+module Test.E2E.Endpoint.Jobs (spec) where
+
+import Registry.App.Prelude
+
+import Data.Array as Array
+import Registry.API.V1 (JobId(..))
+import Registry.API.V1 as V1
+import Registry.Test.Assert as Assert
+import Test.E2E.Support.Client as Client
+import Test.E2E.Support.Env (E2ESpec)
+import Test.E2E.Support.Env as Env
+import Test.E2E.Support.Fixtures as Fixtures
+import Test.Spec as Spec
+
+spec :: E2ESpec
+spec = do
+ Spec.describe "Status endpoint" do
+ Spec.it "can reach the status endpoint" do
+ Client.getStatus
+
+ Spec.describe "Jobs list" do
+ Spec.it "excludes completed jobs when include_completed is false" do
+ -- Create a job and wait for it to complete
+ { jobId } <- Client.publish Fixtures.effectPublishData
+ _ <- Env.pollJobOrFail jobId
+
+ -- Now we have at least one completed job
+ recentJobs <- Client.getJobsWith Client.ActiveOnly
+ allJobs <- Client.getJobsWith Client.IncludeCompleted
+
+ -- All jobs should include the completed publish job
+ let allCount = Array.length allJobs
+ Assert.shouldSatisfy allCount (_ > 0)
+
+ -- Active-only should return fewer or equal jobs
+ let recentCount = Array.length recentJobs
+ Assert.shouldSatisfy recentCount (_ <= allCount)
+
+ -- Verify completed jobs are excluded from active-only results
+ let completedJob = Array.find (\job -> isJust (V1.jobInfo job).finishedAt) allJobs
+ case completedJob of
+ Just completed -> do
+ let
+ completedId = (V1.jobInfo completed).jobId
+ inRecent = Array.any (\job -> (V1.jobInfo job).jobId == completedId) recentJobs
+ when inRecent do
+ Assert.fail $ "Completed job " <> unwrap completedId <> " should be excluded from include_completed=false results"
+ Nothing -> pure unit
+
+ Spec.describe "Job query parameters" do
+ Spec.it "accepts level and since parameters" do
+ { jobId } <- Client.publish Fixtures.effectPublishData
+ job <- Env.pollJobOrFail jobId
+ let info = V1.jobInfo job
+
+ baseJob <- Client.getJob jobId Nothing Nothing
+ Assert.shouldEqual (V1.jobInfo baseJob).jobId info.jobId
+
+ debugJob <- Client.getJob jobId (Just V1.Debug) Nothing
+ Assert.shouldEqual (V1.jobInfo debugJob).jobId info.jobId
+
+ let sinceTime = fromMaybe info.createdAt info.finishedAt
+ sinceJob <- Client.getJob jobId Nothing (Just sinceTime)
+ Assert.shouldEqual (V1.jobInfo sinceJob).jobId info.jobId
+
+ Spec.describe "Jobs API error handling" do
+ Spec.it "returns HTTP 404 for non-existent job ID" do
+ let fakeJobId = JobId "nonexistent-job-id-12345"
+ result <- Client.tryGetJob fakeJobId Nothing Nothing
+ case result of
+ Right _ ->
+ Assert.fail "Expected HTTP 404 for non-existent job"
+ Left err ->
+ case Client.clientErrorStatus err of
+ Just 404 -> pure unit
+ _ -> Assert.fail $ "Expected HTTP 404, got: " <> Client.printClientError err
diff --git a/app-e2e/src/Test/E2E/Endpoint/Publish.purs b/app-e2e/src/Test/E2E/Endpoint/Publish.purs
new file mode 100644
index 000000000..47e51c959
--- /dev/null
+++ b/app-e2e/src/Test/E2E/Endpoint/Publish.purs
@@ -0,0 +1,76 @@
+module Test.E2E.Endpoint.Publish (spec) where
+
+import Registry.App.Prelude
+
+import Data.Array as Array
+import Data.Array.NonEmpty as NEA
+import Data.Map as Map
+import Data.Set as Set
+import Data.String as String
+import Registry.API.V1 (Job(..))
+import Registry.API.V1 as V1
+import Registry.Manifest (Manifest(..))
+import Registry.Metadata (Metadata(..))
+import Registry.Sha256 as Sha256
+import Registry.Test.Assert as Assert
+import Registry.Version as Version
+import Test.E2E.Support.Client as Client
+import Test.E2E.Support.Env (E2ESpec)
+import Test.E2E.Support.Env as Env
+import Test.E2E.Support.Fixtures as Fixtures
+import Test.E2E.Support.WireMock as WireMock
+import Test.Spec as Spec
+
+spec :: E2ESpec
+spec = do
+ Spec.describe "Publish workflow" do
+ Spec.it "can publish effect@4.0.0 and verify all state changes" do
+ { jobId } <- Client.publish Fixtures.effectPublishData
+ job <- Env.pollJobOrFail jobId
+ Assert.shouldSatisfy (V1.jobInfo job).finishedAt isJust
+
+ uploadOccurred <- Env.hasStorageUpload Fixtures.effect
+ unless uploadOccurred do
+ storageRequests <- WireMock.getStorageRequests
+ WireMock.failWithRequests "Expected S3 PUT for effect/4.0.0.tar.gz" storageRequests
+
+ Metadata metadata <- Env.readMetadata Fixtures.effect.name
+ case Map.lookup Fixtures.effect.version metadata.published of
+ Nothing -> Assert.fail $ "Expected version " <> Version.print Fixtures.effect.version <> " in metadata published versions"
+ Just publishedMeta -> do
+ Assert.shouldSatisfy (Sha256.print publishedMeta.hash) (not <<< String.null)
+
+ manifestEntries <- Env.readManifestIndexEntry Fixtures.effect.name
+ let hasVersion = Array.any (\(Manifest m) -> m.version == Fixtures.effect.version) manifestEntries
+ unless hasVersion do
+ Assert.fail $ "Expected version " <> Version.print Fixtures.effect.version <> " in manifest index"
+
+ Env.waitForAllMatrixJobs Fixtures.effect
+
+ -- Collect the compilers from the matrix jobs that ran for this package
+ allJobs <- Client.getJobsWith Client.IncludeCompleted
+ let
+ matrixCompilers = Array.mapMaybe
+ ( case _ of
+ MatrixJob { packageName, packageVersion, compilerVersion } ->
+ if packageName == Fixtures.effect.name && packageVersion == Fixtures.effect.version then Just compilerVersion
+ else Nothing
+ _ -> Nothing
+ )
+ allJobs
+ -- The expected compilers are: the publish compiler + all matrix job compilers
+ expectedCompilers = Set.fromFoldable $ Array.cons Fixtures.effectPublishData.compiler matrixCompilers
+
+ Metadata metadataAfter <- Env.readMetadata Fixtures.effect.name
+ case Map.lookup Fixtures.effect.version metadataAfter.published of
+ Nothing -> Assert.fail "Version missing after matrix jobs"
+ Just publishedMetaAfter -> do
+ let actualCompilers = Set.fromFoldable $ NEA.toArray publishedMetaAfter.compilers
+ Assert.shouldEqual actualCompilers expectedCompilers
+
+ Spec.describe "Publish state machine" do
+ Spec.it "returns same jobId for duplicate publish requests" do
+ { jobId: id1 } <- Client.publish Fixtures.effectPublishData
+ _ <- Env.pollJobOrFail id1
+ { jobId: id2 } <- Client.publish Fixtures.effectPublishData
+ Assert.shouldEqual id1 id2
diff --git a/app-e2e/src/Test/E2E/Endpoint/Transfer.purs b/app-e2e/src/Test/E2E/Endpoint/Transfer.purs
new file mode 100644
index 000000000..e06b466a2
--- /dev/null
+++ b/app-e2e/src/Test/E2E/Endpoint/Transfer.purs
@@ -0,0 +1,51 @@
+module Test.E2E.Endpoint.Transfer (spec) where
+
+import Registry.App.Prelude
+
+import Data.Array as Array
+import Registry.API.V1 as V1
+import Registry.Location (Location(..))
+import Registry.Metadata (Metadata(..))
+import Registry.PackageName as PackageName
+import Registry.Test.Assert as Assert
+import Test.E2E.Support.Client as Client
+import Test.E2E.Support.Env (E2ESpec)
+import Test.E2E.Support.Env as Env
+import Test.E2E.Support.Fixtures as Fixtures
+import Test.E2E.Support.WireMock as WireMock
+import Test.Spec as Spec
+
+spec :: E2ESpec
+spec = do
+ Spec.describe "Transfer workflow" do
+ Spec.it "can transfer effect to a new location with full state verification" do
+ { jobId: publishJobId } <- Client.publish Fixtures.effectPublishData
+ _ <- Env.pollJobOrFail publishJobId
+ Env.waitForAllMatrixJobs Fixtures.effect
+
+ Metadata originalMetadata <- Env.readMetadata Fixtures.effect.name
+ case originalMetadata.location of
+ GitHub { owner } -> Assert.shouldEqual owner "purescript"
+ Git _ -> Assert.fail "Expected GitHub location, got Git"
+
+ -- clear the publish PUT so we can verify transfers leave storage unaffected
+ WireMock.clearStorageRequests
+
+ authData <- Env.signTransferOrFail Fixtures.effectTransferData
+ { jobId: transferJobId } <- Client.transfer authData
+ transferJob <- Env.pollJobOrFail transferJobId
+ Assert.shouldSatisfy (V1.jobInfo transferJob).finishedAt isJust
+
+ Metadata newMetadata <- Env.readMetadata Fixtures.effect.name
+ case newMetadata.location of
+ GitHub { owner } -> Assert.shouldEqual owner "new-owner"
+ Git _ -> Assert.fail "Expected GitHub location after transfer, got Git"
+
+ storageRequests <- WireMock.getStorageRequests
+ let
+ packagePath = PackageName.print Fixtures.effect.name
+ putOrDeleteRequests = Array.filter
+ (\r -> (r.method == "PUT" || r.method == "DELETE") && WireMock.filterByUrlContaining packagePath [ r ] /= [])
+ storageRequests
+ unless (Array.null putOrDeleteRequests) do
+ WireMock.failWithRequests "Transfer should not PUT or DELETE to storage" putOrDeleteRequests
diff --git a/app-e2e/src/Test/E2E/Endpoint/Unpublish.purs b/app-e2e/src/Test/E2E/Endpoint/Unpublish.purs
new file mode 100644
index 000000000..9a82e943b
--- /dev/null
+++ b/app-e2e/src/Test/E2E/Endpoint/Unpublish.purs
@@ -0,0 +1,52 @@
+module Test.E2E.Endpoint.Unpublish (spec) where
+
+import Registry.App.Prelude
+
+import Data.Map as Map
+import Data.String as String
+import Registry.API.V1 as V1
+import Registry.Metadata (Metadata(..))
+import Registry.Test.Assert as Assert
+import Test.E2E.Support.Client as Client
+import Test.E2E.Support.Env (E2ESpec)
+import Test.E2E.Support.Env as Env
+import Test.E2E.Support.Fixtures as Fixtures
+import Test.E2E.Support.WireMock as WireMock
+import Test.Spec as Spec
+
+spec :: E2ESpec
+spec = do
+ Spec.describe "Publish-Unpublish workflow" do
+ Spec.it "can publish effect@4.0.0 then unpublish it with full state verification" do
+ { jobId: publishJobId } <- Client.publish Fixtures.effectPublishData
+ _ <- Env.pollJobOrFail publishJobId
+ Env.waitForAllMatrixJobs Fixtures.effect
+
+ existsBefore <- Env.manifestIndexEntryExists Fixtures.effect
+ unless existsBefore do
+ Assert.fail "Expected version to exist in manifest index before unpublish"
+
+ authData <- Env.signUnpublishOrFail Fixtures.effectUnpublishData
+ { jobId: unpublishJobId } <- Client.unpublish authData
+ unpublishJob <- Env.pollJobOrFail unpublishJobId
+ Assert.shouldSatisfy (V1.jobInfo unpublishJob).finishedAt isJust
+
+ Metadata metadata <- Env.readMetadata Fixtures.effect.name
+
+ case Map.lookup Fixtures.effect.version metadata.unpublished of
+ Nothing ->
+ Assert.fail "Expected version 4.0.0 to be in 'unpublished' metadata"
+ Just unpublishedInfo ->
+ Assert.shouldSatisfy unpublishedInfo.reason (not <<< String.null)
+
+ when (Map.member Fixtures.effect.version metadata.published) do
+ Assert.fail "Version 4.0.0 should not be in 'published' metadata after unpublish"
+
+ deleteOccurred <- Env.hasStorageDelete Fixtures.effect
+ unless deleteOccurred do
+ storageRequests <- WireMock.getStorageRequests
+ WireMock.failWithRequests "Expected S3 DELETE for effect/4.0.0.tar.gz" storageRequests
+
+ existsAfter <- Env.manifestIndexEntryExists Fixtures.effect
+ when existsAfter do
+ Assert.fail "Expected version to be removed from manifest index after unpublish"
diff --git a/app-e2e/src/Test/E2E/GitHubIssue.purs b/app-e2e/src/Test/E2E/GitHubIssue.purs
index be9f3ba8f..c4598313a 100644
--- a/app-e2e/src/Test/E2E/GitHubIssue.purs
+++ b/app-e2e/src/Test/E2E/GitHubIssue.purs
@@ -1,218 +1,149 @@
-- | End-to-end tests for the GitHubIssue workflow.
--- | These tests exercise the full flow: parsing a GitHub event, submitting to
--- | the registry API, polling for completion, and posting comments.
+-- | Tests the full flow: parsing GitHub event → submitting to registry API →
+-- | polling for completion → posting comments.
module Test.E2E.GitHubIssue (spec) where
import Registry.App.Prelude
+import Control.Monad.Reader (ask)
import Data.Array as Array
import Data.Codec.JSON as CJ
import Data.Codec.JSON.Record as CJ.Record
import Data.String as String
import Effect.Aff (Milliseconds(..))
-import Effect.Aff as Aff
import JSON as JSON
import Node.FS.Aff as FS.Aff
import Node.Path as Path
import Node.Process as Process
import Registry.App.GitHubIssue as GitHubIssue
import Registry.Foreign.Tmp as Tmp
-import Registry.Operation (AuthenticatedData)
import Registry.Operation as Operation
-import Registry.Test.E2E.Client as Client
-import Registry.Test.E2E.Fixtures as Fixtures
-import Registry.Test.E2E.WireMock (WireMockRequest)
-import Registry.Test.E2E.WireMock as WireMock
-import Test.Spec (Spec)
+import Test.E2E.Support.Client as Client
+import Test.E2E.Support.Env (E2E, E2ESpec)
+import Test.E2E.Support.Fixtures as Fixtures
+import Test.E2E.Support.WireMock as WireMock
import Test.Spec as Spec
-spec :: Spec Unit
+spec :: E2ESpec
spec = do
Spec.describe "GitHubIssue end-to-end" do
- Spec.before clearWireMockJournal do
-
- Spec.it "handles a publish via GitHub issue, posts comments, and closes issue on success" \_ -> do
- result <- runWorkflowWithEvent $ mkGitHubPublishEvent Fixtures.effectPublishData
-
- assertJobSucceeded result
- assertHasComment jobStartedText result
- assertHasComment jobCompletedText result
- assertIssueClosed result
-
- Spec.it "posts failure comment and leaves issue open when job fails" \_ -> do
- result <- runWorkflowWithEvent $ mkGitHubAuthenticatedEventFrom "random-user" Fixtures.failingTransferData
-
- assertJobFailed result
- assertHasComment jobStartedText result
- assertHasComment jobFailedText result
- assertNoComment jobCompletedText result
- assertIssueOpen result
-
- Spec.it "re-signs authenticated operation for trustee (job fails due to unpublish time limit)" \_ -> do
- result <- runWorkflowWithEvent $ mkGitHubAuthenticatedEvent Fixtures.trusteeAuthenticatedData
-
- assertHasComment jobStartedText result
- assertTeamsApiCalled result
-
- where
- clearWireMockJournal :: Aff Unit
- clearWireMockJournal = do
- wmConfig <- liftEffect WireMock.configFromEnv
- WireMock.clearRequestsOrFail wmConfig
-
+ Spec.it "handles publish via GitHub issue, posts comments, and closes issue on success" do
+ requests <- runWorkflow $ mkPublishEvent Fixtures.effectPublishData
+ assertComment "Job started" requests
+ assertComment "Job completed successfully" requests
+ assertClosed requests
+
+ Spec.it "posts failure comment and leaves issue open when job fails" do
+ requests <- runWorkflow $ mkAuthenticatedEvent "random-user" Fixtures.failingTransferData
+ assertComment "Job started" requests
+ assertComment "Job failed" requests
+ assertNoComment "Job completed successfully" requests
+ assertOpen requests
+
+ Spec.it "calls Teams API to verify trustee membership for authenticated operation" do
+ requests <- runWorkflow $ mkAuthenticatedEvent packagingTeamUser Fixtures.trusteeAuthenticatedData
+ assertComment "Job started" requests
+ assertTeamsApiCalled requests
+
+ Spec.it "posts error comment when issue body contains invalid JSON" do
+ requests <- runWorkflow Fixtures.invalidJsonIssueEvent
+ assertComment "malformed" requests
+ assertOpen requests
+
+-- Constants
testIssueNumber :: Int
testIssueNumber = 101
--- | Username configured as a packaging team member in test WireMock fixtures.
--- | See nix/test/config.nix for the GitHub Teams API stub.
-packagingTeamUsername :: String
-packagingTeamUsername = "packaging-team-user"
-
-jobStartedText :: String
-jobStartedText = "Job started"
-
-jobCompletedText :: String
-jobCompletedText = "Job completed successfully"
-
-jobFailedText :: String
-jobFailedText = "Job failed"
-
-packagingTeamMembersPath :: String
-packagingTeamMembersPath = "/orgs/purescript/teams/packaging/members"
-
-testPollConfig :: GitHubIssue.PollConfig
-testPollConfig =
- { maxAttempts: 60
- , interval: Milliseconds 500.0
- }
+packagingTeamUser :: String
+packagingTeamUser = "packaging-team-user"
+-- Event builders
githubEventCodec :: CJ.Codec { sender :: { login :: String }, issue :: { number :: Int, body :: String } }
githubEventCodec = CJ.named "GitHubEvent" $ CJ.Record.object
{ sender: CJ.Record.object { login: CJ.string }
, issue: CJ.Record.object { number: CJ.int, body: CJ.string }
}
-mkGitHubPublishEvent :: Operation.PublishData -> String
-mkGitHubPublishEvent publishData =
+mkPublishEvent :: Operation.PublishData -> String
+mkPublishEvent publishData =
let
- publishJson = JSON.print $ CJ.encode Operation.publishCodec publishData
- body = "```json\n" <> publishJson <> "\n```"
- event = { sender: { login: packagingTeamUsername }, issue: { number: testIssueNumber, body } }
+ body = "```json\n" <> JSON.print (CJ.encode Operation.publishCodec publishData) <> "\n```"
in
- JSON.print $ CJ.encode githubEventCodec event
-
-mkGitHubAuthenticatedEvent :: AuthenticatedData -> String
-mkGitHubAuthenticatedEvent = mkGitHubAuthenticatedEventFrom packagingTeamUsername
+ JSON.print $ CJ.encode githubEventCodec
+ { sender: { login: packagingTeamUser }, issue: { number: testIssueNumber, body } }
-mkGitHubAuthenticatedEventFrom :: String -> AuthenticatedData -> String
-mkGitHubAuthenticatedEventFrom username authData =
+mkAuthenticatedEvent :: String -> Operation.AuthenticatedData -> String
+mkAuthenticatedEvent username authData =
let
- authJson = JSON.print $ CJ.encode Operation.authenticatedCodec authData
- body = "```json\n" <> authJson <> "\n```"
- event = { sender: { login: username }, issue: { number: testIssueNumber, body } }
+ body = "```json\n" <> JSON.print (CJ.encode Operation.authenticatedCodec authData) <> "\n```"
in
- JSON.print $ CJ.encode githubEventCodec event
+ JSON.print $ CJ.encode githubEventCodec
+ { sender: { login: username }, issue: { number: testIssueNumber, body } }
-issuePath :: Int -> String
-issuePath n = "/issues/" <> show n
+-- Workflow runner
+runWorkflow :: String -> E2E (Array WireMock.WireMockRequest)
+runWorkflow eventJson = do
+ { stateDir } <- ask
-issueCommentsPath :: Int -> String
-issueCommentsPath n = issuePath n <> "/comments"
+ Client.getStatus
-commentRequests :: Array WireMockRequest -> Array WireMockRequest
-commentRequests =
- WireMock.filterByMethod "POST"
- >>> WireMock.filterByUrlContaining (issueCommentsPath testIssueNumber)
-
-closeRequests :: Array WireMockRequest -> Array WireMockRequest
-closeRequests =
- WireMock.filterByMethod "PATCH"
- >>> WireMock.filterByUrlContaining (issuePath testIssueNumber)
-
-teamsRequests :: Array WireMockRequest -> Array WireMockRequest
-teamsRequests =
- WireMock.filterByMethod "GET"
- >>> WireMock.filterByUrlContaining packagingTeamMembersPath
+ tmpDir <- liftAff Tmp.mkTmpDir
+ let eventPath = Path.concat [ tmpDir, "github-event.json" ]
+ liftAff $ FS.Aff.writeTextFile UTF8 eventPath eventJson
+ liftEffect $ Process.setEnv "GITHUB_EVENT_PATH" eventPath
-bodyContains :: String -> WireMockRequest -> Boolean
-bodyContains text r = fromMaybe false (String.contains (String.Pattern text) <$> r.body)
+ originalCwd <- liftEffect Process.cwd
+ liftEffect $ Process.chdir stateDir
-hasComment :: String -> Array WireMockRequest -> Boolean
-hasComment text = Array.any (bodyContains text)
+ envResult <- liftAff GitHubIssue.initializeGitHub
+ for_ envResult \env -> do
+ let testEnv = env { pollConfig = { maxAttempts: 60, interval: Milliseconds 500.0 }, logVerbosity = Quiet }
+ liftAff $ void $ GitHubIssue.runGitHubIssue testEnv
--- | Result of running the GitHubIssue workflow.
-type RunResult =
- { success :: Boolean
- , requests :: Array WireMockRequest
- }
+ liftEffect $ Process.chdir originalCwd
--- | Run the GitHub issue workflow with a given event JSON.
--- | Handles server check, temp file creation, env setup, and request capture.
-runWorkflowWithEvent :: String -> Aff RunResult
-runWorkflowWithEvent eventJson = do
- -- Verify server is reachable
- config <- liftEffect Client.configFromEnv
- statusResult <- Client.getStatus config
- case statusResult of
- Left err -> Aff.throwError $ Aff.error $ "Server not reachable: " <> Client.printClientError err
- Right _ -> pure unit
-
- -- Write event to temp file
- tmpDir <- Tmp.mkTmpDir
- let eventPath = Path.concat [ tmpDir, "github-event.json" ]
- FS.Aff.writeTextFile UTF8 eventPath eventJson
- liftEffect $ Process.setEnv "GITHUB_EVENT_PATH" eventPath
+ WireMock.getGithubRequests
- -- Initialize and run workflow
- envResult <- GitHubIssue.initializeGitHub
- case envResult of
- Nothing ->
- Aff.throwError $ Aff.error "initializeGitHub returned Nothing"
- Just env -> do
- let testEnv = env { pollConfig = testPollConfig, logVerbosity = Quiet }
- result <- GitHubIssue.runGitHubIssue testEnv
-
- -- Capture WireMock requests
- wmConfig <- liftEffect WireMock.configFromEnv
- requests <- WireMock.getRequestsOrFail wmConfig
-
- case result of
- Left err ->
- WireMock.failWithRequests ("runGitHubIssue failed: " <> err) requests
- Right success ->
- pure { success, requests }
-
-assertJobSucceeded :: RunResult -> Aff Unit
-assertJobSucceeded { success, requests } =
- unless success do
- WireMock.failWithRequests "Job did not succeed" requests
-
-assertJobFailed :: RunResult -> Aff Unit
-assertJobFailed { success, requests } =
- when success do
- WireMock.failWithRequests "Expected job to fail but it succeeded" requests
-
-assertHasComment :: String -> RunResult -> Aff Unit
-assertHasComment text { requests } =
- unless (hasComment text (commentRequests requests)) do
+-- Assertions (all operate on captured requests)
+assertComment :: String -> Array WireMock.WireMockRequest -> E2E Unit
+assertComment text requests = do
+ let
+ comments = requests # Array.filter \r ->
+ r.method == "POST" && String.contains (String.Pattern $ "/issues/" <> show testIssueNumber <> "/comments") r.url
+ unless (Array.any (bodyContains text) comments) do
WireMock.failWithRequests ("Expected '" <> text <> "' comment but not found") requests
-assertNoComment :: String -> RunResult -> Aff Unit
-assertNoComment text { requests } =
- when (hasComment text (commentRequests requests)) do
+assertNoComment :: String -> Array WireMock.WireMockRequest -> E2E Unit
+assertNoComment text requests = do
+ let
+ comments = requests # Array.filter \r ->
+ r.method == "POST" && String.contains (String.Pattern $ "/issues/" <> show testIssueNumber <> "/comments") r.url
+ when (Array.any (bodyContains text) comments) do
WireMock.failWithRequests ("Did not expect '" <> text <> "' comment") requests
-assertIssueClosed :: RunResult -> Aff Unit
-assertIssueClosed { requests } =
- when (Array.null (closeRequests requests)) do
- WireMock.failWithRequests "Expected issue to be closed, but no close request was made" requests
+assertClosed :: Array WireMock.WireMockRequest -> E2E Unit
+assertClosed requests = do
+ let
+ closes = requests # Array.filter \r ->
+ r.method == "PATCH" && String.contains (String.Pattern $ "/issues/" <> show testIssueNumber) r.url
+ when (Array.null closes) do
+ WireMock.failWithRequests "Expected issue to be closed" requests
-assertIssueOpen :: RunResult -> Aff Unit
-assertIssueOpen { requests } =
- unless (Array.null (closeRequests requests)) do
- WireMock.failWithRequests "Expected issue to remain open, but a close request was made" requests
+assertOpen :: Array WireMock.WireMockRequest -> E2E Unit
+assertOpen requests = do
+ let
+ closes = requests # Array.filter \r ->
+ r.method == "PATCH" && String.contains (String.Pattern $ "/issues/" <> show testIssueNumber) r.url
+ unless (Array.null closes) do
+ WireMock.failWithRequests "Expected issue to remain open" requests
-assertTeamsApiCalled :: RunResult -> Aff Unit
-assertTeamsApiCalled { requests } =
- when (Array.null (teamsRequests requests)) do
- WireMock.failWithRequests "Expected GitHub Teams API to be called, but no such request was seen" requests
+assertTeamsApiCalled :: Array WireMock.WireMockRequest -> E2E Unit
+assertTeamsApiCalled requests = do
+ let
+ teams = requests # Array.filter \r ->
+ r.method == "GET" && String.contains (String.Pattern "/orgs/purescript/teams/packaging/members") r.url
+ when (Array.null teams) do
+ WireMock.failWithRequests "Expected Teams API to be called" requests
+
+bodyContains :: String -> WireMock.WireMockRequest -> Boolean
+bodyContains text r = fromMaybe false (String.contains (String.Pattern text) <$> r.body)
diff --git a/app-e2e/src/Test/E2E/Main.purs b/app-e2e/src/Test/E2E/Main.purs
deleted file mode 100644
index bbd7f3212..000000000
--- a/app-e2e/src/Test/E2E/Main.purs
+++ /dev/null
@@ -1,24 +0,0 @@
-module Test.E2E.Main (main) where
-
-import Prelude
-
-import Data.Maybe (Maybe(..))
-import Data.Time.Duration (Milliseconds(..))
-import Effect (Effect)
-import Test.E2E.GitHubIssue as Test.E2E.GitHubIssue
-import Test.E2E.Publish as Test.E2E.Publish
-import Test.Spec as Spec
-import Test.Spec.Reporter.Console (consoleReporter)
-import Test.Spec.Runner.Node (runSpecAndExitProcess')
-import Test.Spec.Runner.Node.Config as Cfg
-
-main :: Effect Unit
-main = runSpecAndExitProcess' config [ consoleReporter ] do
- Spec.describe "E2E Tests" do
- Spec.describe "Publish" Test.E2E.Publish.spec
- Spec.describe "GitHubIssue" Test.E2E.GitHubIssue.spec
- where
- config =
- { defaultConfig: Cfg.defaultConfig { timeout = Just $ Milliseconds 120_000.0 }
- , parseCLIOptions: false
- }
diff --git a/app-e2e/src/Test/E2E/Publish.purs b/app-e2e/src/Test/E2E/Publish.purs
deleted file mode 100644
index d094a768e..000000000
--- a/app-e2e/src/Test/E2E/Publish.purs
+++ /dev/null
@@ -1,251 +0,0 @@
--- | End-to-end tests for the Publish API endpoint.
--- | These tests exercise the actual registry server via HTTP requests.
-module Test.E2E.Publish (spec) where
-
-import Prelude
-
-import Data.Array as Array
-import Data.Codec.JSON as CJ
-import Data.Either (Either(..))
-import Data.Foldable (for_)
-import Data.Maybe (Maybe(..), isJust)
-import Data.String as String
-import Effect.Aff (Aff)
-import Effect.Class (liftEffect)
-import Effect.Class.Console as Console
-import JSON as JSON
-import Registry.API.V1 (Job(..))
-import Registry.API.V1 as V1
-import Registry.Internal.Codec as Internal.Codec
-import Registry.Operation as Operation
-import Registry.PackageName (PackageName)
-import Registry.Test.Assert as Assert
-import Registry.Test.E2E.Client as Client
-import Registry.Test.E2E.Fixtures as Fixtures
-import Registry.Test.Utils (unsafePackageName, unsafeVersion)
-import Registry.Version (Version)
-import Registry.Version as Version
-import Test.Spec (Spec)
-import Test.Spec as Spec
-
--- | Get client config from environment
-getConfig :: Aff Client.Config
-getConfig = liftEffect Client.configFromEnv
-
-spec :: Spec Unit
-spec = do
- Spec.describe "Server connectivity" do
- Spec.it "can reach the status endpoint" do
- config <- getConfig
- result <- Client.getStatus config
- case result of
- Left err -> Assert.fail $ "Failed to reach status endpoint: " <> Client.printClientError err
- Right _ -> pure unit
-
- Spec.it "can list jobs (initially only compiler-upgrade matrix jobs)" do
- config <- getConfig
- result <- Client.getJobs config
- case result of
- Left err -> Assert.fail $ "Failed to list jobs: " <> Client.printClientError err
- -- We ignore success status because the job executor runs asynchronously
- -- and jobs may not have completed by the time we query.
- Right jobs ->
- let
- ignoreSuccess j = j { success = true }
- in
- Assert.shouldEqual (map ignoreSuccess initialJobs) (map (ignoreSuccess <<< deterministicJob) jobs)
-
- Spec.describe "Publish workflow" do
- Spec.it "can publish effect@4.0.0 and filter logs" do
- config <- getConfig
-
- -- Submit publish request
- publishResult <- Client.publish config Fixtures.effectPublishData
- case publishResult of
- Left err -> Assert.fail $ "Failed to submit publish request: " <> Client.printClientError err
- Right { jobId } -> do
- -- Poll until job completes
- job <- Client.pollJob config jobId
-
- -- If job failed, print logs for debugging
- unless (V1.jobInfo job).success do
- Console.log "Job failed! Logs:"
- let logMessages = map (\l -> "[" <> V1.printLogLevel l.level <> "] " <> l.message) (V1.jobInfo job).logs
- Console.log $ String.joinWith "\n" logMessages
-
- -- Verify job completed successfully
- when (not (V1.jobInfo job).success) do
- let errorLogs = Array.filter (\l -> l.level == V1.Error) (V1.jobInfo job).logs
- let errorMessages = map _.message errorLogs
- Assert.fail $ "Job failed with errors:\n" <> String.joinWith "\n" errorMessages
-
- Assert.shouldSatisfy (V1.jobInfo job).finishedAt isJust
-
- -- Test log level filtering
- allLogsResult <- Client.getJob config jobId (Just V1.Debug) Nothing
- case allLogsResult of
- Left err -> Assert.fail $ "Failed to get job with DEBUG level: " <> Client.printClientError err
- Right allLogsJob -> do
- let allLogs = (V1.jobInfo allLogsJob).logs
-
- infoLogsResult <- Client.getJob config jobId (Just V1.Info) Nothing
- case infoLogsResult of
- Left err -> Assert.fail $ "Failed to get job with INFO level: " <> Client.printClientError err
- Right infoLogsJob -> do
- let infoLogs = (V1.jobInfo infoLogsJob).logs
- let debugOnlyLogs = Array.filter (\l -> l.level == V1.Debug) allLogs
-
- -- INFO logs should not contain any DEBUG logs
- let infoContainsDebug = Array.any (\l -> l.level == V1.Debug) infoLogs
- when infoContainsDebug do
- Assert.fail "INFO level filter returned DEBUG logs"
-
- -- If there were DEBUG logs, INFO result should be smaller
- when (Array.length debugOnlyLogs > 0) do
- Assert.shouldSatisfy (Array.length infoLogs) (_ < Array.length allLogs)
-
- -- Test timestamp filtering
- let logs = (V1.jobInfo job).logs
- when (Array.length logs >= 2) do
- case Array.index logs 0 of
- Nothing -> pure unit
- Just firstLog -> do
- sinceResult <- Client.getJob config jobId (Just V1.Debug) (Just firstLog.timestamp)
- case sinceResult of
- Left err -> Assert.fail $ "Failed to get job with since filter: " <> Client.printClientError err
- Right sinceJob -> do
- let sinceLogs = (V1.jobInfo sinceJob).logs
- for_ sinceLogs \l ->
- Assert.shouldSatisfy l.timestamp (_ >= firstLog.timestamp)
-
- Spec.it "kicks off matrix jobs for effect@4.0.0 once the package is published" do
- config <- getConfig
- maybeJobs <- Client.getJobs config
- case maybeJobs of
- Left err -> Assert.fail $ "Failed to get jobs: " <> Client.printClientError err
- Right jobs -> do
- let
- expectedJobs = initialJobs <>
- [ { jobType: "publish"
- , packageName: Just $ unsafePackageName "effect"
- , packageVersion: Just $ unsafeVersion "4.0.0"
- , compilerVersion: Nothing
- , payload: """{"compiler":"0.15.9","location":{"githubOwner":"purescript","githubRepo":"purescript-effect"},"name":"effect","ref":"v4.0.0","version":"4.0.0"}"""
- , success: true
- }
- , { jobType: "matrix"
- , packageName: Just $ unsafePackageName "effect"
- , packageVersion: Just $ unsafeVersion "4.0.0"
- , compilerVersion: Just $ unsafeVersion "0.15.10"
- , payload: """{"prelude":"6.0.1"}"""
- , success: true
- }
- , { jobType: "matrix"
- , packageName: Just $ unsafePackageName "effect"
- , packageVersion: Just $ unsafeVersion "4.0.0"
- , compilerVersion: Just $ unsafeVersion "0.15.11"
- , payload: """{"prelude":"6.0.1"}"""
- , success: false
- }
- , { jobType: "matrix"
- , packageName: Just $ unsafePackageName "effect"
- , packageVersion: Just $ unsafeVersion "4.0.0"
- , compilerVersion: Just $ unsafeVersion "0.15.12"
- , payload: """{"prelude":"6.0.1"}"""
- , success: false
- }
- , { jobType: "matrix"
- , packageName: Just $ unsafePackageName "effect"
- , packageVersion: Just $ unsafeVersion "4.0.0"
- , compilerVersion: Just $ unsafeVersion "0.15.13"
- , payload: """{"prelude":"6.0.1"}"""
- , success: false
- }
- , { jobType: "matrix"
- , packageName: Just $ unsafePackageName "effect"
- , packageVersion: Just $ unsafeVersion "4.0.0"
- , compilerVersion: Just $ unsafeVersion "0.15.14"
- , payload: """{"prelude":"6.0.1"}"""
- , success: false
- }
- , { jobType: "matrix"
- , packageName: Just $ unsafePackageName "effect"
- , packageVersion: Just $ unsafeVersion "4.0.0"
- , compilerVersion: Just $ unsafeVersion "0.15.15"
- , payload: """{"prelude":"6.0.1"}"""
- , success: false
- }
- ]
- let
- ignoreSuccess j = j { success = true }
- Assert.shouldEqual (map ignoreSuccess expectedJobs) (map (ignoreSuccess <<< deterministicJob) jobs)
-
-type DeterministicJob =
- { jobType :: String
- , packageName :: Maybe PackageName
- , packageVersion :: Maybe Version
- , compilerVersion :: Maybe Version
- , payload :: String
- , success :: Boolean
- }
-
-deterministicJob :: Job -> DeterministicJob
-deterministicJob = case _ of
- PublishJob { success, packageName, packageVersion, payload } ->
- { jobType: "publish"
- , packageName: Just packageName
- , packageVersion: Just packageVersion
- , compilerVersion: Nothing
- , success
- , payload: JSON.print $ CJ.encode Operation.publishCodec payload
- }
- UnpublishJob { success, packageName, packageVersion, payload } ->
- { jobType: "unpublish"
- , packageName: Just packageName
- , packageVersion: Just packageVersion
- , compilerVersion: Nothing
- , success
- , payload: JSON.print $ CJ.encode Operation.authenticatedCodec payload
- }
- TransferJob { success, packageName, payload } ->
- { jobType: "transfer"
- , packageName: Just packageName
- , packageVersion: Nothing
- , compilerVersion: Nothing
- , success
- , payload: JSON.print $ CJ.encode Operation.authenticatedCodec payload
- }
- MatrixJob { success, packageName, packageVersion, compilerVersion, payload } ->
- { jobType: "matrix"
- , packageName: Just packageName
- , packageVersion: Just packageVersion
- , compilerVersion: Just compilerVersion
- , success
- , payload: JSON.print $ CJ.encode (Internal.Codec.packageMap Version.codec) payload
- }
- PackageSetJob { success, payload } ->
- { jobType: "packageset"
- , packageName: Nothing
- , packageVersion: Nothing
- , compilerVersion: Nothing
- , success
- , payload: JSON.print $ CJ.encode Operation.packageSetOperationCodec payload
- }
-
-initialJobs :: Array DeterministicJob
-initialJobs =
- [ { jobType: "matrix"
- , packageName: Just $ unsafePackageName "prelude"
- , packageVersion: Just $ unsafeVersion "6.0.1"
- , compilerVersion: Just $ unsafeVersion "0.15.15"
- , payload: """{}"""
- , success: true
- }
- , { jobType: "matrix"
- , packageName: Just $ unsafePackageName "type-equality"
- , packageVersion: Just $ unsafeVersion "4.0.1"
- , compilerVersion: Just $ unsafeVersion "0.15.15"
- , payload: """{}"""
- , success: true
- }
- ]
diff --git a/app-e2e/src/Test/E2E/Support/Client.purs b/app-e2e/src/Test/E2E/Support/Client.purs
new file mode 100644
index 000000000..6985b9611
--- /dev/null
+++ b/app-e2e/src/Test/E2E/Support/Client.purs
@@ -0,0 +1,192 @@
+-- | HTTP client for making requests to the registry server during E2E tests.
+-- | This module provides typed helpers for interacting with the Registry API.
+-- |
+-- | All client functions operate in the E2E monad (ReaderT TestEnv Aff) and
+-- | throw on HTTP or parse errors. Use the `try*` variants (e.g., `tryGetJob`)
+-- | when testing error responses - they return `Either ClientError a` with
+-- | typed HTTP status codes.
+module Test.E2E.Support.Client
+ ( ClientError(..)
+ , JobFilter(..)
+ , getJobs
+ , getJobsWith
+ , getJob
+ , tryGetJob
+ , getStatus
+ , publish
+ , unpublish
+ , transfer
+ , pollJob
+ , printClientError
+ , clientErrorStatus
+ ) where
+
+import Registry.App.Prelude
+
+import Codec.JSON.DecodeError as CJ.DecodeError
+import Control.Monad.Reader (ask)
+import Data.Codec.JSON as CJ
+import Data.DateTime (DateTime)
+import Data.Int as Int
+import Effect.Aff (delay)
+import Effect.Aff as Aff
+import Effect.Class.Console as Console
+import Effect.Exception (Error)
+import Effect.Exception as Exception
+import Fetch (Method(..))
+import Fetch as Fetch
+import JSON as JSON
+import Registry.API.V1 (Job, JobId, LogLevel, Route(..))
+import Registry.API.V1 as V1
+import Registry.Operation (AuthenticatedData, PublishData)
+import Registry.Operation as Operation
+import Routing.Duplex as Routing
+import Test.E2E.Support.Types (E2E)
+
+-- | Errors that can occur during client operations
+data ClientError
+ = HttpError { status :: Int, body :: String }
+ | ParseError { msg :: String, raw :: String }
+ | Timeout String
+
+printClientError :: ClientError -> String
+printClientError = case _ of
+ HttpError { status, body } -> "HTTP Error " <> Int.toStringAs Int.decimal status <> ": " <> body
+ ParseError { msg, raw } -> "Parse Error: " <> msg <> "\nOriginal: " <> raw
+ Timeout msg -> "Timeout: " <> msg
+
+-- | Extract the HTTP status code from a ClientError, if it's an HttpError
+clientErrorStatus :: ClientError -> Maybe Int
+clientErrorStatus = case _ of
+ HttpError { status } -> Just status
+ _ -> Nothing
+
+-- | Convert a ClientError to an Effect Error for throwing
+toError :: ClientError -> Error
+toError = Exception.error <<< printClientError
+
+-- | Throw a ClientError as an Aff error
+throw :: forall a. ClientError -> Aff a
+throw = Aff.throwError <<< toError
+
+-- | Print a Route to its URL path using the route codec
+printRoute :: Route -> String
+printRoute = Routing.print V1.routes
+
+-- | Make a GET request and decode the response, returning Either on error.
+tryGet :: forall a. CJ.Codec a -> String -> String -> Aff (Either ClientError a)
+tryGet codec baseUrl path = do
+ response <- Fetch.fetch (baseUrl <> path) { method: GET }
+ body <- response.text
+ if response.status >= 200 && response.status < 300 then
+ case parseJson codec body of
+ Left err -> pure $ Left $ ParseError { msg: CJ.DecodeError.print err, raw: body }
+ Right a -> pure $ Right a
+ else
+ pure $ Left $ HttpError { status: response.status, body }
+
+-- | Make a GET request and decode the response. Throws on error.
+get :: forall a. CJ.Codec a -> String -> String -> Aff a
+get codec baseUrl path = tryGet codec baseUrl path >>= either throw pure
+
+-- | Make a POST request with JSON body and decode the response. Throws on error.
+post :: forall req res. CJ.Codec req -> CJ.Codec res -> String -> String -> req -> Aff res
+post reqCodec resCodec baseUrl path reqBody = do
+ let jsonBody = JSON.print $ CJ.encode reqCodec reqBody
+ response <- Fetch.fetch (baseUrl <> path)
+ { method: POST
+ , headers: { "Content-Type": "application/json" }
+ , body: jsonBody
+ }
+ responseBody <- response.text
+ if response.status >= 200 && response.status < 300 then
+ case parseJson resCodec responseBody of
+ Left err -> throw $ ParseError { msg: CJ.DecodeError.print err, raw: responseBody }
+ Right a -> pure a
+ else
+ throw $ HttpError { status: response.status, body: responseBody }
+
+data JobFilter = ActiveOnly | IncludeCompleted
+
+-- | Get the list of jobs with a configurable filter
+getJobsWith :: JobFilter -> E2E (Array Job)
+getJobsWith filter = do
+ { clientConfig } <- ask
+ let
+ includeCompleted = case filter of
+ ActiveOnly -> Just false
+ IncludeCompleted -> Just true
+ route = Jobs { since: Nothing, include_completed: includeCompleted }
+ liftAff $ get (CJ.array V1.jobCodec) clientConfig.baseUrl (printRoute route)
+
+-- | Get the list of jobs (includes completed jobs)
+getJobs :: E2E (Array Job)
+getJobs = getJobsWith IncludeCompleted
+
+-- | Get a specific job by ID, with optional log filtering
+getJob :: JobId -> Maybe LogLevel -> Maybe DateTime -> E2E Job
+getJob jobId level since = do
+ { clientConfig } <- ask
+ let route = Job jobId { level, since }
+ liftAff $ get V1.jobCodec clientConfig.baseUrl (printRoute route)
+
+-- | Try to get a specific job by ID, returning Left on HTTP/parse errors.
+-- | Use this when testing error responses (e.g., expecting 404).
+tryGetJob :: JobId -> Maybe LogLevel -> Maybe DateTime -> E2E (Either ClientError Job)
+tryGetJob jobId level since = do
+ { clientConfig } <- ask
+ let route = Job jobId { level, since }
+ liftAff $ tryGet V1.jobCodec clientConfig.baseUrl (printRoute route)
+
+-- | Check if the server is healthy
+getStatus :: E2E Unit
+getStatus = do
+ { clientConfig } <- ask
+ liftAff do
+ response <- Fetch.fetch (clientConfig.baseUrl <> printRoute Status) { method: GET }
+ if response.status == 200 then
+ pure unit
+ else do
+ body <- response.text
+ throw $ HttpError { status: response.status, body }
+
+-- | Publish a package
+publish :: PublishData -> E2E V1.JobCreatedResponse
+publish reqBody = do
+ { clientConfig } <- ask
+ liftAff $ post Operation.publishCodec V1.jobCreatedResponseCodec clientConfig.baseUrl (printRoute Publish) reqBody
+
+-- | Unpublish a package (requires authentication)
+unpublish :: AuthenticatedData -> E2E V1.JobCreatedResponse
+unpublish authData = do
+ { clientConfig } <- ask
+ liftAff $ post Operation.authenticatedCodec V1.jobCreatedResponseCodec clientConfig.baseUrl (printRoute Unpublish) authData
+
+-- | Transfer a package to a new location (requires authentication)
+transfer :: AuthenticatedData -> E2E V1.JobCreatedResponse
+transfer authData = do
+ { clientConfig } <- ask
+ liftAff $ post Operation.authenticatedCodec V1.jobCreatedResponseCodec clientConfig.baseUrl (printRoute Transfer) authData
+
+-- | Poll a job until it completes or times out.
+-- |
+-- | This is the recommended way to wait for job completion in E2E tests.
+-- | Do not implement custom polling loops; use this function or the higher-level
+-- | helpers in Test.E2E.Support.Env (pollJobOrFail, pollJobExpectFailure).
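+-- |
+-- | For example, a test might do the following (a sketch; `Fixtures.effectPublishData`
+-- | is one of the publish fixtures defined in Test.E2E.Support.Fixtures):
+-- |
+-- |   { jobId } <- Client.publish Fixtures.effectPublishData
+-- |   job <- Client.pollJob jobId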
+pollJob :: JobId -> E2E Job
+pollJob jobId = do
+ { clientConfig } <- ask
+ go clientConfig 1
+ where
+ go config attempt
+ | attempt > config.maxPollAttempts =
+ liftAff $ throw $ Timeout $ "Job " <> unwrap jobId <> " did not complete after " <> Int.toStringAs Int.decimal config.maxPollAttempts <> " attempts"
+ | otherwise = do
+ liftAff $ delay config.pollInterval
+ job <- getJob jobId (Just V1.Debug) Nothing
+ case (V1.jobInfo job).finishedAt of
+ Just _ -> pure job
+ Nothing -> do
+ when (attempt `mod` 10 == 0) do
+ Console.log $ "Polling job " <> unwrap jobId <> " (attempt " <> Int.toStringAs Int.decimal attempt <> ")"
+ go config (attempt + 1)
diff --git a/app-e2e/src/Test/E2E/Support/Env.purs b/app-e2e/src/Test/E2E/Support/Env.purs
new file mode 100644
index 000000000..06c8d47b9
--- /dev/null
+++ b/app-e2e/src/Test/E2E/Support/Env.purs
@@ -0,0 +1,312 @@
+-- | Shared environment and helper functions for E2E tests.
+-- |
+-- | This module provides:
+-- | - TestEnv type and E2E monad for test helpers (re-exported from Types)
+-- | - Environment construction from env vars (mkTestEnv)
+-- | - WireMock reset helpers for test isolation
+-- | - Job polling with automatic failure handling
+-- | - Git and metadata state inspection
+-- |
+-- | All functions operate in the E2E monad (ReaderT TestEnv Aff), so they
+-- | have access to the shared test environment without explicit passing.
+module Test.E2E.Support.Env
+ ( module ReExports
+ , mkTestEnv
+ , runE2E
+ , resetTestState
+ , resetDatabase
+ , resetGitFixtures
+ , resetLogs
+ , resetGitHubRequestCache
+ , pollJobOrFail
+ , pollJobExpectFailure
+ , signUnpublishOrFail
+ , signTransferOrFail
+ , gitStatus
+ , isCleanGitStatus
+ , waitForAllMatrixJobs
+ , isMatrixJobFor
+ , readMetadata
+ , readManifestIndexEntry
+ , manifestIndexEntryExists
+ , assertReposClean
+ , hasStorageUpload
+ , hasStorageDelete
+ ) where
+
+import Registry.App.Prelude
+
+import Control.Monad.Reader (ask, runReaderT)
+import Data.Array as Array
+import Data.String as String
+import Effect.Aff (Milliseconds(..))
+import Effect.Aff as Aff
+import Effect.Class.Console as Console
+import Node.ChildProcess.Types (Exit(..))
+import Node.FS.Aff as FS.Aff
+import Node.Library.Execa as Execa
+import Node.Path as Path
+import Registry.API.V1 (Job(..))
+import Registry.API.V1 as V1
+import Registry.App.CLI.Git as Git
+import Registry.App.Effect.Env as Env
+import Registry.Foreign.FSExtra as FS.Extra
+import Registry.Manifest (Manifest(..))
+import Registry.ManifestIndex as ManifestIndex
+import Registry.Metadata (Metadata)
+import Registry.Metadata as Metadata
+import Registry.Operation (AuthenticatedData, TransferData, UnpublishData)
+import Registry.PackageName as PackageName
+import Registry.Test.Assert as Assert
+import Registry.Version as Version
+import Test.E2E.Support.Client as Client
+import Test.E2E.Support.Fixtures (PackageFixture)
+import Test.E2E.Support.Fixtures as Fixtures
+import Test.E2E.Support.Types (ClientConfig, E2E, E2ESpec, TestEnv, WireMockConfig) as ReExports
+import Test.E2E.Support.Types (E2E, TestEnv)
+import Test.E2E.Support.WireMock as WireMock
+
+-- | Build the test environment from environment variables.
+-- | Called once at startup in Main, before running any tests.
+mkTestEnv :: Effect TestEnv
+mkTestEnv = do
+ port <- Env.lookupRequired Env.serverPort
+ let
+ clientConfig =
+ { baseUrl: "http://localhost:" <> show port
+ , pollInterval: Milliseconds 2000.0
+ , maxPollAttempts: 30
+ }
+
+ githubUrl <- Env.lookupRequired Env.githubApiUrl
+ storageUrl <- Env.lookupRequired Env.s3ApiUrl
+ let
+ githubWireMock = { baseUrl: githubUrl }
+ storageWireMock = { baseUrl: storageUrl }
+
+ stateDir <- Env.lookupRequired Env.stateDir
+ privateKey <- Env.lookupRequired Env.pacchettibottiED25519
+
+ pure { clientConfig, githubWireMock, storageWireMock, stateDir, privateKey }
+
+-- | Run an E2E computation with a given environment.
+-- | Primarily used by hoistSpec in Main.
+runE2E :: forall a. TestEnv -> E2E a -> Aff a
+runE2E env = flip runReaderT env
+
+-- | Reset all test state for isolation between tests.
+-- | This is the recommended way to set up test isolation in Spec.before_.
+-- | Resets: database, git fixtures, storage mock, and logs.
+resetTestState :: E2E Unit
+resetTestState = do
+ resetDatabase
+ resetGitFixtures
+ WireMock.clearStorageRequests
+ WireMock.resetStorageScenarios
+ WireMock.clearGithubRequests
+ resetGitHubRequestCache
+ resetLogs
+
+-- | Reset the database by clearing all job-related tables.
+-- |
+-- | This works because all job tables (publish_jobs, unpublish_jobs, transfer_jobs,
+-- | matrix_jobs, package_set_jobs, logs) have foreign keys to job_info with
+-- | ON DELETE CASCADE. See db/schema.sql for the schema definition.
+resetDatabase :: E2E Unit
+resetDatabase = do
+ { stateDir } <- ask
+ let dbPath = Path.concat [ stateDir, "db", "registry.sqlite3" ]
+ result <- liftAff $ _.getResult =<< Execa.execa "sqlite3" [ dbPath, "DELETE FROM job_info;" ] identity
+ case result.exit of
+ Normally 0 -> pure unit
+ _ -> liftAff $ Aff.throwError $ Aff.error $ "Failed to reset database: " <> result.stderr
+
+-- | Reset the git fixtures to restore original state.
+-- | This restores metadata files modified by unpublish/transfer operations.
+-- |
+-- | Strategy: Reset the origin repos to their initial-fixture tag (created during
+-- | setup), then delete the server's scratch git clones. The server will
+-- | re-clone fresh copies on the next operation, ensuring a clean cache state.
+resetGitFixtures :: E2E Unit
+resetGitFixtures = do
+ { stateDir } <- ask
+ fixturesDir <- liftEffect $ Env.lookupRequired Env.repoFixturesDir
+ let
+ registryOrigin = Path.concat [ fixturesDir, "purescript", "registry" ]
+ registryIndexOrigin = Path.concat [ fixturesDir, "purescript", "registry-index" ]
+ scratchDir = Path.concat [ stateDir, "scratch" ]
+ resetOrigin registryOrigin
+ resetOrigin registryIndexOrigin
+ deleteGitClones scratchDir
+ where
+ resetOrigin dir = do
+ void $ gitOrFail [ "reset", "--hard", "initial-fixture" ] dir
+ void $ gitOrFail [ "clean", "-fd" ] dir
+
+ deleteGitClones scratchDir = do
+ liftAff $ FS.Extra.remove $ Path.concat [ scratchDir, "registry" ]
+ liftAff $ FS.Extra.remove $ Path.concat [ scratchDir, "registry-index" ]
+
+-- | Clear server log files for test isolation.
+-- | Deletes *.log files from the scratch/logs directory but preserves the directory itself.
+resetLogs :: E2E Unit
+resetLogs = do
+ { stateDir } <- ask
+ let logsDir = Path.concat [ stateDir, "scratch", "logs" ]
+ let cmd = "rm -f '" <> logsDir <> "'/*.log 2>/dev/null || true"
+ result <- liftAff $ _.getResult =<< Execa.execa "sh" [ "-c", cmd ] identity
+ case result.exit of
+ Normally _ -> pure unit
+ _ -> pure unit
+
+-- | Clear cached GitHub API requests from the scratch cache directory.
+-- | This ensures each test makes fresh API calls rather than using cached responses.
+resetGitHubRequestCache :: E2E Unit
+resetGitHubRequestCache = do
+ { stateDir } <- ask
+ let cacheDir = Path.concat [ stateDir, "scratch", ".cache" ]
+ liftAff $ Aff.attempt (FS.Aff.readdir cacheDir) >>= case _ of
+ Left _ -> pure unit
+ Right files -> for_ files \file ->
+ when (String.Pattern "Request__" `String.contains` file) do
+ FS.Extra.remove (Path.concat [ cacheDir, file ])
+
+-- | Poll a job until completion, failing the test if the job fails.
+-- | Prints error logs on failure for debugging.
+pollJobOrFail :: V1.JobId -> E2E V1.Job
+pollJobOrFail jobId = do
+ job <- Client.pollJob jobId
+ unless (V1.jobInfo job).success do
+ Console.log "Job failed! Logs:"
+ let logMessages = map (\l -> "[" <> V1.printLogLevel l.level <> "] " <> l.message) (V1.jobInfo job).logs
+ Console.log $ String.joinWith "\n" logMessages
+ let errorLogs = Array.filter (\l -> l.level == V1.Error) (V1.jobInfo job).logs
+ let errorMessages = map _.message errorLogs
+ Assert.fail $ "Job failed with errors:\n" <> String.joinWith "\n" errorMessages
+ pure job
+
+-- | Poll a job until completion, expecting it to fail.
+-- | Returns the job for further assertions on error messages.
+pollJobExpectFailure :: V1.JobId -> E2E V1.Job
+pollJobExpectFailure jobId = do
+ job <- Client.pollJob jobId
+ when (V1.jobInfo job).success do
+ Assert.fail "Expected job to fail, but it succeeded"
+ pure job
+
+-- | Sign an unpublish operation using the pacchettibotti private key from environment.
+signUnpublishOrFail :: UnpublishData -> E2E AuthenticatedData
+signUnpublishOrFail unpublishData = do
+ { privateKey } <- ask
+ case Fixtures.signUnpublish privateKey unpublishData of
+ Left err -> liftAff $ Aff.throwError $ Aff.error $ "Failed to sign unpublish: " <> err
+ Right authData -> pure authData
+
+-- | Sign a transfer operation using the pacchettibotti private key from environment.
+signTransferOrFail :: TransferData -> E2E AuthenticatedData
+signTransferOrFail transferData = do
+ { privateKey } <- ask
+ case Fixtures.signTransfer privateKey transferData of
+ Left err -> liftAff $ Aff.throwError $ Aff.error $ "Failed to sign transfer: " <> err
+ Right authData -> pure authData
+
+-- | Run git status --porcelain in a directory and return the output.
+gitStatus :: String -> E2E String
+gitStatus cwd = gitOrFail [ "status", "--porcelain" ] cwd
+
+-- | Run a git command, throwing an exception on failure.
+gitOrFail :: Array String -> FilePath -> E2E String
+gitOrFail args cwd = liftAff $ Git.gitCLI args (Just cwd) >>= case _ of
+ Left err -> Aff.throwError $ Aff.error err
+ Right out -> pure out
+
+-- | Check if git status output indicates a clean working tree (no changes).
+isCleanGitStatus :: String -> Boolean
+isCleanGitStatus status = String.null status
+
+-- | Wait for all matrix jobs for a package to complete.
+waitForAllMatrixJobs :: PackageFixture -> E2E Unit
+waitForAllMatrixJobs pkg = go 120 0
+ where
+ go :: Int -> Int -> E2E Unit
+ go 0 _ = liftAff $ Aff.throwError $ Aff.error "Timed out waiting for matrix jobs to complete"
+ go attempts lastCount = do
+ jobs <- Client.getJobs
+ let
+ matrixJobs = Array.filter (isMatrixJobFor pkg) jobs
+ totalCount = Array.length matrixJobs
+ finishedCount = Array.length $ Array.filter (\j -> isJust (V1.jobInfo j).finishedAt) matrixJobs
+ allFinished = finishedCount == totalCount
+ stillCreating = totalCount > lastCount
+ if totalCount >= 1 && allFinished && not stillCreating then
+ pure unit
+ else do
+ when (attempts `mod` 10 == 0) do
+ Console.log $ "Waiting for matrix jobs: " <> show finishedCount <> "/" <> show totalCount <> " finished"
+ liftAff $ Aff.delay (Milliseconds 1000.0)
+ go (attempts - 1) totalCount
+
+-- | Check if a job is a matrix job for the given package.
+isMatrixJobFor :: PackageFixture -> Job -> Boolean
+isMatrixJobFor pkg = case _ of
+ MatrixJob { packageName, packageVersion } ->
+ packageName == pkg.name && packageVersion == pkg.version
+ _ -> false
+
+-- | Read and parse the metadata file for a package from the server's scratch clone.
+readMetadata :: PackageName -> E2E Metadata
+readMetadata packageName = do
+ { stateDir } <- ask
+ let metadataPath = Path.concat [ stateDir, "scratch", "registry", "metadata", PackageName.print packageName <> ".json" ]
+ liftAff (readJsonFile Metadata.codec metadataPath) >>= case _ of
+ Left err -> liftAff $ Aff.throwError $ Aff.error $ "Failed to read metadata for " <> PackageName.print packageName <> ": " <> err
+ Right metadata -> pure metadata
+
+-- | Read and parse the manifest index entry for a package from the server's scratch clone.
+readManifestIndexEntry :: PackageName -> E2E (Array Manifest)
+readManifestIndexEntry packageName = do
+ { stateDir } <- ask
+ let indexPath = Path.concat [ stateDir, "scratch", "registry-index" ]
+ liftAff $ ManifestIndex.readEntryFile indexPath packageName >>= case _ of
+ Left err -> Aff.throwError $ Aff.error $ "Failed to read manifest index for " <> PackageName.print packageName <> ": " <> err
+ Right manifests -> pure $ Array.fromFoldable manifests
+
+-- | Check if a specific package version exists in the manifest index.
+manifestIndexEntryExists :: PackageFixture -> E2E Boolean
+manifestIndexEntryExists pkg = do
+ { stateDir } <- ask
+ let indexPath = Path.concat [ stateDir, "scratch", "registry-index" ]
+ liftAff $ ManifestIndex.readEntryFile indexPath pkg.name >>= case _ of
+ Left _ -> pure false
+ Right manifests -> pure $ Array.any (\(Manifest m) -> m.version == pkg.version) $ Array.fromFoldable manifests
+
+-- | Assert that both git repos (registry and registry-index) have no uncommitted changes.
+assertReposClean :: E2E Unit
+assertReposClean = do
+ { stateDir } <- ask
+ let scratchRegistry = Path.concat [ stateDir, "scratch", "registry" ]
+ let scratchRegistryIndex = Path.concat [ stateDir, "scratch", "registry-index" ]
+ registryStatus <- gitStatus scratchRegistry
+ registryIndexStatus <- gitStatus scratchRegistryIndex
+ unless (isCleanGitStatus registryStatus) do
+ Assert.fail $ "registry repo has uncommitted changes:\n" <> registryStatus
+ unless (isCleanGitStatus registryIndexStatus) do
+ Assert.fail $ "registry-index repo has uncommitted changes:\n" <> registryIndexStatus
+
+-- | Check if a storage upload (PUT) occurred for a specific package.
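+-- |
+-- | Illustrative assertion (as used in Test.E2E.Workflow):
+-- |
+-- | ```purescript
+-- | uploadOccurred <- Env.hasStorageUpload Fixtures.effect
+-- | unless uploadOccurred do
+-- |   Assert.fail "Expected tarball upload to S3 for effect@4.0.0"
+-- | ```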
+hasStorageUpload :: PackageFixture -> E2E Boolean
+hasStorageUpload pkg = do
+ requests <- WireMock.getStorageRequests
+ let
+ expectedPath = PackageName.print pkg.name <> "/" <> Version.print pkg.version <> ".tar.gz"
+ putRequests = WireMock.filterByMethod "PUT" requests
+ pure $ Array.any (\r -> String.contains (String.Pattern expectedPath) r.url) putRequests
+
+-- | Check if a storage delete (DELETE) occurred for a specific package.
+hasStorageDelete :: PackageFixture -> E2E Boolean
+hasStorageDelete pkg = do
+ requests <- WireMock.getStorageRequests
+ let
+ expectedPath = PackageName.print pkg.name <> "/" <> Version.print pkg.version <> ".tar.gz"
+ deleteRequests = WireMock.filterByMethod "DELETE" requests
+ pure $ Array.any (\r -> String.contains (String.Pattern expectedPath) r.url) deleteRequests
diff --git a/app-e2e/src/Test/E2E/Support/Fixtures.purs b/app-e2e/src/Test/E2E/Support/Fixtures.purs
new file mode 100644
index 000000000..f23b494e3
--- /dev/null
+++ b/app-e2e/src/Test/E2E/Support/Fixtures.purs
@@ -0,0 +1,226 @@
+-- | Test fixtures for E2E tests.
+-- | Contains package operation data used across multiple test suites.
+module Test.E2E.Support.Fixtures
+ ( PackageFixture
+ , effect
+ , console
+ , prelude
+ , effectPublishData
+ , effectPublishDataDifferentLocation
+ , consolePublishData
+ , failingTransferData
+ , nonexistentTransferData
+ , trusteeAuthenticatedData
+ , effectUnpublishData
+ , effectTransferData
+ , nonexistentUnpublishData
+ , preludeUnpublishData
+ , signUnpublish
+ , signTransfer
+ , invalidJsonIssueEvent
+ ) where
+
+import Registry.App.Prelude
+
+import Data.Codec.JSON as CJ
+import JSON as JSON
+import Registry.Location (Location(..))
+import Registry.Operation (AuthenticatedData, AuthenticatedPackageOperation(..), TransferData, UnpublishData)
+import Registry.Operation as Operation
+import Registry.PackageName (PackageName)
+import Registry.SSH as SSH
+import Registry.Test.Utils as Utils
+import Registry.Version (Version)
+
+type PackageFixture = { name :: PackageName, version :: Version }
+
+-- | effect@4.0.0 fixture package
+effect :: PackageFixture
+effect = { name: Utils.unsafePackageName "effect", version: Utils.unsafeVersion "4.0.0" }
+
+-- | console@6.1.0 fixture package
+console :: PackageFixture
+console = { name: Utils.unsafePackageName "console", version: Utils.unsafeVersion "6.1.0" }
+
+-- | prelude@6.0.1 fixture package
+prelude :: PackageFixture
+prelude = { name: Utils.unsafePackageName "prelude", version: Utils.unsafeVersion "6.0.1" }
+
+-- | Standard publish data for effect@4.0.0, used by E2E tests.
+-- | This matches the fixtures in app/fixtures/github-packages/effect-4.0.0
+effectPublishData :: Operation.PublishData
+effectPublishData =
+ { name: effect.name
+ , location: Just $ GitHub
+ { owner: "purescript"
+ , repo: "purescript-effect"
+ , subdir: Nothing
+ }
+ , ref: "v4.0.0"
+ , compiler: Utils.unsafeVersion "0.15.9"
+ , resolutions: Nothing
+ , version: effect.version
+ }
+
+-- | Publish data for effect@99.0.0 with a DIFFERENT location.
+-- | Uses a non-existent version to avoid duplicate job detection,
+-- | but still targets an existing package to test location conflicts.
+effectPublishDataDifferentLocation :: Operation.PublishData
+effectPublishDataDifferentLocation =
+ effectPublishData
+ { location = Just $ GitHub
+ { owner: "someone-else"
+ , repo: "purescript-effect"
+ , subdir: Nothing
+ }
+ , version = Utils.unsafeVersion "99.0.0"
+ , ref = "v99.0.0"
+ }
+
+-- | Publish data for console@6.1.0, used for concurrency tests.
+-- | Console depends on effect ^4.0.0 and prelude ^6.0.0.
+-- | This matches the fixtures in app/fixtures/github-packages/console-6.1.0
+consolePublishData :: Operation.PublishData
+consolePublishData =
+ { name: console.name
+ , location: Just $ GitHub
+ { owner: "purescript"
+ , repo: "purescript-console"
+ , subdir: Nothing
+ }
+ , ref: "v6.1.0"
+ , compiler: Utils.unsafeVersion "0.15.9"
+ , resolutions: Nothing
+ , version: console.version
+ }
+
+-- | Unpublish data for effect@4.0.0, used for publish-then-unpublish tests.
+effectUnpublishData :: UnpublishData
+effectUnpublishData =
+ { name: effect.name
+ , version: effect.version
+ , reason: "Testing unpublish flow"
+ }
+
+-- | Transfer data for effect, used for transfer tests.
+-- | Transfers effect to a different GitHub owner.
+effectTransferData :: TransferData
+effectTransferData =
+ { name: effect.name
+ , newLocation: GitHub
+ { owner: "new-owner"
+ , repo: "purescript-effect"
+ , subdir: Nothing
+ }
+ }
+
+-- | Unpublish data for a nonexistent package.
+-- | Used to test error handling when unpublishing an unknown package.
+nonexistentUnpublishData :: UnpublishData
+nonexistentUnpublishData =
+ { name: Utils.unsafePackageName "nonexistent-package"
+ , version: Utils.unsafeVersion "1.0.0"
+ , reason: "Testing error handling for unknown package"
+ }
+
+-- | Unpublish data for prelude@6.0.1.
+-- | This package was published long ago (in fixtures), so it should fail
+-- | the 48-hour time limit check.
+preludeUnpublishData :: UnpublishData
+preludeUnpublishData =
+ { name: prelude.name
+ , version: prelude.version
+ , reason: "Testing 48-hour limit enforcement"
+ }
+
+-- | Sign an unpublish operation using the given private key.
+-- | The private key should be the base64-decoded PACCHETTIBOTTI_ED25519 env var.
+signUnpublish :: String -> UnpublishData -> Either String AuthenticatedData
+signUnpublish privateKey unpublishData = do
+ let rawPayload = JSON.print $ CJ.encode Operation.unpublishCodec unpublishData
+ private <- SSH.parsePrivateKey { key: privateKey, passphrase: Nothing }
+ # lmap SSH.printPrivateKeyParseError
+ let signature = SSH.sign private rawPayload
+ pure
+ { payload: Unpublish unpublishData
+ , rawPayload
+ , signature
+ }
+
+-- | Authenticated transfer data for prelude, which has no owners in fixtures.
+-- | Used to test failure scenarios in E2E tests: the transfer fails because no
+-- | owners are listed to verify the signature against.
+failingTransferData :: AuthenticatedData
+failingTransferData = do
+ let
+ transferPayload :: TransferData
+ transferPayload =
+ { name: prelude.name
+ , newLocation: GitHub
+ { owner: "someone-else"
+ , repo: "purescript-prelude"
+ , subdir: Nothing
+ }
+ }
+
+ rawPayload :: String
+ rawPayload = JSON.print $ CJ.encode Operation.transferCodec transferPayload
+
+ { payload: Transfer transferPayload
+ , rawPayload
+ , signature: SSH.Signature "invalid-signature-for-testing"
+ }
+
+-- | Authenticated data with an intentionally invalid signature.
+-- | When submitted by a trustee (packaging-team-user), pacchettibotti will re-sign it.
+-- | If re-signing works, the job succeeds; if not, signature verification fails.
+-- | Uses prelude@6.0.1 which exists in app/fixtures/registry/metadata/prelude.json.
+trusteeAuthenticatedData :: AuthenticatedData
+trusteeAuthenticatedData = do
+ let
+ unpublishPayload :: UnpublishData
+ unpublishPayload =
+ { name: prelude.name
+ , version: prelude.version
+ , reason: "Testing trustee re-signing"
+ }
+ rawPayload = JSON.print $ CJ.encode Operation.unpublishCodec unpublishPayload
+
+ { payload: Unpublish unpublishPayload
+ , rawPayload
+ , signature: SSH.Signature "invalid-signature-for-testing"
+ }
+
+-- | Transfer data for a nonexistent package.
+-- | Used to test error handling when transferring an unknown package.
+-- | Job should fail with "has not been published before" error.
+nonexistentTransferData :: TransferData
+nonexistentTransferData =
+ { name: Utils.unsafePackageName "nonexistent-package"
+ , newLocation: GitHub
+ { owner: "someone"
+ , repo: "purescript-nonexistent"
+ , subdir: Nothing
+ }
+ }
+
+-- | Sign a transfer operation using the given private key.
+-- | The private key should be the base64-decoded PACCHETTIBOTTI_ED25519 env var.
+signTransfer :: String -> TransferData -> Either String AuthenticatedData
+signTransfer privateKey transferData = do
+ let rawPayload = JSON.print $ CJ.encode Operation.transferCodec transferData
+ private <- lmap SSH.printPrivateKeyParseError $ SSH.parsePrivateKey { key: privateKey, passphrase: Nothing }
+ let signature = SSH.sign private rawPayload
+ pure
+ { payload: Transfer transferData
+ , rawPayload
+ , signature
+ }
+
+-- | GitHub issue event with invalid JSON in the body.
+-- | Used to test that malformed JSON is handled gracefully with an error comment.
+-- | Note: The inner JSON has a trailing comma (`"v1.0.0",}`) which is intentionally
+-- | malformed to trigger a parse error.
+invalidJsonIssueEvent :: String
+invalidJsonIssueEvent =
+ """{"sender":{"login":"packaging-team-user"},"issue":{"number":101,"body":"```json\n{\"name\": \"effect\", \"ref\": \"v1.0.0\",}\n```"}}"""
diff --git a/app-e2e/src/Test/E2E/Support/Types.purs b/app-e2e/src/Test/E2E/Support/Types.purs
new file mode 100644
index 000000000..2e4429057
--- /dev/null
+++ b/app-e2e/src/Test/E2E/Support/Types.purs
@@ -0,0 +1,48 @@
+-- | Core types for E2E tests.
+-- |
+-- | This module defines the shared environment and monad types used by all
+-- | E2E test helpers. It's kept separate to avoid circular dependencies
+-- | between Env, Client, and WireMock modules.
+module Test.E2E.Support.Types
+ ( TestEnv
+ , ClientConfig
+ , WireMockConfig
+ , E2E
+ , E2ESpec
+ ) where
+
+import Registry.App.Prelude
+
+import Control.Monad.Reader (ReaderT)
+import Effect.Aff (Milliseconds)
+import Test.Spec (SpecT)
+
+-- | Configuration for the E2E test client
+type ClientConfig =
+ { baseUrl :: String
+ , pollInterval :: Milliseconds
+ , maxPollAttempts :: Int
+ }
+
+-- | Configuration for connecting to WireMock admin API
+type WireMockConfig =
+ { baseUrl :: String
+ }
+
+-- | The shared test environment available to all E2E helpers.
+-- | Constructed once at startup from environment variables.
+type TestEnv =
+ { clientConfig :: ClientConfig
+ , githubWireMock :: WireMockConfig
+ , storageWireMock :: WireMockConfig
+ , stateDir :: String
+ , privateKey :: String
+ }
+
+-- | The base monad for E2E test helpers.
+-- | All Client, Env, and WireMock functions operate in this monad.
+type E2E = ReaderT TestEnv Aff
+
+-- | The spec type for E2E tests.
+-- | Test modules export `spec :: E2ESpec` instead of `spec :: Spec Unit`.
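+-- |
+-- | Illustrative sketch of a test module built on these types (names are
+-- | illustrative; compare the suites under Test.E2E):
+-- |
+-- | ```purescript
+-- | spec :: E2ESpec
+-- | spec = Spec.describe "Publish" do
+-- |   Spec.it "publishes a package" do
+-- |     { jobId } <- Client.publish Fixtures.effectPublishData
+-- |     void $ Env.pollJobOrFail jobId
+-- | ```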
+type E2ESpec = SpecT E2E Unit Identity Unit
diff --git a/test-utils/src/Registry/Test/E2E/WireMock.purs b/app-e2e/src/Test/E2E/Support/WireMock.purs
similarity index 52%
rename from test-utils/src/Registry/Test/E2E/WireMock.purs
rename to app-e2e/src/Test/E2E/Support/WireMock.purs
index 6895d9e44..4e3789fca 100644
--- a/test-utils/src/Registry/Test/E2E/WireMock.purs
+++ b/app-e2e/src/Test/E2E/Support/WireMock.purs
@@ -2,15 +2,19 @@
-- |
-- | This module provides helpers to query WireMock's request journal, allowing
-- | tests to assert on what HTTP requests were made to mock services.
-module Registry.Test.E2E.WireMock
- ( WireMockConfig
- , WireMockRequest
+-- |
+-- | Also provides helpers for managing WireMock scenarios (stateful mocking).
+-- | Scenarios allow responses to change based on state transitions - e.g., a
+-- | package tarball returns 404 until it's been "uploaded" via PUT, after which
+-- | it returns 200.
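+-- |
+-- | Illustrative sketch of inspecting the request journal (compare
+-- | `hasStorageUpload` in Test.E2E.Support.Env):
+-- |
+-- | ```purescript
+-- | requests <- WireMock.getStorageRequests
+-- | let puts = WireMock.filterByMethod "PUT" requests
+-- | when (Array.null puts) do
+-- |   WireMock.failWithRequests "Expected an upload to storage" requests
+-- | ```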
+module Test.E2E.Support.WireMock
+ ( WireMockRequest
, WireMockError(..)
- , configFromEnv
- , getRequests
- , getRequestsOrFail
- , clearRequests
- , clearRequestsOrFail
+ , getGithubRequests
+ , getStorageRequests
+ , clearGithubRequests
+ , clearStorageRequests
+ , resetStorageScenarios
, filterByMethod
, filterByUrlContaining
, printWireMockError
@@ -18,34 +22,24 @@ module Registry.Test.E2E.WireMock
, failWithRequests
) where
-import Prelude
+import Registry.App.Prelude
+import Codec.JSON.DecodeError as CJ.DecodeError
import Control.Monad.Error.Class (class MonadThrow, throwError)
import Control.Monad.Except (runExceptT)
-import Control.Monad.Trans.Class (lift)
+import Control.Monad.Reader (ask)
import Data.Array as Array
-import Data.Bifunctor (lmap)
import Data.Codec.JSON as CJ
import Data.Codec.JSON.Record as CJ.Record
-import Data.Either (Either(..))
import Data.Int as Int
-import Data.Maybe (Maybe(..))
import Data.String as String
-import Effect (Effect)
-import Effect.Aff (Aff)
import Effect.Aff as Aff
+import Effect.Exception (Error)
import Effect.Exception as Effect.Exception
import Fetch (Method(..))
import Fetch as Fetch
-import Effect.Exception (Error)
import JSON as JSON
-import Node.Process as Process
-import Codec.JSON.DecodeError as CJ.DecodeError
-
--- | Configuration for connecting to WireMock admin API
-type WireMockConfig =
- { baseUrl :: String
- }
+import Test.E2E.Support.Types (E2E)
-- | A recorded request from WireMock's journal
type WireMockRequest =
@@ -64,16 +58,6 @@ printWireMockError = case _ of
HttpError { status, body } -> "HTTP Error " <> Int.toStringAs Int.decimal status <> ": " <> body
ParseError { msg, raw } -> "Parse Error: " <> msg <> "\nOriginal: " <> raw
--- | Create config from GITHUB_API_URL environment variable.
--- | Convenience for tests that need to inspect GitHub mock requests.
--- | Each WireMock instance has its own admin API on the same port.
-configFromEnv :: Effect WireMockConfig
-configFromEnv = do
- maybeUrl <- Process.lookupEnv "GITHUB_API_URL"
- case maybeUrl of
- Nothing -> Effect.Exception.throw "GITHUB_API_URL environment variable is not set."
- Just baseUrl -> pure { baseUrl }
-
-- | Codec for a single request entry in WireMock's response
requestCodec :: CJ.Codec WireMockRequest
requestCodec = CJ.named "WireMockRequest" $ CJ.Record.object
@@ -100,10 +84,10 @@ parseResponse codec body = do
json <- lmap (append "JSON parse error: ") $ JSON.parse body
lmap CJ.DecodeError.print $ CJ.decode codec json
--- | Get all recorded requests from WireMock's journal
-getRequests :: WireMockConfig -> Aff (Either WireMockError (Array WireMockRequest))
-getRequests config = runExceptT do
- response <- lift $ Fetch.fetch (config.baseUrl <> "/__admin/requests") { method: GET }
+-- | Get all recorded requests from a WireMock instance
+getRequestsFrom :: String -> Aff (Either WireMockError (Array WireMockRequest))
+getRequestsFrom baseUrl = runExceptT do
+ response <- lift $ Fetch.fetch (baseUrl <> "/__admin/requests") { method: GET }
body <- lift response.text
if response.status == 200 then
case parseResponse journalCodec body of
@@ -112,35 +96,61 @@ getRequests config = runExceptT do
else
throwError $ HttpError { status: response.status, body }
--- | Clear all recorded requests from WireMock's journal
-clearRequests :: WireMockConfig -> Aff (Either WireMockError Unit)
-clearRequests config = runExceptT do
- response <- lift $ Fetch.fetch (config.baseUrl <> "/__admin/requests") { method: DELETE }
+-- | Clear all recorded requests from a WireMock instance
+clearRequestsFrom :: String -> Aff (Either WireMockError Unit)
+clearRequestsFrom baseUrl = runExceptT do
+ response <- lift $ Fetch.fetch (baseUrl <> "/__admin/requests") { method: DELETE }
+ if response.status == 200 then
+ pure unit
+ else do
+ body <- lift response.text
+ throwError $ HttpError { status: response.status, body }
+
+-- | Reset all scenarios to initial state on a WireMock instance
+resetScenariosOn :: String -> Aff (Either WireMockError Unit)
+resetScenariosOn baseUrl = runExceptT do
+ response <- lift $ Fetch.fetch (baseUrl <> "/__admin/scenarios/reset") { method: POST }
if response.status == 200 then
pure unit
else do
body <- lift response.text
throwError $ HttpError { status: response.status, body }
--- | Get requests, throwing on error. Useful in tests where failure should abort.
-getRequestsOrFail :: WireMockConfig -> Aff (Array WireMockRequest)
-getRequestsOrFail config = do
- result <- getRequests config
- case result of
- Left err ->
- throwError $ Aff.error $ "Failed to get WireMock requests: " <> printWireMockError err
- Right rs ->
- pure rs
-
--- | Clear requests, throwing on error. Useful in test setup.
-clearRequestsOrFail :: WireMockConfig -> Aff Unit
-clearRequestsOrFail config = do
- result <- clearRequests config
- case result of
- Left err ->
- Aff.throwError $ Aff.error $ "Failed to clear WireMock journal: " <> printWireMockError err
- Right _ ->
- pure unit
+-- | Helper to run a WireMock operation and throw on error
+orFail :: forall a. String -> Either WireMockError a -> Aff a
+orFail context = case _ of
+ Left err -> Aff.throwError $ Aff.error $ context <> ": " <> printWireMockError err
+ Right a -> pure a
+
+-- | Get captured requests from the GitHub WireMock.
+getGithubRequests :: E2E (Array WireMockRequest)
+getGithubRequests = do
+ { githubWireMock } <- ask
+ liftAff $ getRequestsFrom githubWireMock.baseUrl >>= orFail "Failed to get GitHub WireMock requests"
+
+-- | Get captured requests from the storage WireMock (S3, Pursuit).
+getStorageRequests :: E2E (Array WireMockRequest)
+getStorageRequests = do
+ { storageWireMock } <- ask
+ liftAff $ getRequestsFrom storageWireMock.baseUrl >>= orFail "Failed to get storage WireMock requests"
+
+-- | Clear the GitHub WireMock request journal.
+clearGithubRequests :: E2E Unit
+clearGithubRequests = do
+ { githubWireMock } <- ask
+ liftAff $ clearRequestsFrom githubWireMock.baseUrl >>= orFail "Failed to clear GitHub WireMock requests"
+
+-- | Clear the storage WireMock request journal.
+clearStorageRequests :: E2E Unit
+clearStorageRequests = do
+ { storageWireMock } <- ask
+ liftAff $ clearRequestsFrom storageWireMock.baseUrl >>= orFail "Failed to clear storage WireMock requests"
+
+-- | Reset all storage WireMock scenarios to their initial state.
+resetStorageScenarios :: E2E Unit
+resetStorageScenarios = do
+ { storageWireMock } <- ask
+ liftAff $ resetScenariosOn storageWireMock.baseUrl >>= orFail "Failed to reset storage WireMock scenarios"
-- | Filter requests by HTTP method
filterByMethod :: String -> Array WireMockRequest -> Array WireMockRequest
@@ -152,13 +162,12 @@ filterByUrlContaining substring = Array.filter (\r -> String.contains (String.Pa
-- | Format an array of requests for debugging output
formatRequests :: Array WireMockRequest -> String
-formatRequests requests = String.joinWith "\n" $ map formatRequest requests
+formatRequests = String.joinWith "\n" <<< map formatRequest
where
- formatRequest r = r.method <> " " <> r.url <> case r.body of
+ formatRequest req = req.method <> " " <> req.url <> case req.body of
Nothing -> ""
- Just b -> "\n Body: " <> b
+ Just body -> "\n Body: " <> body
-- | Fail a test with a message and debug info about captured requests.
failWithRequests :: forall m a. MonadThrow Error m => String -> Array WireMockRequest -> m a
-failWithRequests msg requests = throwError $ Effect.Exception.error $
- msg <> "\n\nCaptured requests:\n" <> formatRequests requests
+failWithRequests msg requests = throwError $ Effect.Exception.error $ String.joinWith "\n" [ msg, "\nCaptured requests:", formatRequests requests ]
diff --git a/app-e2e/src/Test/E2E/Workflow.purs b/app-e2e/src/Test/E2E/Workflow.purs
new file mode 100644
index 000000000..3e65b5840
--- /dev/null
+++ b/app-e2e/src/Test/E2E/Workflow.purs
@@ -0,0 +1,107 @@
+-- | End-to-end tests for multi-operation workflows.
+-- |
+-- | These tests verify complex scenarios involving multiple operations:
+-- | 1. Git state remains clean after multiple matrix jobs complete
+-- | 2. Dependency state is validated correctly across publish/unpublish sequences
+module Test.E2E.Workflow (spec) where
+
+import Registry.App.Prelude
+
+import Data.Array as Array
+import Data.Map as Map
+import Data.String as String
+import Registry.API.V1 as V1
+import Registry.Metadata (Metadata(..))
+import Registry.Test.Assert as Assert
+import Test.E2E.Support.Client as Client
+import Test.E2E.Support.Env (E2ESpec)
+import Test.E2E.Support.Env as Env
+import Test.E2E.Support.Fixtures as Fixtures
+import Test.E2E.Support.WireMock as WireMock
+import Test.Spec as Spec
+
+spec :: E2ESpec
+spec = do
+ Spec.describe "Concurrent git operations" do
+ Spec.it "multiple matrix jobs complete without conflict" do
+ { jobId: publishJobId } <- Client.publish Fixtures.effectPublishData
+ _ <- Env.pollJobOrFail publishJobId
+ Env.waitForAllMatrixJobs Fixtures.effect
+
+ uploadOccurred <- Env.hasStorageUpload Fixtures.effect
+ unless uploadOccurred do
+ Assert.fail "Expected tarball upload to S3 for effect@4.0.0"
+
+ Metadata metadata <- Env.readMetadata Fixtures.effect.name
+ unless (isJust $ Map.lookup Fixtures.effect.version metadata.published) do
+ Assert.fail "Expected effect@4.0.0 to be in published metadata"
+
+ manifestExists <- Env.manifestIndexEntryExists Fixtures.effect
+ unless manifestExists do
+ Assert.fail "Expected effect@4.0.0 to exist in manifest index"
+
+ Spec.describe "Dependency and unpublish interactions" do
+ Spec.it "publishing a package fails when its dependency was unpublished" do
+ { jobId: effectJobId } <- Client.publish Fixtures.effectPublishData
+ _ <- Env.pollJobOrFail effectJobId
+
+ authData <- Env.signUnpublishOrFail Fixtures.effectUnpublishData
+ { jobId: unpublishJobId } <- Client.unpublish authData
+ _ <- Env.pollJobOrFail unpublishJobId
+
+ deleteOccurred <- Env.hasStorageDelete Fixtures.effect
+ unless deleteOccurred do
+ Assert.fail "Expected tarball delete from S3 for effect@4.0.0"
+
+ manifestExists <- Env.manifestIndexEntryExists Fixtures.effect
+ when manifestExists do
+ Assert.fail "Expected effect@4.0.0 to be removed from manifest index after unpublish"
+
+ WireMock.clearStorageRequests
+
+ { jobId: consoleJobId } <- Client.publish Fixtures.consolePublishData
+ consoleJob <- Env.pollJobExpectFailure consoleJobId
+
+ let
+ logs = (V1.jobInfo consoleJob).logs
+ logMessages = map _.message logs
+ hasDependencyError = Array.any (String.contains (String.Pattern "Could not produce valid dependencies")) logMessages
+ unless hasDependencyError do
+ Assert.fail $ "Expected dependency resolution error, got:\n" <> String.joinWith "\n" logMessages
+
+ consoleUploadOccurred <- Env.hasStorageUpload Fixtures.console
+ when consoleUploadOccurred do
+ Assert.fail "Expected no tarball upload for console@6.1.0 after failed publish"
+
+ Spec.it "unpublishing a package fails when dependents exist in manifest index" do
+ { jobId: effectJobId } <- Client.publish Fixtures.effectPublishData
+ _ <- Env.pollJobOrFail effectJobId
+
+ { jobId: consoleJobId } <- Client.publish Fixtures.consolePublishData
+ _ <- Env.pollJobOrFail consoleJobId
+
+ WireMock.clearStorageRequests
+
+ authData <- Env.signUnpublishOrFail Fixtures.effectUnpublishData
+ { jobId: unpublishJobId } <- Client.unpublish authData
+ unpublishJob <- Env.pollJobExpectFailure unpublishJobId
+
+ let
+ logs = (V1.jobInfo unpublishJob).logs
+ logMessages = map _.message logs
+ hasDependencyError = Array.any (String.contains (String.Pattern "unsatisfied dependencies")) logMessages
+ unless hasDependencyError do
+ Assert.fail $ "Expected unsatisfied dependencies error, got:\n" <>
+ String.joinWith "\n" logMessages
+
+ deleteOccurred <- Env.hasStorageDelete Fixtures.effect
+ when deleteOccurred do
+ Assert.fail "Expected no tarball delete for effect@4.0.0 after failed unpublish"
+
+ manifestExists <- Env.manifestIndexEntryExists Fixtures.effect
+ unless manifestExists do
+ Assert.fail "Expected effect@4.0.0 to still exist in manifest index after failed unpublish"
+
+ Metadata effectMeta <- Env.readMetadata Fixtures.effect.name
+ unless (isJust $ Map.lookup Fixtures.effect.version effectMeta.published) do
+ Assert.fail "Expected effect@4.0.0 to still be in published metadata after failed unpublish"
diff --git a/app-e2e/src/Test/Main.purs b/app-e2e/src/Test/Main.purs
new file mode 100644
index 000000000..bf3a108fb
--- /dev/null
+++ b/app-e2e/src/Test/Main.purs
@@ -0,0 +1,38 @@
+module Test.E2E.Main (main) where
+
+import Registry.App.Prelude
+
+import Data.Time.Duration (Milliseconds(..))
+import Test.E2E.Endpoint.Jobs as Jobs
+import Test.E2E.Endpoint.Publish as Publish
+import Test.E2E.Endpoint.Transfer as Transfer
+import Test.E2E.Endpoint.Unpublish as Unpublish
+import Test.E2E.GitHubIssue as GitHubIssue
+import Test.E2E.Support.Env (assertReposClean, mkTestEnv, resetTestState, runE2E)
+import Test.E2E.Workflow as Workflow
+import Test.Spec (hoistSpec)
+import Test.Spec as Spec
+import Test.Spec.Reporter.Console (consoleReporter)
+import Test.Spec.Runner.Node (runSpecAndExitProcess')
+import Test.Spec.Runner.Node.Config as Cfg
+
+main :: Effect Unit
+main = do
+ env <- mkTestEnv
+ runSpecAndExitProcess' config [ consoleReporter ] $ hoistE2E env do
+ Spec.before_ resetTestState $ Spec.after_ assertReposClean $ Spec.describe "E2E Tests" do
+ Spec.describe "Endpoints" do
+ Spec.describe "Publish" Publish.spec
+ Spec.describe "Jobs" Jobs.spec
+ Spec.describe "Unpublish" Unpublish.spec
+ Spec.describe "Transfer" Transfer.spec
+
+ Spec.describe "Workflows" do
+ Spec.describe "GitHubIssue" GitHubIssue.spec
+ Spec.describe "Multi-operation" Workflow.spec
+ where
+ hoistE2E env = hoistSpec identity (\_ m -> runE2E env m)
+ config =
+ { defaultConfig: Cfg.defaultConfig { timeout = Just $ Milliseconds 60_000.0 }
+ , parseCLIOptions: false
+ }
diff --git a/app/fixtures/github-packages/console-6.1.0/LICENSE b/app/fixtures/github-packages/console-6.1.0/LICENSE
new file mode 100644
index 000000000..311379c1e
--- /dev/null
+++ b/app/fixtures/github-packages/console-6.1.0/LICENSE
@@ -0,0 +1,26 @@
+Copyright 2018 PureScript
+
+Redistribution and use in source and binary forms, with or without modification,
+are permitted provided that the following conditions are met:
+
+1. Redistributions of source code must retain the above copyright notice, this
+list of conditions and the following disclaimer.
+
+2. Redistributions in binary form must reproduce the above copyright notice,
+this list of conditions and the following disclaimer in the documentation and/or
+other materials provided with the distribution.
+
+3. Neither the name of the copyright holder nor the names of its contributors
+may be used to endorse or promote products derived from this software without
+specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND
+ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR
+ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON
+ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/app/fixtures/github-packages/console-6.1.0/bower.json b/app/fixtures/github-packages/console-6.1.0/bower.json
new file mode 100644
index 000000000..da93c7f6e
--- /dev/null
+++ b/app/fixtures/github-packages/console-6.1.0/bower.json
@@ -0,0 +1,22 @@
+{
+ "name": "purescript-console",
+ "homepage": "https://github.com/purescript/purescript-console",
+ "license": "BSD-3-Clause",
+ "repository": {
+ "type": "git",
+ "url": "https://github.com/purescript/purescript-console.git"
+ },
+ "ignore": [
+ "**/.*",
+ "bower_components",
+ "node_modules",
+ "output",
+ "test",
+ "bower.json",
+ "package.json"
+ ],
+ "dependencies": {
+ "purescript-effect": "^4.0.0",
+ "purescript-prelude": "^6.0.0"
+ }
+}
diff --git a/app/fixtures/github-packages/console-6.1.0/src/Effect/Console.js b/app/fixtures/github-packages/console-6.1.0/src/Effect/Console.js
new file mode 100644
index 000000000..432a4241b
--- /dev/null
+++ b/app/fixtures/github-packages/console-6.1.0/src/Effect/Console.js
@@ -0,0 +1,9 @@
+export const log = s => () => console.log(s);
+export const warn = s => () => console.warn(s);
+export const error = s => () => console.error(s);
+export const info = s => () => console.info(s);
+export const debug = s => () => console.debug(s);
+export const time = s => () => console.time(s);
+export const timeLog = s => () => console.timeLog(s);
+export const timeEnd = s => () => console.timeEnd(s);
+export const clear = () => console.clear();
diff --git a/app/fixtures/github-packages/console-6.1.0/src/Effect/Console.purs b/app/fixtures/github-packages/console-6.1.0/src/Effect/Console.purs
new file mode 100644
index 000000000..364ee2b1c
--- /dev/null
+++ b/app/fixtures/github-packages/console-6.1.0/src/Effect/Console.purs
@@ -0,0 +1,46 @@
+-- | This module provides functions for outputting strings to the console.
+module Effect.Console
+ ( log
+ , logShow
+ , warn
+ , warnShow
+ , error
+ , errorShow
+ , info
+ , infoShow
+ , debug
+ , debugShow
+ , time
+ , timeLog
+ , timeEnd
+ , clear
+ ) where
+
+import Prelude
+
+import Effect (Effect)
+
+foreign import log :: String -> Effect Unit
+foreign import warn :: String -> Effect Unit
+foreign import error :: String -> Effect Unit
+foreign import info :: String -> Effect Unit
+foreign import debug :: String -> Effect Unit
+foreign import time :: String -> Effect Unit
+foreign import timeLog :: String -> Effect Unit
+foreign import timeEnd :: String -> Effect Unit
+foreign import clear :: Effect Unit
+
+logShow :: forall a. Show a => a -> Effect Unit
+logShow = log <<< show
+
+warnShow :: forall a. Show a => a -> Effect Unit
+warnShow = warn <<< show
+
+errorShow :: forall a. Show a => a -> Effect Unit
+errorShow = error <<< show
+
+infoShow :: forall a. Show a => a -> Effect Unit
+infoShow = info <<< show
+
+debugShow :: forall a. Show a => a -> Effect Unit
+debugShow = debug <<< show
diff --git a/app/fixtures/registry-archive/prelude-6.0.2.tar.gz b/app/fixtures/registry-archive/prelude-6.0.2.tar.gz
index 2ef880dff..c06e9b276 100644
Binary files a/app/fixtures/registry-archive/prelude-6.0.2.tar.gz and b/app/fixtures/registry-archive/prelude-6.0.2.tar.gz differ
diff --git a/app/fixtures/registry-index/pr/el/prelude b/app/fixtures/registry-index/pr/el/prelude
index e6c7d0759..8a01e7d6c 100644
--- a/app/fixtures/registry-index/pr/el/prelude
+++ b/app/fixtures/registry-index/pr/el/prelude
@@ -1 +1 @@
-{"name":"prelude","version":"6.0.1","license":"BSD-3-Clause","location":{"githubOwner":"purescript","githubRepo":"purescript-prelude"},"description":"The PureScript Prelude","dependencies":{}}
+{"name":"prelude","version":"6.0.1","license":"BSD-3-Clause","location":{"githubOwner":"purescript","githubRepo":"purescript-prelude"},"ref":"v6.0.1","description":"The PureScript Prelude","dependencies":{}}
diff --git a/app/fixtures/registry-index/ty/pe/type-equality b/app/fixtures/registry-index/ty/pe/type-equality
index 8fbce8f14..8d5fc1d6e 100644
--- a/app/fixtures/registry-index/ty/pe/type-equality
+++ b/app/fixtures/registry-index/ty/pe/type-equality
@@ -1 +1 @@
-{"name":"type-equality","version":"4.0.1","license":"BSD-3-Clause","location":{"githubOwner":"purescript","githubRepo":"purescript-type-equality"},"dependencies":{}}
\ No newline at end of file
+{"name":"type-equality","version":"4.0.1","license":"BSD-3-Clause","location":{"githubOwner":"purescript","githubRepo":"purescript-type-equality"},"ref":"v4.0.1","dependencies":{}}
\ No newline at end of file
diff --git a/app/fixtures/registry-storage/console-6.1.0.tar.gz b/app/fixtures/registry-storage/console-6.1.0.tar.gz
new file mode 100644
index 000000000..52c94f426
Binary files /dev/null and b/app/fixtures/registry-storage/console-6.1.0.tar.gz differ
diff --git a/app/fixtures/registry-storage/prelude-6.0.1.tar.gz b/app/fixtures/registry-storage/prelude-6.0.1.tar.gz
index 1df21a580..87d64d2c1 100644
Binary files a/app/fixtures/registry-storage/prelude-6.0.1.tar.gz and b/app/fixtures/registry-storage/prelude-6.0.1.tar.gz differ
diff --git a/app/fixtures/registry/metadata/prelude.json b/app/fixtures/registry/metadata/prelude.json
index d25e9a0f6..4d1ab32e6 100644
--- a/app/fixtures/registry/metadata/prelude.json
+++ b/app/fixtures/registry/metadata/prelude.json
@@ -8,7 +8,9 @@
"bytes": 31129,
"compilers": [
"0.15.9",
- "0.15.10"
+ "0.15.10",
+ "0.15.11",
+ "0.15.12"
],
"hash": "sha256-EbbFV0J5xV0WammfgCv6HRFSK7Zd803kkofE8aEoam0=",
"publishedTime": "2022-08-18T20:04:00.000Z",
diff --git a/app/fixtures/registry/metadata/type-equality.json b/app/fixtures/registry/metadata/type-equality.json
index b57b9fd09..6b675a80e 100644
--- a/app/fixtures/registry/metadata/type-equality.json
+++ b/app/fixtures/registry/metadata/type-equality.json
@@ -8,7 +8,9 @@
"bytes": 2179,
"compilers": [
"0.15.9",
- "0.15.10"
+ "0.15.10",
+ "0.15.11",
+ "0.15.12"
],
"hash": "sha256-3lDTQdbTM6/0oxav/0V8nW9fWn3lsSM3b2XxwreDxqs=",
"publishedTime": "2022-04-27T18:00:18.000Z",
diff --git a/app/src/App/API.purs b/app/src/App/API.purs
index 17df7ab86..b9b272613 100644
--- a/app/src/App/API.purs
+++ b/app/src/App/API.purs
@@ -296,9 +296,13 @@ authenticated auth = case auth.payload of
{ published = Map.delete payload.version prev.published
, unpublished = Map.insert payload.version unpublished prev.unpublished
}
+ -- Delete the manifest entry first so ManifestIndex.delete can fail if other
+ -- packages still depend on this version. This way, we detect dependency
+ -- violations before performing any irreversible side effects like deleting
+ -- the tarball from storage.
+ Registry.deleteManifest payload.name payload.version
Storage.delete payload.name payload.version
Registry.writeMetadata payload.name updated
- Registry.deleteManifest payload.name payload.version
Log.notice $ "Unpublished " <> formatted <> "!"
Transfer payload -> do
@@ -479,7 +483,7 @@ publish maybeLegacyIndex payload = do
Log.notice $ "Package source does not have a purs.json file, creating one from your spago.yaml file..."
SpagoYaml.readSpagoYaml packageSpagoYaml >>= case _ of
Left readErr -> Except.throw $ "Could not publish your package - a spago.yaml was present, but it was not possible to read it:\n" <> readErr
- Right config -> case SpagoYaml.spagoYamlToManifest config of
+ Right config -> case SpagoYaml.spagoYamlToManifest payload.ref config of
Left err -> Except.throw $ "Could not publish your package - there was an error while converting your spago.yaml into a purs.json manifest:\n" <> err
Right manifest -> do
Log.notice $ Array.fold
@@ -506,7 +510,7 @@ publish maybeLegacyIndex payload = do
]
Right legacyManifest -> do
Log.debug $ "Successfully produced a legacy manifest from the package source."
- let manifest = Legacy.Manifest.toManifest payload.name version existingMetadata.location legacyManifest
+ let manifest = Legacy.Manifest.toManifest payload.name version existingMetadata.location payload.ref legacyManifest
Log.notice $ Array.fold
[ "Converted your legacy manifest(s) into a purs.json manifest to use for publishing:"
, "\n```json\n"
@@ -772,7 +776,7 @@ publish maybeLegacyIndex payload = do
Storage.upload (un Manifest manifest).name (un Manifest manifest).version tarballPath
Log.debug $ "Adding the new version " <> Version.print (un Manifest manifest).version <> " to the package metadata file."
- let newPublishedVersion = { hash, ref: payload.ref, compilers: NonEmptyArray.singleton payload.compiler, publishedTime, bytes }
+ let newPublishedVersion = { hash, compilers: NonEmptyArray.singleton payload.compiler, publishedTime, bytes }
let newMetadata = metadata { published = Map.insert (un Manifest manifest).version newPublishedVersion metadata.published }
Registry.writeMetadata (un Manifest manifest).name (Metadata newMetadata)
diff --git a/app/src/App/CLI/Git.purs b/app/src/App/CLI/Git.purs
index ac9ffc398..baf513748 100644
--- a/app/src/App/CLI/Git.purs
+++ b/app/src/App/CLI/Git.purs
@@ -214,8 +214,8 @@ gitCommit { address: { owner, repo }, committer, commit, message } cwd = Except.
-- Git will error if we try to commit without any changes actually staged,
-- so the below command lists file paths (--name-only) that have changed
-- between the index and current HEAD (--cached), only including files that
- -- have been added or modified (--diff-filter=AM).
- staged <- exec [ "diff", "--name-only", "--cached", "--diff-filter=AM" ] \error ->
+ -- have been added, modified, or deleted (--diff-filter=AMD).
+ staged <- exec [ "diff", "--name-only", "--cached", "--diff-filter=AMD" ] \error ->
"Failed to check whether any changes are staged " <> inRepoErr error
-- If there are no staged files, then we have nothing to commit.
diff --git a/app/src/App/Effect/Archive.purs b/app/src/App/Effect/Archive.purs
index 8c26092ad..17ca0675e 100644
--- a/app/src/App/Effect/Archive.purs
+++ b/app/src/App/Effect/Archive.purs
@@ -35,13 +35,13 @@ import Node.Buffer as Buffer
import Node.FS.Aff as FS.Aff
import Node.Path as Path
import Registry.App.CLI.Tar as Tar
-import Registry.Foreign.FSExtra as FS.Extra
import Registry.App.Effect.GitHub (GITHUB)
import Registry.App.Effect.GitHub as GitHub
import Registry.App.Effect.Log (LOG)
import Registry.App.Effect.Log as Log
import Registry.App.Legacy.Types (RawVersion(..))
import Registry.Constants as Constants
+import Registry.Foreign.FSExtra as FS.Extra
import Registry.Foreign.Octokit as Octokit
import Registry.Foreign.Tar as Foreign.Tar
import Registry.Internal.Format as Internal.Format
diff --git a/app/src/App/Effect/Env.purs b/app/src/App/Effect/Env.purs
index cd7880602..873162264 100644
--- a/app/src/App/Effect/Env.purs
+++ b/app/src/App/Effect/Env.purs
@@ -285,6 +285,16 @@ pacchettibottiED25519Pub = EnvKey
githubEventPath :: EnvKey FilePath
githubEventPath = EnvKey { key: "GITHUB_EVENT_PATH", decode: pure }
+-- Test environment variables (used by E2E tests)
+
+-- | Root directory for test state (database, scratch repos, etc.).
+stateDir :: EnvKey FilePath
+stateDir = EnvKey { key: "STATE_DIR", decode: pure }
+
+-- | Directory containing git repository fixtures for tests.
+repoFixturesDir :: EnvKey FilePath
+repoFixturesDir = EnvKey { key: "REPO_FIXTURES_DIR", decode: pure }
+
decodeDatabaseUrl :: String -> Either String DatabaseUrl
decodeDatabaseUrl input = do
let prefix = "sqlite:"
diff --git a/app/src/App/Effect/Registry.purs b/app/src/App/Effect/Registry.purs
index bd406ff25..48fbdf4a8 100644
--- a/app/src/App/Effect/Registry.purs
+++ b/app/src/App/Effect/Registry.purs
@@ -388,6 +388,7 @@ handle env = Cache.interpret _registryCache (Cache.handleMemory env.cacheRef) <<
Right Git.Changed -> do
Log.info "Registry repo has changed, clearing metadata cache..."
+ Cache.delete _registryCache AllMetadata
resetFromDisk
WriteMetadata name metadata reply -> map (map reply) Except.runExcept do
@@ -501,10 +502,9 @@ handle env = Cache.interpret _registryCache (Cache.handleMemory env.cacheRef) <<
Log.info $ "Mirroring legacy package set " <> name <> " to the legacy package sets repo"
manifests <- Except.rethrow =<< handle env (ReadAllManifests identity)
- metadata <- Except.rethrow =<< handle env (ReadAllMetadata identity)
Log.debug $ "Converting package set..."
- converted <- case Legacy.PackageSet.convertPackageSet manifests metadata set of
+ converted <- case Legacy.PackageSet.convertPackageSet manifests set of
Left error -> Except.throw $ "Failed to convert package set " <> name <> " to a legacy package set: " <> error
Right converted -> pure converted
@@ -733,17 +733,30 @@ handle env = Cache.interpret _registryCache (Cache.handleMemory env.cacheRef) <<
result <- Git.gitPull { address, pullMode: env.pull } path
pure result
- now <- nowUTC
- debouncers <- Run.liftEffect $ Ref.read env.debouncer
- case Map.lookup path debouncers of
- -- We will be behind the upstream by at most this amount of time.
- Just prev | DateTime.diff now prev <= Duration.Minutes 1.0 ->
- pure $ Right Git.NoChange
- -- If we didn't debounce, then we should fetch the upstream.
- _ -> do
+ -- Check if the repo directory exists before consulting the debouncer.
+ -- This ensures that if the scratch directory is deleted (e.g., for test
+ -- isolation), we always re-clone rather than returning a stale NoChange.
+ repoExists <- Run.liftAff $ Aff.attempt (FS.Aff.stat path)
+ case repoExists of
+ Left _ -> do
+ -- Repo doesn't exist, bypass debouncer entirely and clone fresh
result <- fetchLatest
+ now <- nowUTC
Run.liftEffect $ Ref.modify_ (Map.insert path now) env.debouncer
pure result
+ Right _ -> do
+ -- Repo exists, check debouncer
+ now <- nowUTC
+ debouncers <- Run.liftEffect $ Ref.read env.debouncer
+ case Map.lookup path debouncers of
+ -- We will be behind the upstream by at most this amount of time.
+ Just prev | DateTime.diff now prev <= Duration.Minutes 1.0 ->
+ pure $ Right Git.NoChange
+ -- If we didn't debounce, then we should fetch the upstream.
+ _ -> do
+ result <- fetchLatest
+ Run.liftEffect $ Ref.modify_ (Map.insert path now) env.debouncer
+ pure result
-- | Commit the file(s) indicated by the commit key with a commit message.
commit :: CommitKey -> String -> Run _ (Either String GitResult)
diff --git a/app/src/App/GitHubIssue.purs b/app/src/App/GitHubIssue.purs
index bd3e4c6b2..e3eb353aa 100644
--- a/app/src/App/GitHubIssue.purs
+++ b/app/src/App/GitHubIssue.purs
@@ -29,7 +29,6 @@ import Node.Process as Process
import Registry.API.V1 as V1
import Registry.App.API as API
import Registry.App.Auth as Auth
-
import Registry.App.Effect.Cache as Cache
import Registry.App.Effect.Env (GITHUB_EVENT_ENV, PACCHETTIBOTTI_ENV, RESOURCE_ENV)
import Registry.App.Effect.Env as Env
diff --git a/app/src/App/Legacy/Manifest.purs b/app/src/App/Legacy/Manifest.purs
index 65aad78ec..8d997342f 100644
--- a/app/src/App/Legacy/Manifest.purs
+++ b/app/src/App/Legacy/Manifest.purs
@@ -59,13 +59,13 @@ type LegacyManifest =
, dependencies :: Map PackageName Range
}
-toManifest :: PackageName -> Version -> Location -> LegacyManifest -> Manifest
-toManifest name version location legacy = do
+toManifest :: PackageName -> Version -> Location -> String -> LegacyManifest -> Manifest
+toManifest name version location ref legacy = do
let { license, description, dependencies } = patchLegacyManifest name version legacy
let includeFiles = Nothing
let excludeFiles = Nothing
let owners = Nothing
- Manifest { name, version, location, license, description, dependencies, includeFiles, excludeFiles, owners }
+ Manifest { name, version, location, ref, license, description, dependencies, includeFiles, excludeFiles, owners }
-- | Attempt to retrieve a license, description, and set of dependencies from a
-- | PureScript repo that does not have a Registry-supported manifest, but does
diff --git a/app/src/App/Legacy/PackageSet.purs b/app/src/App/Legacy/PackageSet.purs
index eb1ce8021..62b718d7c 100644
--- a/app/src/App/Legacy/PackageSet.purs
+++ b/app/src/App/Legacy/PackageSet.purs
@@ -102,8 +102,8 @@ printPscTag (PscTag { compiler, date }) =
, Format.DateTime.format pscDateFormat (DateTime date bottom)
]
-convertPackageSet :: ManifestIndex -> Map PackageName Metadata -> PackageSet -> Either String ConvertedLegacyPackageSet
-convertPackageSet index metadataMap (PackageSet { compiler, packages, published, version }) = do
+convertPackageSet :: ManifestIndex -> PackageSet -> Either String ConvertedLegacyPackageSet
+convertPackageSet index (PackageSet { compiler, packages, published, version }) = do
converted <- case separate $ mapWithIndex convertPackage packages of
{ left, right } | Map.isEmpty left -> Right right
{ left } -> do
@@ -130,17 +130,14 @@ convertPackageSet index metadataMap (PackageSet { compiler, packages, published,
versions <- note noIndexPackageError $ Map.lookup packageName $ ManifestIndex.toMap index
Manifest manifest <- note noIndexVersionError $ Map.lookup packageVersion versions
- Metadata metadata <- note noMetadataPackageError $ Map.lookup packageName metadataMap
- { ref } <- note noMetadataVersionError $ Map.lookup packageVersion metadata.published
-
- repo <- case metadata.location of
+ repo <- case manifest.location of
GitHub { owner, repo, subdir: Nothing } -> Right $ "https://github.com/" <> owner <> "/" <> repo <> ".git"
Git { url, subdir: Nothing } -> Right url
GitHub _ -> Left usesSubdirError
Git _ -> Left usesSubdirError
pure
- { version: RawVersion ref
+ { version: RawVersion manifest.ref
, dependencies: Array.fromFoldable $ Map.keys $ manifest.dependencies
, repo
}
@@ -149,8 +146,6 @@ convertPackageSet index metadataMap (PackageSet { compiler, packages, published,
versionStr = Version.print packageVersion
noIndexPackageError = "No registry index entry found for " <> nameStr
noIndexVersionError = "Found registry index entry for " <> nameStr <> " but none for version " <> versionStr
- noMetadataPackageError = "No metadata entry found for " <> nameStr
- noMetadataVersionError = "Found metadata entry for " <> nameStr <> " but no published version for " <> versionStr
usesSubdirError = "Package " <> nameStr <> " uses the 'subdir' key, which is not supported for legacy package sets."
printDhall :: LegacyPackageSet -> String
diff --git a/app/src/App/Manifest/SpagoYaml.purs b/app/src/App/Manifest/SpagoYaml.purs
index 1d701e57c..66ffa1c48 100644
--- a/app/src/App/Manifest/SpagoYaml.purs
+++ b/app/src/App/Manifest/SpagoYaml.purs
@@ -27,9 +27,10 @@ import Registry.Range (Range)
import Registry.Range as Range
import Registry.Version as Version
--- | Attempt to convert a spago.yaml file to a Manifest
-spagoYamlToManifest :: SpagoYaml -> Either String Manifest
-spagoYamlToManifest config = do
+-- | Attempt to convert a spago.yaml file to a Manifest. The ref parameter is
+-- | the Git reference (tag or commit) used to fetch this version's source.
+spagoYamlToManifest :: String -> SpagoYaml -> Either String Manifest
+spagoYamlToManifest ref config = do
package@{ name, description, dependencies: spagoDependencies } <- note "No 'package' key found in config." config.package
publish@{ version, license, owners } <- note "No 'publish' key found under the 'package' key in config." package.publish
location <- note "No 'location' key found under the 'publish' key in config." publish.location
@@ -43,6 +44,7 @@ spagoYamlToManifest config = do
, description
, license
, location
+ , ref
, owners
, includeFiles
, excludeFiles
diff --git a/app/src/App/Server/Env.purs b/app/src/App/Server/Env.purs
index 302332346..1b2ea7b4a 100644
--- a/app/src/App/Server/Env.purs
+++ b/app/src/App/Server/Env.purs
@@ -12,12 +12,12 @@ import Node.Path as Path
import Registry.API.V1 (JobId, Route)
import Registry.App.API (COMPILER_CACHE, _compilerCache)
import Registry.App.CLI.Git as Git
+import Registry.App.Effect.Archive (ARCHIVE)
+import Registry.App.Effect.Archive as Archive
import Registry.App.Effect.Cache (CacheRef)
import Registry.App.Effect.Cache as Cache
import Registry.App.Effect.Db (DB)
import Registry.App.Effect.Db as Db
-import Registry.App.Effect.Archive (ARCHIVE)
-import Registry.App.Effect.Archive as Archive
import Registry.App.Effect.Env (PACCHETTIBOTTI_ENV, RESOURCE_ENV, ResourceEnv)
import Registry.App.Effect.Env as Env
import Registry.App.Effect.GitHub (GITHUB)
diff --git a/app/test/App/API.purs b/app/test/App/API.purs
index a8431a9bb..d60bcab3f 100644
--- a/app/test/App/API.purs
+++ b/app/test/App/API.purs
@@ -142,7 +142,7 @@ spec = do
Nothing -> Except.throw $ "Expected " <> formatPackageVersion name version <> " to be in metadata."
Just published -> do
let many' = NonEmptyArray.toArray published.compilers
- let expected = map Utils.unsafeVersion [ "0.15.9", "0.15.10" ]
+ let expected = map Utils.unsafeVersion [ "0.15.9", "0.15.10", "0.15.11", "0.15.12" ]
unless (many' == expected) do
Except.throw $ "Expected " <> formatPackageVersion name version <> " to have a compiler matrix of " <> Utils.unsafeStringify (map Version.print expected) <> " but got " <> Utils.unsafeStringify (map Version.print many')
@@ -191,7 +191,7 @@ spec = do
Nothing -> Except.throw $ "Expected " <> formatPackageVersion transitive.name transitive.version <> " to be in metadata."
Just published -> do
let many' = NonEmptyArray.toArray published.compilers
- let expected = map Utils.unsafeVersion [ "0.15.9", "0.15.10" ]
+ let expected = map Utils.unsafeVersion [ "0.15.9", "0.15.10", "0.15.11", "0.15.12" ]
unless (many' == expected) do
Except.throw $ "Expected " <> formatPackageVersion transitive.name transitive.version <> " to have a compiler matrix of " <> Utils.unsafeStringify (map Version.print expected) <> " but got " <> Utils.unsafeStringify (map Version.print many')
diff --git a/app/test/App/Legacy/PackageSet.purs b/app/test/App/Legacy/PackageSet.purs
index 414b09a57..2d4a7a2dc 100644
--- a/app/test/App/Legacy/PackageSet.purs
+++ b/app/test/App/Legacy/PackageSet.purs
@@ -2,8 +2,6 @@ module Test.Registry.App.Legacy.PackageSet (spec) where
import Registry.App.Prelude
-import Data.Array.NonEmpty as NonEmptyArray
-import Data.DateTime (DateTime(..))
import Data.Either as Either
import Data.Map as Map
import Data.Set as Set
@@ -14,7 +12,6 @@ import Registry.App.Legacy.PackageSet as Legacy.PackageSet
import Registry.App.Legacy.Types (legacyPackageSetCodec)
import Registry.ManifestIndex as ManifestIndex
import Registry.PackageName as PackageName
-import Registry.Sha256 as Sha256
import Registry.Test.Assert as Assert
import Registry.Test.Utils as Utils
import Registry.Version as Version
@@ -93,7 +90,7 @@ packageSet = PackageSet
convertedPackageSet :: ConvertedLegacyPackageSet
convertedPackageSet =
- case Legacy.PackageSet.convertPackageSet index metadata packageSet of
+ case Legacy.PackageSet.convertPackageSet index packageSet of
Left err -> unsafeCrashWith err
Right value -> value
where
@@ -104,13 +101,6 @@ convertedPackageSet =
, mkManifest prelude []
]
- metadata = Map.fromFoldable
- [ unsafeMetadataEntry assert
- , unsafeMetadataEntry console
- , unsafeMetadataEntry effect
- , unsafeMetadataEntry prelude
- ]
-
legacyPackageSetJson :: String
legacyPackageSetJson =
"""{
@@ -201,23 +191,3 @@ mkManifest (Tuple name version) deps = do
(PackageName.print name)
(LenientVersion.print version)
(map (bimap PackageName.print (LenientVersion.version >>> toRange)) deps)
-
-unsafeMetadataEntry :: Tuple PackageName LenientVersion -> Tuple PackageName Metadata
-unsafeMetadataEntry (Tuple name version) = do
- let
- published =
- { ref: LenientVersion.raw version
- , hash: unsafeFromRight $ Sha256.parse "sha256-gb24ZRec6mgR8TFBVR2eIh5vsMdhuL+zK9VKjWP74Cw="
- , bytes: 0.0
- , compilers: NonEmptyArray.singleton (Utils.unsafeVersion "0.15.2")
- , publishedTime: DateTime (Utils.unsafeDate "2022-07-07") bottom
- }
-
- metadata = Metadata
- { location: GitHub { owner: "purescript", repo: "purescript-" <> PackageName.print name, subdir: Nothing }
- , owners: Nothing
- , published: Map.singleton (LenientVersion.version version) published
- , unpublished: Map.empty
- }
-
- Tuple name metadata
diff --git a/app/test/App/Manifest/SpagoYaml.purs b/app/test/App/Manifest/SpagoYaml.purs
index 973af0a99..52174063c 100644
--- a/app/test/App/Manifest/SpagoYaml.purs
+++ b/app/test/App/Manifest/SpagoYaml.purs
@@ -19,6 +19,6 @@ spec = do
config <- SpagoYaml.readSpagoYaml (Path.concat [ fixturesPath, path ]) >>= case _ of
Left err -> Aff.throwError $ Aff.error err
Right config -> pure config
- case SpagoYaml.spagoYamlToManifest config of
+ case SpagoYaml.spagoYamlToManifest "v1.0.0" config of
Left err -> Assert.fail $ path <> " failed: " <> err
Right _ -> pure unit
diff --git a/flake.nix b/flake.nix
index 610221dd7..bbec41157 100644
--- a/flake.nix
+++ b/flake.nix
@@ -198,7 +198,8 @@
# Integration test - exercises the server API
integration = import ./nix/test/integration.nix {
- inherit pkgs spagoSrc testEnv;
+ inherit pkgs spagoSrc;
+ testSupport = testEnv;
};
# VM smoke test - verifies deployment without full API testing
@@ -232,11 +233,19 @@
nodejs
jq
dbmate
+ sqlite
purs
spago
purs-tidy-unstable
purs-backend-es-unstable
process-compose
+
+ # E2E test runner script - uses the same fixed test environment as test-env
+ (writeShellScriptBin "spago-test-e2e" ''
+ set -euo pipefail
+ ${testEnv.envToExports testEnv.testEnv}
+ exec spago run -p registry-app-e2e
+ '')
];
};
}
diff --git a/lib/fixtures/manifests/aff-5.1.2.json b/lib/fixtures/manifests/aff-5.1.2.json
index 22684f05c..77bb331dd 100644
--- a/lib/fixtures/manifests/aff-5.1.2.json
+++ b/lib/fixtures/manifests/aff-5.1.2.json
@@ -6,6 +6,7 @@
"githubOwner": "purescript",
"githubRepo": "purescript-aff"
},
+ "ref": "v5.1.2",
"dependencies": {
"datetime": ">=4.0.0 <5.0.0",
"effect": ">=2.0.0 <3.0.0",
diff --git a/lib/fixtures/manifests/mysql-4.1.1.json b/lib/fixtures/manifests/mysql-4.1.1.json
index 6f9703b61..e0e8c70fe 100644
--- a/lib/fixtures/manifests/mysql-4.1.1.json
+++ b/lib/fixtures/manifests/mysql-4.1.1.json
@@ -6,6 +6,7 @@
"githubOwner": "oreshinya",
"githubRepo": "purescript-mysql"
},
+ "ref": "v4.1.1",
"dependencies": {
"aff": ">=5.0.2 <6.0.0",
"js-date": ">=6.0.0 <7.0.0",
diff --git a/lib/fixtures/manifests/prelude-4.1.1.json b/lib/fixtures/manifests/prelude-4.1.1.json
index 3dd47411c..56ac6db20 100644
--- a/lib/fixtures/manifests/prelude-4.1.1.json
+++ b/lib/fixtures/manifests/prelude-4.1.1.json
@@ -7,6 +7,7 @@
"githubOwner": "purescript",
"githubRepo": "purescript-prelude"
},
+ "ref": "v4.1.1",
"owners": [
{
"keytype": "ed-25519",
diff --git a/lib/src/API/V1.purs b/lib/src/API/V1.purs
index fee64ef3c..862025980 100644
--- a/lib/src/API/V1.purs
+++ b/lib/src/API/V1.purs
@@ -268,6 +268,7 @@ jobInfo = case _ of
newtype JobId = JobId String
derive instance Newtype JobId _
+derive newtype instance Eq JobId
jobIdCodec :: CJ.Codec JobId
jobIdCodec = Profunctor.wrapIso JobId CJ.string
diff --git a/lib/src/Manifest.purs b/lib/src/Manifest.purs
index d660b459b..49bb62f2c 100644
--- a/lib/src/Manifest.purs
+++ b/lib/src/Manifest.purs
@@ -48,6 +48,7 @@ newtype Manifest = Manifest
, version :: Version
, license :: License
, location :: Location
+ , ref :: String
, owners :: Maybe (NonEmptyArray Owner)
, description :: Maybe String
, includeFiles :: Maybe (NonEmptyArray NonEmptyString)
@@ -77,6 +78,7 @@ codec = Profunctor.wrapIso Manifest $ CJ.named "Manifest" $ CJ.object
$ CJ.recordProp @"license" License.codec
$ CJ.recordPropOptional @"description" (Internal.Codec.limitedString 300)
$ CJ.recordProp @"location" Location.codec
+ $ CJ.recordProp @"ref" CJ.string
$ CJ.recordPropOptional @"owners" (CJ.Common.nonEmptyArray Owner.codec)
$ CJ.recordPropOptional @"includeFiles" (CJ.Common.nonEmptyArray CJ.Common.nonEmptyString)
$ CJ.recordPropOptional @"excludeFiles" (CJ.Common.nonEmptyArray CJ.Common.nonEmptyString)
diff --git a/lib/src/Metadata.purs b/lib/src/Metadata.purs
index c54bed31e..3235661de 100644
--- a/lib/src/Metadata.purs
+++ b/lib/src/Metadata.purs
@@ -63,17 +63,11 @@ codec = Profunctor.wrapIso Metadata $ CJ.named "Metadata" $ CJ.object
$ CJ.record
-- | Metadata about a published package version.
--- |
--- | NOTE: The `ref` field is UNSPECIFIED and WILL BE REMOVED in the future. Do
--- | not rely on its presence!
type PublishedMetadata =
{ bytes :: Number
, compilers :: NonEmptyArray Version
, hash :: Sha256
, publishedTime :: DateTime
-
- -- UNSPECIFIED: Will be removed in the future.
- , ref :: String
}
publishedMetadataCodec :: CJ.Codec PublishedMetadata
@@ -82,7 +76,6 @@ publishedMetadataCodec = CJ.named "PublishedMetadata" $ CJ.Record.object
, compilers: CJ.Common.nonEmptyArray Version.codec
, hash: Sha256.codec
, publishedTime: Internal.Codec.iso8601DateTime
- , ref: CJ.string
}
-- | Metadata about an unpublished package version.
diff --git a/lib/test/Registry/ManifestIndex.purs b/lib/test/Registry/ManifestIndex.purs
index 18e0863ef..1fb7e13a6 100644
--- a/lib/test/Registry/ManifestIndex.purs
+++ b/lib/test/Registry/ManifestIndex.purs
@@ -151,9 +151,9 @@ spec = do
contextEntry :: String
contextEntry =
- """{"name":"context","version":"0.0.1","license":"MIT","location":{"githubOwner":"Fresheyeball","githubRepo":"purescript-owner"},"dependencies":{}}
-{"name":"context","version":"0.0.2","license":"MIT","location":{"githubOwner":"Fresheyeball","githubRepo":"purescript-owner"},"dependencies":{}}
-{"name":"context","version":"0.0.3","license":"MIT","location":{"githubOwner":"Fresheyeball","githubRepo":"purescript-owner"},"dependencies":{}}
+ """{"name":"context","version":"0.0.1","license":"MIT","location":{"githubOwner":"Fresheyeball","githubRepo":"purescript-owner"},"ref":"v0.0.1","dependencies":{}}
+{"name":"context","version":"0.0.2","license":"MIT","location":{"githubOwner":"Fresheyeball","githubRepo":"purescript-owner"},"ref":"v0.0.2","dependencies":{}}
+{"name":"context","version":"0.0.3","license":"MIT","location":{"githubOwner":"Fresheyeball","githubRepo":"purescript-owner"},"ref":"v0.0.3","dependencies":{}}
"""
testIndex
@@ -242,6 +242,7 @@ manifestCodec' = Profunctor.dimap to from $ CJ.named "ManifestRep" $ CJ.Record.o
{ url: "https://github.com/purescript/purescript-" <> PackageName.print name <> ".git"
, subdir: Nothing
}
+ , ref: "v" <> Version.print version
, description: Nothing
, owners: Nothing
, includeFiles: Nothing
diff --git a/lib/test/Registry/Metadata.purs b/lib/test/Registry/Metadata.purs
index 02e12c053..8daffc02c 100644
--- a/lib/test/Registry/Metadata.purs
+++ b/lib/test/Registry/Metadata.purs
@@ -29,8 +29,7 @@ recordStudio =
"0.13.0"
],
"hash": "sha256-LPRUC8ozZc7VCeRhKa4CtSgAfNqgAoVs2lH+7mYEcTk=",
- "publishedTime": "2021-03-27T10:03:46.000Z",
- "ref": "v0.1.0"
+ "publishedTime": "2021-03-27T10:03:46.000Z"
},
"0.2.1": {
"bytes": 3365,
@@ -38,8 +37,7 @@ recordStudio =
"0.13.0"
],
"hash": "sha256-ySKKKp3rUJa4UmYTZshaOMO3jE+DW7IIqKJsurA2PP8=",
- "publishedTime": "2022-05-15T10:51:57.000Z",
- "ref": "v0.2.1"
+ "publishedTime": "2022-05-15T10:51:57.000Z"
},
"1.0.0": {
"bytes": 5155,
@@ -47,8 +45,7 @@ recordStudio =
"0.13.0"
],
"hash": "sha256-0iMF8Rq88QBGuxTNrh+iuruw8l5boCP6J2JWBpQ4b7w=",
- "publishedTime": "2022-11-03T17:30:28.000Z",
- "ref": "v1.0.0"
+ "publishedTime": "2022-11-03T17:30:28.000Z"
},
"1.0.1": {
"bytes": 5635,
@@ -57,8 +54,7 @@ recordStudio =
"0.13.1"
],
"hash": "sha256-Xm9pwDBHW5zYUEzxfVSgjglIcwRI1gcCOmcpyQ/tqeY=",
- "publishedTime": "2022-11-04T12:21:09.000Z",
- "ref": "v1.0.1"
+ "publishedTime": "2022-11-04T12:21:09.000Z"
}
},
"unpublished": {
diff --git a/lib/test/Registry/Operation/Validation.purs b/lib/test/Registry/Operation/Validation.purs
index cf474f103..955b08164 100644
--- a/lib/test/Registry/Operation/Validation.purs
+++ b/lib/test/Registry/Operation/Validation.purs
@@ -15,7 +15,8 @@ import Registry.Manifest (Manifest(..))
import Registry.Metadata (Metadata(..))
import Registry.Operation.Validation (UnpublishError(..), forbiddenModules, getUnresolvedDependencies, validatePursModule, validateUnpublish)
import Registry.Test.Assert as Assert
-import Registry.Test.Utils (defaultHash, defaultLocation, fromJust, unsafeDateTime, unsafeManifest, unsafePackageName, unsafeVersion)
+import Registry.Test.Fixtures (defaultHash, defaultLocation)
+import Registry.Test.Utils (fromJust, unsafeDateTime, unsafeManifest, unsafePackageName, unsafeVersion)
import Test.Spec (Spec)
import Test.Spec as Spec
@@ -66,7 +67,7 @@ spec = do
inRange = unsafeDateTime "2022-12-11T12:00:00.000Z"
compilers = NonEmptyArray.singleton (unsafeVersion "0.13.0")
- publishedMetadata = { bytes: 100.0, hash: defaultHash, publishedTime: outOfRange, compilers, ref: "" }
+ publishedMetadata = { bytes: 100.0, hash: defaultHash, publishedTime: outOfRange, compilers }
metadata = Metadata
{ location: defaultLocation
diff --git a/nix/test/config.nix b/nix/test/config.nix
index fdd24a537..77b5cfaff 100644
--- a/nix/test/config.nix
+++ b/nix/test/config.nix
@@ -19,22 +19,21 @@ let
ports = {
server = serverPort;
github = serverPort + 1;
- bucket = serverPort + 2;
- s3 = serverPort + 3;
- pursuit = serverPort + 4;
- healthchecks = serverPort + 5;
+ # Single storage WireMock instance for bucket + s3 + pursuit (merged for stateful scenarios)
+ storage = serverPort + 2;
+ healthchecks = serverPort + 3;
};
- # Default state directory for tests
- defaultStateDir = "/var/lib/registry-server";
+ # Fixed state directory for tests - not configurable to avoid mismatch between
+ # test-env and spago-test-e2e shells. The test-env script cleans this up on start.
+ stateDir = "/tmp/registry-test-env";
# Mock service URLs for test environment
+ # All storage-related APIs (s3, bucket, pursuit) now share a single WireMock instance
mockUrls = {
registry = "http://localhost:${toString ports.server}/api";
github = "http://localhost:${toString ports.github}";
- s3 = "http://localhost:${toString ports.s3}";
- bucket = "http://localhost:${toString ports.bucket}";
- pursuit = "http://localhost:${toString ports.pursuit}";
+ storage = "http://localhost:${toString ports.storage}";
healthchecks = "http://localhost:${toString ports.healthchecks}";
};
@@ -48,16 +47,20 @@ let
};
# Complete test environment - starts with .env.example defaults which include
- # mock secrets, then overrides external services with mock URLs. The DATABASE_URL
- # and REPO_FIXTURES_DIR vars are derived from STATE_DIR at runtime so those are
- # implemented in the script directly.
+ # mock secrets, then overrides external services with mock URLs.
+ # All storage-related APIs share a single WireMock instance for stateful scenarios.
testEnv = envDefaults // {
+ # State directory and derived paths
+ STATE_DIR = stateDir;
+ REPO_FIXTURES_DIR = "${stateDir}/repo-fixtures";
+ DATABASE_URL = "sqlite:${stateDir}/db/registry.sqlite3";
# Mock service URLs (override production endpoints)
REGISTRY_API_URL = mockUrls.registry;
GITHUB_API_URL = mockUrls.github;
- S3_API_URL = mockUrls.s3;
- S3_BUCKET_URL = mockUrls.bucket;
- PURSUIT_API_URL = mockUrls.pursuit;
+ # All storage-related APIs share a single base URL for stateful scenarios
+ S3_API_URL = mockUrls.storage;
+ S3_BUCKET_URL = mockUrls.storage;
+ PURSUIT_API_URL = mockUrls.storage;
HEALTHCHECKS_URL = mockUrls.healthchecks;
PACCHETTIBOTTI_ED25519_PUB = testKeys.public;
PACCHETTIBOTTI_ED25519 = testKeys.private;
@@ -140,6 +143,30 @@ let
};
};
+ # Console package helpers (console@6.1.0)
+ consoleBase64Response =
+ fileName:
+ base64Response {
+ url = "/repos/purescript/purescript-console/contents/${fileName}?ref=v6.1.0";
+ inherit fileName;
+ filePath = rootPath + "/app/fixtures/github-packages/console-6.1.0/${fileName}";
+ };
+
+ console404Response = fileName: {
+ request = {
+ method = "GET";
+ url = "/repos/purescript/purescript-console/contents/${fileName}?ref=v6.1.0";
+ };
+ response = {
+ status = 404;
+ headers."Content-Type" = "application/json";
+ jsonBody = {
+ message = "Not Found";
+ documentation_url = "https://docs.github.com/rest/repos/contents#get-repository-content";
+ };
+ };
+ };
+
# GitHub API wiremock mappings
githubMappings = [
(effectBase64Response "bower.json")
@@ -149,6 +176,13 @@ let
(effect404Response "spago.dhall")
(effect404Response "purs.json")
(effect404Response "package.json")
+ # Console package (console@6.1.0)
+ (consoleBase64Response "bower.json")
+ (consoleBase64Response "LICENSE")
+ (console404Response "spago.yaml")
+ (console404Response "spago.dhall")
+ (console404Response "purs.json")
+ (console404Response "package.json")
{
request = {
method = "GET";
@@ -216,97 +250,452 @@ let
}
];
- # S3 API wiremock mappings (serves package tarballs)
- s3Mappings = [
- {
- request = {
- method = "GET";
- url = "/effect/4.0.0.tar.gz";
- };
- response = {
- status = 200;
- headers."Content-Type" = "application/octet-stream";
- bodyFileName = "effect-4.0.0.tar.gz";
- };
- }
- {
- request = {
- method = "GET";
- url = "/prelude/6.0.1.tar.gz";
- };
- response = {
- status = 200;
- headers."Content-Type" = "application/octet-stream";
- bodyFileName = "prelude-6.0.1.tar.gz";
- };
- }
- {
- request = {
- method = "GET";
- url = "/type-equality/4.0.1.tar.gz";
- };
- response = {
- status = 200;
- headers."Content-Type" = "application/octet-stream";
- bodyFileName = "type-equality-4.0.1.tar.gz";
- };
- }
- ];
+ # Fixture directory for storage (tarballs)
+ storageFixturesDir = rootPath + "/app/fixtures/registry-storage";
- s3Files = [
- {
- name = "effect-4.0.0.tar.gz";
- path = rootPath + "/app/fixtures/registry-storage/effect-4.0.0.tar.gz";
- }
- {
- name = "prelude-6.0.1.tar.gz";
- path = rootPath + "/app/fixtures/registry-storage/prelude-6.0.1.tar.gz";
- }
+ # Parse tarball filename into package name and version
+ # e.g. "effect-4.0.0.tar.gz" -> { name = "effect"; version = "4.0.0"; fileName = "effect-4.0.0.tar.gz"; }
+ # e.g. "type-equality-4.0.1.tar.gz" -> { name = "type-equality"; version = "4.0.1"; ... }
+ parseTarball =
+ fileName:
+ let
+ base = lib.removeSuffix ".tar.gz" fileName;
+ parts = lib.splitString "-" base;
+ # Version is the last part; name is everything before
+ version = lib.last parts;
+ name = lib.concatStringsSep "-" (lib.init parts);
+ in
{
- name = "type-equality-4.0.1.tar.gz";
- path = rootPath + "/app/fixtures/registry-storage/type-equality-4.0.1.tar.gz";
- }
- ];
+ inherit name version fileName;
+ };
+
+ # List all .tar.gz files in storage fixtures
+ storageTarballs = map parseTarball (
+ builtins.filter (f: lib.hasSuffix ".tar.gz" f) (
+ builtins.attrNames (builtins.readDir storageFixturesDir)
+ )
+ );
+
+ # Metadata fixtures directory (to determine which packages are "published")
+ metadataFixturesDir = rootPath + "/app/fixtures/registry/metadata";
+ metadataFiles = builtins.attrNames (builtins.readDir metadataFixturesDir);
+ publishedPackageNames = map (f: lib.removeSuffix ".json" f) metadataFiles;
+
+ # ============================================================================
+ # UNIFIED STORAGE MAPPINGS WITH WIREMOCK SCENARIOS
+ # ============================================================================
+ #
+ # All storage-related APIs (S3 downloads, bucket uploads, Pursuit) are now served
+ # by a single WireMock instance with stateful scenarios. This enables proper
+ # read-after-write semantics - when a test publishes a package, subsequent
+ # downloads will succeed.
+ #
+ # Scenario design:
+ # - One scenario per package-version (e.g., "effect-4.0.0")
+ # - WireMock scenarios always start at state "Started"
+ # - Published packages (has metadata): "Started" means Present (tarball available)
+ # - After DELETE, transitions to "Deleted" state (404 on GET)
+ # - Unpublished packages (no metadata): "Started" means Absent (tarball 404)
+ # - After PUT upload, transitions to "Present" state
+ # - After DELETE, transitions to "Deleted" state (404 on GET)
+ #
+ # State machine:
+ # Published: Started(Present) --DELETE--> Deleted(404)
+ # Unpublished: Started(404) --PUT--> Present(200) --DELETE--> Deleted(404)
+ #
+ # Reset between tests via POST /__admin/scenarios/reset
+ # ============================================================================
- # S3 Bucket API wiremock mappings (handles upload/list operations)
+ # Generate S3 GET mappings with scenario support
+ s3Mappings = lib.concatMap (
+ pkg:
+ let
+ scenario = "${pkg.name}-${pkg.version}";
+ isPublished = builtins.elem pkg.name publishedPackageNames;
+ tarPath = "/${pkg.name}/${pkg.version}.tar.gz";
+ in
+ if isPublished then
+ # Published package: tarball available in Started state, 404 in Deleted state
+ [
+ {
+ request = {
+ method = "GET";
+ url = tarPath;
+ };
+ response = {
+ status = 200;
+ headers."Content-Type" = "application/octet-stream";
+ bodyFileName = pkg.fileName;
+ };
+ scenarioName = scenario;
+ requiredScenarioState = "Started";
+ }
+ {
+ request = {
+ method = "GET";
+ url = tarPath;
+ };
+ response = {
+ status = 404;
+ body = "Not Found";
+ };
+ scenarioName = scenario;
+ requiredScenarioState = "Deleted";
+ }
+ ]
+ else
+ # Unpublished package: 404 in Started, 200 in Present, 404 in Deleted
+ [
+ {
+ request = {
+ method = "GET";
+ url = tarPath;
+ };
+ response = {
+ status = 404;
+ body = "Not Found";
+ };
+ scenarioName = scenario;
+ requiredScenarioState = "Started";
+ }
+ {
+ request = {
+ method = "GET";
+ url = tarPath;
+ };
+ response = {
+ status = 200;
+ headers."Content-Type" = "application/octet-stream";
+ bodyFileName = pkg.fileName;
+ };
+ scenarioName = scenario;
+ requiredScenarioState = "Present";
+ }
+ {
+ request = {
+ method = "GET";
+ url = tarPath;
+ };
+ response = {
+ status = 404;
+ body = "Not Found";
+ };
+ scenarioName = scenario;
+ requiredScenarioState = "Deleted";
+ }
+ ]
+ ) storageTarballs;
+
+ # Generate s3Files list from fixtures (tarballs for bodyFileName references)
+ s3Files = map (pkg: {
+ name = pkg.fileName;
+ path = storageFixturesDir + "/${pkg.fileName}";
+ }) storageTarballs;
+
+ # Generate bucket PUT/DELETE/listObjects mappings with scenario support
# The AWS SDK uses virtual-hosted style URLs by default, where the bucket name
# is in the hostname (purescript-registry.localhost:9002) and the path contains
- # only the key. For example: GET /?prefix=effect/ instead of GET /purescript-registry?prefix=effect/
- bucketMappings = [
- # List objects - virtual-hosted style (bucket in hostname, path is just /?prefix=...)
- {
- request = {
- method = "GET";
- urlPattern = "/\\?.*prefix=.*";
- };
- response = {
- status = 200;
- headers."Content-Type" = "application/xml";
- body = ''<ListBucketResult><Contents><Key>prelude/6.0.1.tar.gz</Key><Size>16298</Size><ETag>"abc123"</ETag></Contents><Contents><Key>type-equality/4.0.1.tar.gz</Key><Size>2184</Size><ETag>"def456"</ETag></Contents></ListBucketResult>'';
- };
- }
- # Upload effect@4.0.0 - virtual-hosted style (path is /effect/4.0.0.tar.gz)
- {
- request = {
- method = "PUT";
- urlPattern = "/effect/4\\.0\\.0\\.tar\\.gz.*";
- };
- response = {
- status = 200;
- headers."ETag" = ''"abc123"'';
- headers."Content-Type" = "application/xml";
- body = "";
- };
- }
- # Fail upload for prelude (to test error handling)
- {
- request = {
- method = "PUT";
- urlPattern = "/prelude/6\\.0\\.1\\.tar\\.gz.*";
- };
- response.status = 500;
- }
- ];
+ # only the key.
+ bucketMappings =
+ # Generate per-package listObjects mappings with scenario support
+ (lib.concatMap (
+ pkg:
+ let
+ scenario = "${pkg.name}-${pkg.version}";
+ isPublished = builtins.elem pkg.name publishedPackageNames;
+ escapedName = lib.replaceStrings [ "-" ] [ "\\-" ] pkg.name;
+ listUrlPattern = "/\\?.*prefix=${escapedName}.*";
+ presentContents = ''<ListBucketResult><Contents><Key>${pkg.name}/${pkg.version}.tar.gz</Key><Size>1000</Size><ETag>"abc123"</ETag></Contents></ListBucketResult>'';
+ in
+ if isPublished then
+ # Published package: listObjects returns contents in Started, empty in Deleted
+ [
+ {
+ request = {
+ method = "GET";
+ urlPattern = listUrlPattern;
+ };
+ response = {
+ status = 200;
+ headers."Content-Type" = "application/xml";
+ body = "${presentContents}";
+ };
+ scenarioName = scenario;
+ requiredScenarioState = "Started";
+ }
+ {
+ request = {
+ method = "GET";
+ urlPattern = listUrlPattern;
+ };
+ response = {
+ status = 200;
+ headers."Content-Type" = "application/xml";
+ body = "";
+ };
+ scenarioName = scenario;
+ requiredScenarioState = "Deleted";
+ }
+ ]
+ else
+ # Unpublished package: listObjects returns empty in Started, contents in Present, empty in Deleted
+ [
+ {
+ request = {
+ method = "GET";
+ urlPattern = listUrlPattern;
+ };
+ response = {
+ status = 200;
+ headers."Content-Type" = "application/xml";
+ body = "";
+ };
+ scenarioName = scenario;
+ requiredScenarioState = "Started";
+ }
+ {
+ request = {
+ method = "GET";
+ urlPattern = listUrlPattern;
+ };
+ response = {
+ status = 200;
+ headers."Content-Type" = "application/xml";
+ body = "${presentContents}";
+ };
+ scenarioName = scenario;
+ requiredScenarioState = "Present";
+ }
+ {
+ request = {
+ method = "GET";
+ urlPattern = listUrlPattern;
+ };
+ response = {
+ status = 200;
+ headers."Content-Type" = "application/xml";
+ body = "";
+ };
+ scenarioName = scenario;
+ requiredScenarioState = "Deleted";
+ }
+ ]
+ ) storageTarballs)
+ ++ (
+ # Generate PUT/DELETE mappings for all packages with scenario support
+ lib.concatMap (
+ pkg:
+ let
+ scenario = "${pkg.name}-${pkg.version}";
+ isPublished = builtins.elem pkg.name publishedPackageNames;
+ escapedVersion = lib.replaceStrings [ "." ] [ "\\." ] pkg.version;
+ urlPattern = "/${pkg.name}/${escapedVersion}\\.tar\\.gz.*";
+ in
+ if isPublished then
+ # Published package: PUT fails (already exists), DELETE transitions to Deleted
+ [
+ {
+ request = {
+ method = "PUT";
+ urlPattern = urlPattern;
+ };
+ response = {
+ status = 500;
+ body = "Package already published";
+ };
+ scenarioName = scenario;
+ requiredScenarioState = "Started";
+ }
+ # DELETE in Started state (package exists) transitions to Deleted
+ {
+ request = {
+ method = "DELETE";
+ urlPattern = urlPattern;
+ };
+ response = {
+ status = 204;
+ };
+ scenarioName = scenario;
+ requiredScenarioState = "Started";
+ newScenarioState = "Deleted";
+ }
+ # DELETE in Deleted state fails (already deleted)
+ {
+ request = {
+ method = "DELETE";
+ urlPattern = urlPattern;
+ };
+ response = {
+ status = 404;
+ body = "Not Found";
+ };
+ scenarioName = scenario;
+ requiredScenarioState = "Deleted";
+ }
+ ]
+ else
+ # Unpublished package: PUT succeeds and transitions to Present, DELETE transitions to Deleted
+ [
+ {
+ request = {
+ method = "PUT";
+ urlPattern = urlPattern;
+ };
+ response = {
+ status = 200;
+ headers."ETag" = ''"abc123"'';
+ headers."Content-Type" = "application/xml";
+ body = "";
+ };
+ scenarioName = scenario;
+ requiredScenarioState = "Started";
+ newScenarioState = "Present";
+ }
+ # PUT in Present state fails (already uploaded)
+ {
+ request = {
+ method = "PUT";
+ urlPattern = urlPattern;
+ };
+ response = {
+ status = 500;
+ body = "Package already uploaded";
+ };
+ scenarioName = scenario;
+ requiredScenarioState = "Present";
+ }
+ # DELETE in Started state fails (doesn't exist yet)
+ {
+ request = {
+ method = "DELETE";
+ urlPattern = urlPattern;
+ };
+ response = {
+ status = 404;
+ body = "Not Found";
+ };
+ scenarioName = scenario;
+ requiredScenarioState = "Started";
+ }
+ # DELETE in Present state (after publish) transitions to Deleted
+ {
+ request = {
+ method = "DELETE";
+ urlPattern = urlPattern;
+ };
+ response = {
+ status = 204;
+ };
+ scenarioName = scenario;
+ requiredScenarioState = "Present";
+ newScenarioState = "Deleted";
+ }
+ # DELETE in Deleted state fails (already deleted)
+ {
+ request = {
+ method = "DELETE";
+ urlPattern = urlPattern;
+ };
+ response = {
+ status = 404;
+ body = "Not Found";
+ };
+ scenarioName = scenario;
+ requiredScenarioState = "Deleted";
+ }
+ ]
+ ) storageTarballs
+ );
+
+ # Pursuit API mappings with scenario support
+ pursuitMappings =
+ (lib.concatMap (
+ pkg:
+ let
+ scenario = "${pkg.name}-${pkg.version}";
+ isPublished = builtins.elem pkg.name publishedPackageNames;
+ versionsUrl = "/packages/purescript-${pkg.name}/available-versions";
+ publishedVersionsBody = ''[["${pkg.version}","https://pursuit.purescript.org/packages/purescript-${pkg.name}/${pkg.version}"]]'';
+ in
+ if isPublished then
+ # Published package: versions available in Started, empty in Deleted
+ [
+ {
+ request = {
+ method = "GET";
+ url = versionsUrl;
+ };
+ response = {
+ status = 200;
+ body = publishedVersionsBody;
+ };
+ scenarioName = scenario;
+ requiredScenarioState = "Started";
+ }
+ {
+ request = {
+ method = "GET";
+ url = versionsUrl;
+ };
+ response = {
+ status = 200;
+ body = "[]";
+ };
+ scenarioName = scenario;
+ requiredScenarioState = "Deleted";
+ }
+ ]
+ else
+ # Unpublished package: empty in Started, has version in Present, empty in Deleted
+ [
+ {
+ request = {
+ method = "GET";
+ url = versionsUrl;
+ };
+ response = {
+ status = 200;
+ body = "[]";
+ };
+ scenarioName = scenario;
+ requiredScenarioState = "Started";
+ }
+ {
+ request = {
+ method = "GET";
+ url = versionsUrl;
+ };
+ response = {
+ status = 200;
+ body = publishedVersionsBody;
+ };
+ scenarioName = scenario;
+ requiredScenarioState = "Present";
+ }
+ {
+ request = {
+ method = "GET";
+ url = versionsUrl;
+ };
+ response = {
+ status = 200;
+ body = "[]";
+ };
+ scenarioName = scenario;
+ requiredScenarioState = "Deleted";
+ }
+ ]
+ ) storageTarballs)
+ ++ [
+ # Accept documentation uploads (POST /packages)
+ {
+ request = {
+ method = "POST";
+ url = "/packages";
+ };
+ response.status = 201;
+ }
+ ];
# Healthchecks API wiremock mappings (simple ping endpoint)
healthchecksMappings = [
@@ -322,46 +711,9 @@ let
}
];
- # Pursuit API wiremock mappings
- pursuitMappings = [
- {
- request = {
- method = "GET";
- url = "/packages/purescript-prelude/available-versions";
- };
- response = {
- status = 200;
- body = ''[["6.0.1","https://pursuit.purescript.org/packages/purescript-prelude/6.0.1"]]'';
- };
- }
- {
- request = {
- method = "GET";
- url = "/packages/purescript-effect/available-versions";
- };
- response = {
- status = 200;
- body = ''[]'';
- };
- }
- {
- request = {
- method = "GET";
- url = "/packages/purescript-type-equality/available-versions";
- };
- response = {
- status = 200;
- body = ''[["4.0.1","https://pursuit.purescript.org/packages/purescript-type-equality/4.0.1"]]'';
- };
- }
- {
- request = {
- method = "POST";
- url = "/packages";
- };
- response.status = 201;
- }
- ];
+ # Combined storage mappings (S3 + bucket + Pursuit)
+ storageMappings = s3Mappings ++ bucketMappings ++ pursuitMappings;
+ storageFiles = s3Files;
# Wiremock root directory builder
mkWiremockRoot =
@@ -380,7 +732,9 @@ let
${lib.concatMapStrings (f: "cp ${f.path} $out/__files/${f.name}\n") files}
'';
- # All wiremock configurations
+ # All WireMock configurations.
+ # Add new WireMock services here; both test-env.nix and integration.nix
+ # derive their processes from this attribute set automatically.
wiremockConfigs = {
github = {
port = ports.github;
@@ -389,26 +743,13 @@ let
mappings = githubMappings;
};
};
- s3 = {
- port = ports.s3;
- rootDir = mkWiremockRoot {
- name = "s3";
- mappings = s3Mappings;
- files = s3Files;
- };
- };
- bucket = {
- port = ports.bucket;
- rootDir = mkWiremockRoot {
- name = "bucket";
- mappings = bucketMappings;
- };
- };
- pursuit = {
- port = ports.pursuit;
+ # Single storage WireMock instance with stateful scenarios
+ storage = {
+ port = ports.storage;
rootDir = mkWiremockRoot {
- name = "pursuit";
- mappings = pursuitMappings;
+ name = "storage";
+ mappings = storageMappings;
+ files = storageFiles;
};
};
healthchecks = {
@@ -437,7 +778,7 @@ let
name = "setup-git-fixtures";
runtimeInputs = [ pkgs.git ];
text = ''
- FIXTURES_DIR="''${1:-${defaultStateDir}/repo-fixtures}"
+ FIXTURES_DIR="''${1:-${stateDir}/repo-fixtures}"
# Run git as pacchettibotti
gitbot() {
@@ -453,6 +794,7 @@ let
# Copy fixtures and make writable (nix store files are read-only)
cp -r ${rootPath}/app/fixtures/{registry-index,registry,package-sets} "$FIXTURES_DIR/purescript/"
cp -r ${rootPath}/app/fixtures/github-packages/effect-4.0.0 "$FIXTURES_DIR/purescript/purescript-effect"
+ cp -r ${rootPath}/app/fixtures/github-packages/console-6.1.0 "$FIXTURES_DIR/purescript/purescript-console"
chmod -R u+w "$FIXTURES_DIR/purescript"
for repo in "$FIXTURES_DIR"/purescript/*/; do
@@ -460,10 +802,13 @@ let
git init -b master && git add .
gitbot commit -m "Fixture commit"
git config receive.denyCurrentBranch ignore
+ # Tag the initial commit so we can reset to it for test isolation
+ gitbot tag -m "initial-fixture" initial-fixture
done
gitbot -C "$FIXTURES_DIR/purescript/package-sets" tag -m "psc-0.15.9-20230105" psc-0.15.9-20230105
gitbot -C "$FIXTURES_DIR/purescript/purescript-effect" tag -m "v4.0.0" v4.0.0
+ gitbot -C "$FIXTURES_DIR/purescript/purescript-console" tag -m "v6.1.0" v6.1.0
'';
};
@@ -550,7 +895,7 @@ in
{
inherit
ports
- defaultStateDir
+ stateDir
mockUrls
testEnv
envToExports
@@ -564,10 +909,8 @@ in
serverStartScript
# For custom wiremock setups
githubMappings
- s3Mappings
- s3Files
- bucketMappings
- pursuitMappings
+ storageMappings
+ storageFiles
mkWiremockRoot
;
}
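To make the generated stubs concrete, here are two of the mappings `s3Mappings` produces for one unpublished package, using a hypothetical `console-6.1.0.tar.gz` storage fixture as the example (whether that fixture exists is not shown here; the values are illustrative instantiations of the generator above):

```nix
# Illustrative evaluation of the s3Mappings generator for a hypothetical
# unpublished package console@6.1.0 (no metadata fixture).
[
  # Before publishing: the scenario starts in "Started", where the tarball is absent.
  {
    request = { method = "GET"; url = "/console/6.1.0.tar.gz"; };
    response = { status = 404; body = "Not Found"; };
    scenarioName = "console-6.1.0";
    requiredScenarioState = "Started";
  }
  # After a successful PUT has moved the scenario to "Present", the same GET succeeds.
  {
    request = { method = "GET"; url = "/console/6.1.0.tar.gz"; };
    response = {
      status = 200;
      headers."Content-Type" = "application/octet-stream";
      bodyFileName = "console-6.1.0.tar.gz";
    };
    scenarioName = "console-6.1.0";
    requiredScenarioState = "Present";
  }
]
```

The corresponding PUT and DELETE stubs in `bucketMappings` carry the `newScenarioState` transitions, and tests can return every scenario to `Started` via WireMock's standard `POST /__admin/scenarios/reset` endpoint, as noted in the comment block above.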
diff --git a/nix/test/integration.nix b/nix/test/integration.nix
index bc4f333e0..75b6e6487 100644
--- a/nix/test/integration.nix
+++ b/nix/test/integration.nix
@@ -1,7 +1,9 @@
{
pkgs,
spagoSrc,
- testEnv,
+ # Test support module from test-env.nix. Named 'testSupport' to avoid confusion
+ # with testSupport.testEnv (the environment variables attribute set).
+ testSupport,
}:
if pkgs.stdenv.isDarwin then
pkgs.runCommand "integration-skip" { } ''
@@ -29,7 +31,7 @@ else
'';
};
- ports = testEnv.ports;
+ ports = testSupport.ports;
in
pkgs.runCommand "e2e-integration"
{
@@ -38,10 +40,11 @@ else
pkgs.curl
pkgs.jq
pkgs.git
+ pkgs.sqlite
pkgs.nss_wrapper
- testEnv.wiremockStartScript
- testEnv.serverStartScript
- testEnv.setupGitFixtures
+ testSupport.wiremockStartScript
+ testSupport.serverStartScript
+ testSupport.setupGitFixtures
];
NODE_PATH = "${pkgs.registry-package-lock}/node_modules";
# Use nss_wrapper to resolve S3 bucket subdomain in the Nix sandbox.
@@ -57,9 +60,10 @@ else
set -e
export HOME=$TMPDIR
export STATE_DIR=$TMPDIR/state
+ export REPO_FIXTURES_DIR="$STATE_DIR/repo-fixtures"
# Export test environment variables for E2E test runners
- ${testEnv.testConfig.envToExports testEnv.testConfig.testEnv}
+ ${testSupport.envToExports testSupport.testEnv}
mkdir -p $STATE_DIR
@@ -68,8 +72,8 @@ else
start-wiremock &
WIREMOCK_PID=$!
- # Wait for wiremock (github, bucket, s3, pursuit)
- for port in ${toString ports.github} ${toString ports.bucket} ${toString ports.s3} ${toString ports.pursuit}; do
+ # Wait for wiremock (github, storage, healthchecks)
+ for port in ${toString ports.github} ${toString ports.storage} ${toString ports.healthchecks}; do
until curl -s "http://localhost:$port/__admin" > /dev/null 2>&1; do
sleep 0.5
done
diff --git a/nix/test/smoke.nix b/nix/test/smoke.nix
index 1365d8283..c67919158 100644
--- a/nix/test/smoke.nix
+++ b/nix/test/smoke.nix
@@ -46,11 +46,14 @@ else
timeout=30
)
- # Verify we get a valid JSON response (empty array for jobs)
+ # Verify we get a valid JSON response (the jobs endpoint responds)
result = registry.succeed(
"curl -s http://localhost:${envVars.SERVER_PORT}/api/v1/jobs"
)
- assert result.strip() == "[]", f"Expected empty jobs array, got: {result}"
+
+ # The server may create matrix jobs on startup for new compilers, so we just verify
+ # the response is valid JSON (starts with '[')
+ assert result.strip().startswith("["), f"Expected JSON array, got: {result}"
# Verify the database was created and migrations ran
registry.succeed("test -f ${stateDir}/db/registry.sqlite3")
diff --git a/nix/test/test-env.nix b/nix/test/test-env.nix
index f7d7fb058..a68b393af 100644
--- a/nix/test/test-env.nix
+++ b/nix/test/test-env.nix
@@ -59,18 +59,15 @@ let
version = "0.5";
processes = {
wiremock-github = mkWiremockProcess "github" ports.github;
- wiremock-s3 = mkWiremockProcess "s3" ports.s3;
- wiremock-bucket = mkWiremockProcess "bucket" ports.bucket;
- wiremock-pursuit = mkWiremockProcess "pursuit" ports.pursuit;
+ # Unified storage WireMock instance for S3 + bucket + Pursuit with stateful scenarios
+ wiremock-storage = mkWiremockProcess "storage" ports.storage;
wiremock-healthchecks = mkWiremockProcess "healthchecks" ports.healthchecks;
registry-server = {
command = "${serverStartScript}/bin/start-server";
depends_on = {
wiremock-github.condition = "process_healthy";
- wiremock-s3.condition = "process_healthy";
- wiremock-bucket.condition = "process_healthy";
- wiremock-pursuit.condition = "process_healthy";
+ wiremock-storage.condition = "process_healthy";
wiremock-healthchecks.condition = "process_healthy";
};
readiness_probe = {
@@ -92,22 +89,26 @@ let
processComposeYaml = pkgs.writeText "process-compose.yaml" (builtins.toJSON processComposeConfig);
+ testEnvExports = testConfig.envToExports testConfig.testEnv;
+
+ # The state directory is fixed (not configurable) to avoid mismatch between
+ # the test-env and spago-test-e2e shells.
+ stateDir = testConfig.testEnv.STATE_DIR;
+
testEnvScript = pkgs.writeShellScriptBin "test-env" ''
set -e
- # Export test environment variables for E2E test runners
- ${testConfig.envToExports testConfig.testEnv}
+ # Clean up previous test state and create fresh directory
+ rm -rf ${stateDir}
+ mkdir -p ${stateDir}
- if [ -z "''${STATE_DIR:-}" ]; then
- STATE_DIR="$(mktemp -d)"
- export STATE_DIR
- echo "Using temporary directory: $STATE_DIR"
- trap 'echo "Cleaning up $STATE_DIR..."; rm -rf "$STATE_DIR"' EXIT
- else
- export STATE_DIR
- fi
+ # Export all test environment variables
+ ${testEnvExports}
- mkdir -p "$STATE_DIR"
+ echo
+ echo "[test-env] State directory: ${stateDir}"
+ echo "[test-env] In another terminal, run: spago-test-e2e"
+ echo
exec ${pkgs.process-compose}/bin/process-compose up \
-f ${processComposeYaml} \
diff --git a/scripts/src/LegacyImporter.purs b/scripts/src/LegacyImporter.purs
index 732937f50..05e73ae84 100644
--- a/scripts/src/LegacyImporter.purs
+++ b/scripts/src/LegacyImporter.purs
@@ -761,7 +761,7 @@ buildLegacyPackageManifests rawPackage rawUrl = Run.Except.runExceptAt _exceptPa
Legacy.Manifest.fetchLegacyManifest package.name package.address (RawVersion tag.name) >>= case _ of
Left error -> throwVersion { error: InvalidManifest error, reason: "Legacy manifest could not be parsed." }
Right result -> pure result
- pure $ Legacy.Manifest.toManifest package.name (LenientVersion.version version) location legacyManifest
+ pure $ Legacy.Manifest.toManifest package.name (LenientVersion.version version) location tag.name legacyManifest
case manifest of
Left err -> Log.info $ "Failed to build manifest for " <> PackageName.print package.name <> "@" <> tag.name <> ": " <> printJson versionValidationErrorCodec err
Right val -> Log.info $ "Built manifest for " <> PackageName.print package.name <> "@" <> tag.name <> ":\n" <> printJson Manifest.codec val
@@ -1463,7 +1463,7 @@ fetchSpagoYaml address ref = do
| location /= GitHub { owner: address.owner, repo: address.repo, subdir: Nothing } -> do
Log.warn "spago.yaml file does not use the same location it was fetched from, this is disallowed..."
pure Nothing
- Right config -> case SpagoYaml.spagoYamlToManifest config of
+ Right config -> case SpagoYaml.spagoYamlToManifest (un RawVersion ref) config of
Left err -> do
Log.warn $ "Failed to convert parsed spago.yaml file to purs.json " <> contents <> "\nwith errors:\n" <> err
pure Nothing
diff --git a/scripts/src/PackageDeleter.purs b/scripts/src/PackageDeleter.purs
index 0e7fd131a..257a7b1a2 100644
--- a/scripts/src/PackageDeleter.purs
+++ b/scripts/src/PackageDeleter.purs
@@ -228,21 +228,24 @@ deleteVersion arguments name version = do
Just published, Nothing -> pure (Just (Right published))
Nothing, Just unpublished -> pure (Just (Left unpublished))
Nothing, Nothing -> pure Nothing
+ -- Read manifest before deleting it (needed for reimport)
+ maybeManifest <- Registry.readManifest name version
let
newMetadata = Metadata $ oldMetadata { published = Map.delete version oldMetadata.published, unpublished = Map.delete version oldMetadata.unpublished }
Registry.writeMetadata name newMetadata
Registry.deleteManifest name version
-- --reimport
when arguments.reimport do
- case publishment of
- Nothing -> Log.error "Cannot reimport a version that was not published"
- Just (Left _) -> Log.error "Cannot reimport a version that was specifically unpublished"
- Just (Right specificPackageMetadata) -> do
+ case publishment, maybeManifest of
+ Nothing, _ -> Log.error "Cannot reimport a version that was not published"
+ Just (Left _), _ -> Log.error "Cannot reimport a version that was specifically unpublished"
+ Just (Right _), Nothing -> Log.error $ "Cannot reimport: manifest not found for " <> formatted
+ Just (Right _), Just (Manifest manifest) -> do
-- Obtains `newMetadata` via cache
void $ API.publish Nothing
{ location: Just oldMetadata.location
, name: name
- , ref: specificPackageMetadata.ref
+ , ref: manifest.ref
, version: version
, compiler: unsafeFromRight $ Version.parse "0.15.4"
, resolutions: Nothing
diff --git a/spago.lock b/spago.lock
index a6dbae907..ea939ddeb 100644
--- a/spago.lock
+++ b/spago.lock
@@ -227,7 +227,6 @@
"exceptions",
"exists",
"exitcodes",
- "fetch",
"fixed-points",
"foldable-traversable",
"foreign",
@@ -239,14 +238,10 @@
"functors",
"gen",
"graphs",
- "http-methods",
"identity",
"integers",
"invariant",
"js-date",
- "js-fetch",
- "js-promise",
- "js-promise-aff",
"js-uri",
"json",
"language-cst-parser",
@@ -254,7 +249,6 @@
"lcg",
"lists",
"maybe",
- "media-types",
"mmorph",
"newtype",
"node-buffer",
@@ -299,11 +293,7 @@
"unfoldable",
"unicode",
"unsafe-coerce",
- "variant",
- "web-dom",
- "web-events",
- "web-file",
- "web-streams"
+ "variant"
]
}
},
@@ -316,22 +306,25 @@
"codec-json",
"console",
"datetime",
- "effect",
- "either",
- "foldable-traversable",
+ "exceptions",
+ "fetch",
+ "integers",
"json",
- "maybe",
+ "node-child-process",
+ "node-execa",
"node-fs",
"node-path",
"node-process",
- "prelude",
+ "ordered-collections",
"registry-app",
"registry-foreign",
"registry-lib",
"registry-test-utils",
+ "routing-duplex",
"spec",
"spec-node",
- "strings"
+ "strings",
+ "transformers"
],
"build_plan": [
"aff",
@@ -644,7 +637,6 @@
"exceptions",
"exists",
"exitcodes",
- "fetch",
"fixed-points",
"foldable-traversable",
"foreign",
@@ -656,14 +648,10 @@
"functors",
"gen",
"graphs",
- "http-methods",
"identity",
"integers",
"invariant",
"js-date",
- "js-fetch",
- "js-promise",
- "js-promise-aff",
"js-timers",
"js-uri",
"json",
@@ -672,7 +660,6 @@
"lcg",
"lists",
"maybe",
- "media-types",
"mmorph",
"newtype",
"node-buffer",
@@ -722,11 +709,7 @@
"unicode",
"unsafe-coerce",
"unsafe-reference",
- "variant",
- "web-dom",
- "web-events",
- "web-file",
- "web-streams"
+ "variant"
]
}
},
@@ -886,7 +869,6 @@
"exceptions",
"exists",
"exitcodes",
- "fetch",
"fixed-points",
"foldable-traversable",
"foreign",
@@ -898,14 +880,10 @@
"functors",
"gen",
"graphs",
- "http-methods",
"identity",
"integers",
"invariant",
"js-date",
- "js-fetch",
- "js-promise",
- "js-promise-aff",
"js-timers",
"js-uri",
"json",
@@ -914,7 +892,6 @@
"lcg",
"lists",
"maybe",
- "media-types",
"mmorph",
"newtype",
"node-buffer",
@@ -964,11 +941,7 @@
"unicode",
"unsafe-coerce",
"unsafe-reference",
- "variant",
- "web-dom",
- "web-events",
- "web-file",
- "web-streams"
+ "variant"
]
}
},
@@ -1144,22 +1117,16 @@
"path": "test-utils",
"core": {
"dependencies": [
- "aff",
"arrays",
"bifunctors",
"codec-json",
"datetime",
- "effect",
"either",
"exceptions",
- "fetch",
"foldable-traversable",
"formatters",
- "integers",
"json",
"maybe",
- "newtype",
- "node-process",
"ordered-collections",
"partial",
"prelude",
@@ -1192,7 +1159,6 @@
"enums",
"exceptions",
"exists",
- "fetch",
"fixed-points",
"foldable-traversable",
"foreign",
@@ -1204,14 +1170,10 @@
"functors",
"gen",
"graphs",
- "http-methods",
"identity",
"integers",
"invariant",
"js-date",
- "js-fetch",
- "js-promise",
- "js-promise-aff",
"js-uri",
"json",
"language-cst-parser",
@@ -1219,14 +1181,12 @@
"lcg",
"lists",
"maybe",
- "media-types",
"mmorph",
"newtype",
"node-buffer",
"node-event-emitter",
"node-fs",
"node-path",
- "node-process",
"node-streams",
"nonempty",
"now",
@@ -1238,7 +1198,6 @@
"parsing",
"partial",
"pipes",
- "posix-types",
"prelude",
"profunctor",
"profunctor-lenses",
@@ -1260,11 +1219,7 @@
"unfoldable",
"unicode",
"unsafe-coerce",
- "variant",
- "web-dom",
- "web-events",
- "web-file",
- "web-streams"
+ "variant"
]
},
"test": {
diff --git a/test-utils/spago.yaml b/test-utils/spago.yaml
index d85190964..4362f8e77 100644
--- a/test-utils/spago.yaml
+++ b/test-utils/spago.yaml
@@ -3,22 +3,16 @@ package:
build:
pedanticPackages: true
dependencies:
- - aff
- arrays
- bifunctors
- codec-json
- datetime
- - effect
- either
- exceptions
- - fetch
- foldable-traversable
- formatters
- - integers
- json
- maybe
- - newtype
- - node-process
- ordered-collections
- partial
- prelude
diff --git a/test-utils/src/Registry/Test/Assert.purs b/test-utils/src/Registry/Test/Assert.purs
index 55c0f2277..2d15e7a74 100644
--- a/test-utils/src/Registry/Test/Assert.purs
+++ b/test-utils/src/Registry/Test/Assert.purs
@@ -38,6 +38,18 @@ shouldNotContain container elem =
when (elem `Foldable.elem` container) do
fail (Utils.unsafeStringify elem <> "\n\nshould not be a member of\n\n" <> Utils.unsafeStringify container)
+-- | Assert that all elements in `expected` are present in `actual`.
+-- | This is a subset check, not an equality check - `actual` may contain
+-- | additional elements.
+-- |
+-- | Useful for E2E tests where a shared database means we can't predict
+-- | exact contents, only that certain expected items are present.
+shouldContainAll :: forall m a. MonadThrow Error m => Eq a => Array a -> Array a -> m Unit
+shouldContainAll actual expected =
+ Foldable.for_ expected \elem ->
+ when (elem `Foldable.notElem` actual) do
+ fail ("Expected element not found:\n" <> Utils.unsafeStringify elem <> "\n\nin array:\n" <> Utils.unsafeStringify actual)
+
shouldSatisfy :: forall m a. MonadThrow Error m => a -> (a -> Boolean) -> m Unit
shouldSatisfy a predicate =
unless (predicate a) do
diff --git a/test-utils/src/Registry/Test/E2E/Client.purs b/test-utils/src/Registry/Test/E2E/Client.purs
deleted file mode 100644
index 8d31b9850..000000000
--- a/test-utils/src/Registry/Test/E2E/Client.purs
+++ /dev/null
@@ -1,180 +0,0 @@
--- | HTTP client for making requests to the registry server during E2E tests.
--- | This module provides typed helpers for interacting with the Registry API.
-module Registry.Test.E2E.Client
- ( Config
- , ClientError(..)
- , defaultConfig
- , configFromEnv
- , getJobs
- , getJob
- , getStatus
- , publish
- , pollJob
- , printClientError
- ) where
-
-import Prelude
-
-import Codec.JSON.DecodeError as CJ.DecodeError
-import Control.Monad.Error.Class (class MonadThrow, throwError)
-import Control.Monad.Except (runExceptT)
-import Control.Monad.Trans.Class (lift)
-import Data.Array as Array
-import Data.Bifunctor (lmap)
-import Data.Codec.JSON as CJ
-import Data.DateTime (DateTime)
-import Data.Either (Either(..))
-import Data.Formatter.DateTime as Formatter.DateTime
-import Data.Int as Int
-import Data.Maybe (Maybe(..))
-import Data.Newtype (unwrap)
-import Effect (Effect)
-import Effect.Aff (Aff, Milliseconds(..), delay)
-import Effect.Aff.Class (class MonadAff, liftAff)
-import Effect.Exception (Error, error)
-import Effect.Exception as Effect.Exception
-import Fetch (Method(..))
-import Fetch as Fetch
-import JSON as JSON
-import Node.Process as Process
-import Registry.API.V1 (Job, JobId(..), LogLevel)
-import Registry.API.V1 as V1
-import Registry.Internal.Format as Internal.Format
-import Registry.Operation (PublishData)
-import Registry.Operation as Operation
-
--- | Configuration for the E2E test client
-type Config =
- { baseUrl :: String
- , timeout :: Milliseconds
- , pollInterval :: Milliseconds
- , maxPollAttempts :: Int
- }
-
--- | Default configuration for production use (port 8080 matches HTTPurple default)
-defaultConfig :: Config
-defaultConfig =
- { baseUrl: "http://localhost:8080"
- , timeout: Milliseconds 30000.0
- , pollInterval: Milliseconds 2000.0
- , maxPollAttempts: 30
- }
-
--- | Create config from environment, reading SERVER_PORT.
--- |
--- | SERVER_PORT is required and must be set by the test environment.
--- | See `nix/lib/env.nix` for the centralized environment configuration.
-configFromEnv :: Effect Config
-configFromEnv = do
- maybePort <- Process.lookupEnv "SERVER_PORT"
- case maybePort of
- Nothing -> Effect.Exception.throw "SERVER_PORT environment variable is not set. Run tests via 'nix run .#test-env' or 'nix build .#checks.x86_64-linux.integration'."
- Just port -> pure $ defaultConfig { baseUrl = "http://localhost:" <> port }
-
--- | Errors that can occur during client operations
-data ClientError
- = HttpError { status :: Int, body :: String }
- | ParseError { msg :: String, raw :: String }
- | Timeout String
- | NetworkError String
-
-printClientError :: ClientError -> String
-printClientError = case _ of
- HttpError { status, body } -> "HTTP Error " <> Int.toStringAs Int.decimal status <> ": " <> body
- ParseError { msg, raw } -> "Parse Error: " <> msg <> "\nOriginal: " <> raw
- Timeout msg -> "Timeout: " <> msg
- NetworkError msg -> "Network Error: " <> msg
-
--- | Convert a ClientError to an Effect Error for throwing
-toError :: ClientError -> Error
-toError = error <<< printClientError
-
--- | Parse JSON response body using a codec
-parseResponse :: forall a. CJ.Codec a -> String -> Either String a
-parseResponse codec body = do
- json <- lmap (append "JSON parse error: ") $ JSON.parse body
- lmap CJ.DecodeError.print $ CJ.decode codec json
-
--- | Make a GET request and decode the response
-get :: forall a. CJ.Codec a -> Config -> String -> Aff (Either ClientError a)
-get codec config path = runExceptT do
- response <- lift $ Fetch.fetch (config.baseUrl <> path) { method: GET }
- body <- lift response.text
- if response.status >= 200 && response.status < 300 then
- case parseResponse codec body of
- Left err -> throwError $ ParseError { msg: err, raw: body }
- Right a -> pure a
- else
- throwError $ HttpError { status: response.status, body }
-
--- | Make a POST request with JSON body and decode the response
-post :: forall req res. CJ.Codec req -> CJ.Codec res -> Config -> String -> req -> Aff (Either ClientError res)
-post reqCodec resCodec config path reqBody = runExceptT do
- let jsonBody = JSON.print $ CJ.encode reqCodec reqBody
- response <- lift $ Fetch.fetch (config.baseUrl <> path)
- { method: POST
- , headers: { "Content-Type": "application/json" }
- , body: jsonBody
- }
- responseBody <- lift response.text
- if response.status >= 200 && response.status < 300 then
- case parseResponse resCodec responseBody of
- Left err -> throwError $ ParseError { msg: err, raw: responseBody }
- Right a -> pure a
- else
- throwError $ HttpError { status: response.status, body: responseBody }
-
--- | Get the list of jobs
-getJobs :: Config -> Aff (Either ClientError (Array Job))
-getJobs config = get (CJ.array V1.jobCodec) config "/api/v1/jobs?include_completed=true"
-
--- | Get a specific job by ID, with optional log filtering
-getJob :: Config -> JobId -> Maybe LogLevel -> Maybe DateTime -> Aff (Either ClientError Job)
-getJob config (JobId jobId) level since = do
- let
- params = Array.catMaybes
- [ level <#> \l -> "level=" <> V1.printLogLevel l
- , since <#> \s -> "since=" <> Formatter.DateTime.format Internal.Format.iso8601DateTime s
- ]
- query = case params of
- [] -> ""
- ps -> "?" <> Array.intercalate "&" ps
- get V1.jobCodec config ("/api/v1/jobs/" <> jobId <> query)
-
--- | Check if the server is healthy
-getStatus :: Config -> Aff (Either ClientError Unit)
-getStatus config = runExceptT do
- response <- lift $ Fetch.fetch (config.baseUrl <> "/api/v1/status") { method: GET }
- if response.status == 200 then
- pure unit
- else do
- body <- lift response.text
- throwError $ HttpError { status: response.status, body }
-
--- | Publish a package
-publish :: Config -> PublishData -> Aff (Either ClientError V1.JobCreatedResponse)
-publish config publishData =
- post Operation.publishCodec V1.jobCreatedResponseCodec config "/api/v1/publish" publishData
-
--- | Poll a job until it completes or times out
-pollJob
- :: forall m
- . MonadAff m
- => MonadThrow Error m
- => Config
- -> JobId
- -> m Job
-pollJob config jobId = go 1
- where
- go attempt
- | attempt > config.maxPollAttempts =
- throwError $ toError $ Timeout $ "Job " <> unwrap jobId <> " did not complete after " <> Int.toStringAs Int.decimal config.maxPollAttempts <> " attempts"
- | otherwise = do
- liftAff $ delay config.pollInterval
- result <- liftAff $ getJob config jobId (Just V1.Debug) Nothing
- case result of
- Left err -> throwError $ toError err
- Right job ->
- case (V1.jobInfo job).finishedAt of
- Just _ -> pure job
- Nothing -> go (attempt + 1)
diff --git a/test-utils/src/Registry/Test/E2E/Fixtures.purs b/test-utils/src/Registry/Test/E2E/Fixtures.purs
deleted file mode 100644
index 70f1242b0..000000000
--- a/test-utils/src/Registry/Test/E2E/Fixtures.purs
+++ /dev/null
@@ -1,76 +0,0 @@
--- | Test fixtures for E2E tests.
--- | Contains package operation data used across multiple test suites.
-module Registry.Test.E2E.Fixtures
- ( effectPublishData
- , failingTransferData
- , trusteeAuthenticatedData
- ) where
-
-import Prelude
-
-import Data.Codec.JSON as CJ
-import Data.Maybe (Maybe(..))
-import JSON as JSON
-import Registry.Location as Location
-import Registry.Operation (AuthenticatedData, AuthenticatedPackageOperation(..), TransferData, UnpublishData)
-import Registry.Operation as Operation
-import Registry.SSH (Signature(..))
-import Registry.Test.Utils as Utils
-
--- | Standard publish data for effect@4.0.0, used by E2E tests.
--- | This matches the fixtures in app/fixtures/github-packages/effect-4.0.0
-effectPublishData :: Operation.PublishData
-effectPublishData =
- { name: Utils.unsafePackageName "effect"
- , location: Just $ Location.GitHub
- { owner: "purescript"
- , repo: "purescript-effect"
- , subdir: Nothing
- }
- , ref: "v4.0.0"
- , compiler: Utils.unsafeVersion "0.15.9"
- , resolutions: Nothing
- , version: Utils.unsafeVersion "4.0.0"
- }
-
--- | Authenticated transfer data for prelude, which has no owners in fixtures.
--- | Used to test failure scenarios in E2E tests - will fail because no owners
--- | are listed to verify the signature against.
-failingTransferData :: AuthenticatedData
-failingTransferData =
- let
- transferPayload :: TransferData
- transferPayload =
- { name: Utils.unsafePackageName "prelude"
- , newLocation: Location.GitHub
- { owner: "someone-else"
- , repo: "purescript-prelude"
- , subdir: Nothing
- }
- }
- rawPayload = JSON.print $ CJ.encode Operation.transferCodec transferPayload
- in
- { payload: Transfer transferPayload
- , rawPayload
- , signature: Signature "invalid-signature-for-testing"
- }
-
--- | Authenticated data with an intentionally invalid signature.
--- | When submitted by a trustee (packaging-team-user), pacchettibotti will re-sign it.
--- | If re-signing works, the job succeeds; if not, signature verification fails.
--- | Uses prelude@6.0.1 which exists in app/fixtures/registry/metadata/prelude.json.
-trusteeAuthenticatedData :: AuthenticatedData
-trusteeAuthenticatedData =
- let
- unpublishPayload :: UnpublishData
- unpublishPayload =
- { name: Utils.unsafePackageName "prelude"
- , version: Utils.unsafeVersion "6.0.1"
- , reason: "Testing trustee re-signing"
- }
- rawPayload = JSON.print $ CJ.encode Operation.unpublishCodec unpublishPayload
- in
- { payload: Unpublish unpublishPayload
- , rawPayload
- , signature: Signature "invalid-signature-for-testing"
- }
diff --git a/test-utils/src/Registry/Test/Fixtures.purs b/test-utils/src/Registry/Test/Fixtures.purs
new file mode 100644
index 000000000..28692c13c
--- /dev/null
+++ b/test-utils/src/Registry/Test/Fixtures.purs
@@ -0,0 +1,18 @@
+module Registry.Test.Fixtures where
+
+import Prelude
+
+import Data.Either as Either
+import Data.Maybe (Maybe(..))
+import Partial.Unsafe as Partial
+import Registry.Location (Location(..))
+import Registry.Sha256 (Sha256)
+import Registry.Sha256 as Sha256
+
+-- | A Location for use within tests.
+defaultLocation :: Location
+defaultLocation = GitHub { owner: "purescript", repo: "registry-dev", subdir: Nothing }
+
+-- | A Sha256 for use within tests.
+defaultHash :: Sha256
+defaultHash = Either.fromRight' (\_ -> Partial.unsafeCrashWith "Failed to parse Sha256") $ Sha256.parse "sha256-fN9RUAzN21ZY4Y0UwqUSxwUPVz1g7/pcqoDvbJZoT04="
diff --git a/test-utils/src/Registry/Test/Utils.purs b/test-utils/src/Registry/Test/Utils.purs
index 2db7280e5..57f177890 100644
--- a/test-utils/src/Registry/Test/Utils.purs
+++ b/test-utils/src/Registry/Test/Utils.purs
@@ -25,7 +25,6 @@ import Registry.PackageName (PackageName)
import Registry.PackageName as PackageName
import Registry.Range as Range
import Registry.SSH as SSH
-import Registry.Sha256 (Sha256)
import Registry.Sha256 as Sha256
import Registry.Version (Version)
import Registry.Version as Version
@@ -149,6 +148,7 @@ unsafeManifest name version dependencies = Manifest
{ url: "https://github.com/purescript/purescript-" <> name <> ".git"
, subdir: Nothing
}
+ , ref: "v" <> version
, description: Nothing
, owners: Nothing
, includeFiles: Nothing
@@ -158,11 +158,3 @@ unsafeManifest name version dependencies = Manifest
-- | Format a package version as a string in the form 'name@X.Y.Z'
formatPackageVersion :: PackageName -> Version -> String
formatPackageVersion name version = PackageName.print name <> "@" <> Version.print version
-
--- | A Location for use within tests.
-defaultLocation :: Location
-defaultLocation = GitHub { owner: "purescript", repo: "registry-dev", subdir: Nothing }
-
--- | A Sha256 for use within tests.
-defaultHash :: Sha256
-defaultHash = fromRight "Failed to parse Sha256" $ Sha256.parse "sha256-fN9RUAzN21ZY4Y0UwqUSxwUPVz1g7/pcqoDvbJZoT04="
diff --git a/types/v1/Manifest.dhall b/types/v1/Manifest.dhall
index e9fe88850..2f1a6fa5b 100644
--- a/types/v1/Manifest.dhall
+++ b/types/v1/Manifest.dhall
@@ -13,6 +13,7 @@ let Manifest =
, license : License
, version : Version
, location : ./Location.dhall
+ , ref : Text
, owners : Optional (List ./Owner.dhall)
, description : Optional Text
, includeFiles : Optional (List Text)