Open VSX is a vendor-neutral open-source alternative to the Visual Studio Marketplace. It provides a server application that manages VS Code extensions in a database, a web application similar to the VS Code Marketplace, and a command-line tool for publishing extensions similar to vsce.
A public instance of Open VSX is running at open-vsx.org. Please report issues related to that instance at EclipseFdn/open-vsx.org.
For information on publishing and managing extensions at open-vsx.org, please see the EclipseFdn/open-vsx.org wiki.
See the openvsx Wiki for documentation of general concepts and usage of this project.
- The easiest way to get a development environment for this project is to open it in Gitpod.
Click Open Browser on port 3000 to see the running web application.
- Open a development environment in Red Hat OpenShift Dev Spaces, an open-source product based on Eclipse Che that runs on OpenShift Dedicated.
- `yarn build` — build the library and `ovsx` command
- `yarn watch` — watch (build continuously)

The command line tool is available at `cli/lib/ovsx`.
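To verify the build, you can invoke the compiled CLI directly (a minimal sketch; `--help` prints the available commands):

```
cd cli
yarn build
node lib/ovsx --help
```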
The default frontend is the one bundled in the Docker image, and is also used for testing in the development environment. It depends on the compiled library, so make sure to build or watch the library before you build or watch the default frontend.
- `yarn build` — build the library
- `yarn watch` — watch (build continuously)
- `yarn build:default` — build the default frontend (run webpack)
- `yarn watch:default` — run webpack in watch mode
- `yarn start:default` — start Express to serve the frontend on port 3000
The Express server is started automatically in Gitpod. A restart is usually not necessary.
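For development outside Gitpod, a typical workflow (a sketch assuming the frontend lives in the `webui` directory) is to run the watch and serve tasks in separate terminals:

```
cd webui
yarn watch            # terminal 1: rebuild the library on change
yarn watch:default    # terminal 2: rebuild the default frontend on change
yarn start:default    # terminal 3: serve the frontend on port 3000
```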
- `./gradlew build` — build and test the server
- `./gradlew assemble -t` — build continuously (the server is restarted after every change)
- `./gradlew runServer` — start the Spring server on port 8080
- `./scripts/test-report.sh` — display test results on port 8081
The Spring server is started automatically in Gitpod. It includes `spring-boot-devtools`, which detects changes in the compiled class files and restarts the server.
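Once the server is up, a quick smoke test (a sketch assuming the registry's `/api/-/search` endpoint, which serves extension search requests):

```
curl "http://localhost:8080/api/-/search?size=1"
```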
If you would like to test authorization through GitHub, you need to create a GitHub OAuth app with a callback URL pointing to the exposed port 8080 of your Gitpod workspace. You can obtain this URL by calling a script:

```
server/scripts/callback-url.sh github
```
Note that the callback URL needs to be updated on GitHub whenever you create a fresh Gitpod workspace.
After you have created the GitHub OAuth app, the next step is to copy the Client ID and Client Secret into Gitpod environment variables named `GITHUB_CLIENT_ID` and `GITHUB_CLIENT_SECRET`, bound to this repository. If you change the variables in a running workspace, run `scripts/generate-properties.sh` in the `server` directory to update the application properties.
With these settings in place, you should be able to log in by authorizing your OAuth app.
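In a running Gitpod workspace, the variables can be set with the `gp` CLI (the values below are placeholders):

```
gp env GITHUB_CLIENT_ID=<your-client-id>
gp env GITHUB_CLIENT_SECRET=<your-client-secret>
cd server && ./scripts/generate-properties.sh
```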
If you prefer to quickly get started with a local, Docker-based development environment, you can use the approach described in our Docker Compose setup. Our Docker Compose profiles let you either run a service directly in a Docker container or build and run it manually on your local machine.
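As a loose sketch (the profile name here is hypothetical; check the compose file for the profiles actually defined):

```
# start the stack with a chosen profile; replace <profile> with one defined in the compose file
docker compose --profile <profile> up
```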
If you would like to test file storage via Google Cloud, follow these steps:
- Create a GCP project and a bucket.
- Make the bucket public by granting the role "Storage Object Viewer" to `allUsers`.
- Configure CORS on the bucket with origin `"*"` and method `"GET"` (see the sketch after this list).
- Create environment variables named `GCP_PROJECT_ID` and `GCS_BUCKET_ID` containing your GCP project and bucket identifiers. If you change the variables in a running workspace, run `scripts/generate-properties.sh` in the `server` directory to update the application properties.
- Create a GCP service account with role "Storage Object Admin" and copy its credentials file into your workspace.
- Create an environment variable `GOOGLE_APPLICATION_CREDENTIALS` containing the path to the credentials file.
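The public access and CORS steps can be scripted with `gsutil` (a sketch; the bucket name is a placeholder):

```
# grant public read access to objects in the bucket
gsutil iam ch allUsers:objectViewer gs://<your-bucket>

# write the CORS rules and apply them to the bucket
cat > cors.json <<'EOF'
[{"origin": ["*"], "method": ["GET"]}]
EOF
gsutil cors set cors.json gs://<your-bucket>
```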
If you would like to test file storage via Azure Blob, follow these steps:
- Create a file storage account and a container named `openvsx-resources` (a different name is possible if you change the `ovsx.storage.azure.blob-container` property).
- Allow Blob public access in the storage account and set the container's public access level to "Blob".
- Configure CORS in your storage account with origin `"*"`, method `"GET"`, and allowed headers `"x-market-client-id, x-market-user-id, x-client-name, x-client-version, x-machine-id, x-client-commit"` (see the sketch after this list).
- Create an environment variable `AZURE_SERVICE_ENDPOINT` with the "Blob service" URL of your storage account. If you change the variables in a running workspace, run `scripts/generate-properties.sh` in the `server` directory to update the application properties.
- Generate a "Shared access signature" and put its token into an environment variable `AZURE_SAS_TOKEN`.
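The container and CORS settings can also be applied with the Azure CLI (a sketch; the account name is a placeholder):

```
az storage container create --account-name <account> --name openvsx-resources --public-access blob
az storage cors add --account-name <account> --services b --methods GET --origins "*" \
  --allowed-headers "x-market-client-id,x-market-user-id,x-client-name,x-client-version,x-machine-id,x-client-commit"
```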
If you would also like to test the download count via Azure Blob, follow these steps:
- Create an additional storage account for diagnostics logging.
  - IMPORTANT: use the same location as the file storage account (e.g. North Europe).
  - Disable Blob public access.
- In the file storage account:
  - Open the diagnostic settings (`Monitoring` -> `Diagnostic settings (preview)`).
  - Click `blob`.
  - Click `Add diagnostic setting`.
  - Select `StorageRead`, `Transaction` and `Archive to a storage account`.
  - Select the diagnostic storage account you created in the previous step as `Storage account`.
- Back in the diagnostic storage account:
  - Navigate to `Data Storage` -> `Containers`.
    - The `insights-logs-storageread` container should have been added (it might take a few minutes, and you might need to do some test downloads, or it won't get created).
    - Create a "Shared access token" for the `insights-logs-storageread` container:
      - Click on the `insights-logs-storageread` container.
      - Click on `Settings` -> `Shared access token`.
      - It must have `Read` and `List` permissions.
      - Set the expiry date to a reasonable value.
      - Set the "Allowed IP Addresses" to the server's IP address.
  - Go to `Data Management` -> `Lifecycle management`.
    - Create a rule so that logs don't pile up and the download count service stays performant:
      - Select `Limit blobs with filters`, `Block blobs` and `Base blobs`.
      - Pick a number of days (e.g. 7).
      - Enter `insights-logs-storageread/resourceId=` as the blob prefix to limit the rule to the `insights-logs-storageread` container.
- Add two environment variables to your server environment (example below):
  - `AZURE_LOGS_SERVICE_ENDPOINT` with the "Blob service" URL of your diagnostic storage account. The URL must end with a slash!
  - `AZURE_LOGS_SAS_TOKEN` with the shared access token for the `insights-logs-storageread` container.
  - If you change the variables in a running workspace, run `scripts/generate-properties.sh` in the `server` directory to update the application properties.
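For example (hypothetical account name; note the trailing slash on the endpoint):

```
export AZURE_LOGS_SERVICE_ENDPOINT="https://<diagnostics-account>.blob.core.windows.net/"
export AZURE_LOGS_SAS_TOKEN="<shared access token for insights-logs-storageread>"
```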
If you would like to test file storage via Amazon S3, follow these steps:
- Log in to the AWS Console and create an S3 storage bucket.
- Go to the bucket's `Permissions` tab:
  - Disable the `Block all public access` setting.
  - Add a `Cross-origin resource sharing (CORS)` configuration:

    ```json
    [
        {
            "AllowedHeaders": ["*"],
            "AllowedMethods": ["GET", "HEAD"],
            "AllowedOrigins": ["*"],
            "ExposeHeaders": []
        }
    ]
    ```
- Follow the steps for programmatic access to create your access key ID and secret access key.
- Configure the following environment variables on your server environment (example below):
  - `AWS_ACCESS_KEY_ID` with your access key ID
  - `AWS_SECRET_ACCESS_KEY` with your secret access key
  - `AWS_REGION` with your bucket's region name
  - `AWS_SERVICE_ENDPOINT` with the URL of your S3 provider if not using AWS (for AWS, do not set this)
  - `AWS_BUCKET` with your bucket name
  - `AWS_PATH_STYLE_ACCESS` with whether or not to use path-style access (defaults to `false`)
    - Path-style access: `https://s3.<region>.amazonaws.com/<bucket-name>/<resource-key>`
    - Virtual-hosted-style access: `https://<bucket-name>.s3.<region>.amazonaws.com/<resource-key>`
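For example (hypothetical values; the endpoint and path-style flag are shown for a non-AWS, S3-compatible provider):

```
export AWS_ACCESS_KEY_ID="<access-key-id>"
export AWS_SECRET_ACCESS_KEY="<secret-access-key>"
export AWS_REGION="us-east-1"
export AWS_BUCKET="<bucket-name>"
export AWS_SERVICE_ENDPOINT="https://s3.example.com"  # omit when using AWS itself
export AWS_PATH_STYLE_ACCESS="true"                   # many non-AWS providers require path-style access
```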