diff --git a/demo/docs/002_what_is_the_seqera_platform.md b/demo/docs/002_what_is_the_seqera_platform.md index 43b5379..f48ffee 100644 --- a/demo/docs/002_what_is_the_seqera_platform.md +++ b/demo/docs/002_what_is_the_seqera_platform.md @@ -24,6 +24,6 @@ Seqera offers two deployment methods: ## Core components -The Platform consists of three main architectural components: a backend container, a frontend container, and a database that stores all of the data required by the application. The frontend container communicates with the backend container and database via API calls. As a result, all features and activities available through the user interface can also be accessed programmatically via the Seqera Platform API. For more information, see the [Automation](./014_automation_on_the_seqera_platform.md) section later in the walkthrough. +The Platform consists of three main architectural components: a backend container, a frontend container, and a database that stores all of the data required by the application. The frontend container communicates with the backend container and database via API calls. As a result, all features and activities available through the user interface can also be accessed programmatically via the Seqera Platform API. For more information, see the [Automation](./015_automation_on_the_seqera_platform.md) section later in the walkthrough. This walkthrough will demonstrate the various features of the Seqera Platform that make it easier to build, launch, and manage scalable data pipelines. 
\ No newline at end of file diff --git a/demo/docs/004_launching_pipelines.md b/demo/docs/004_launching_pipelines.md index 34d18c7..3c66e93 100644 --- a/demo/docs/004_launching_pipelines.md +++ b/demo/docs/004_launching_pipelines.md @@ -4,10 +4,10 @@ Each workspace has a Launchpad that allows users to easily create and share Next Users can create their own pipelines, share them with others on the Launchpad, or tap into over a hundred community pipelines available on nf-core and other sources. - /// details | Advanced - type: info - +/// details | Advanced +type: info + Adding a new pipeline is relatively simple and can be included as part of the demonstration. See [Add a Pipeline](./005_adding_a_pipeline.md). /// @@ -17,66 +16,76 @@ Adding a new pipeline is relatively simple and can be included as part of the de Navigate to the Launchpad in the `seqeralabs/showcase` workspace and select **Launch** next to the `nf-core-rnaseq` pipeline to open the launch form. - /// details | Click to show animation - type: example +/// details | Click to show animation +type: example - ![Launch a pipeline](assets/sp-cloud-launch-form.gif) +![Launch a pipeline](assets/sp-cloud-launch-form.gif) /// +The launch form consists of General config, Run parameters, and Advanced options sections where you specify your run settings before execution, followed by an execution summary. -### 2. Nextflow parameter schema -When you select **Launch**, a parameters page is shown to allow you to fine-tune the pipeline execution. This parameters form is rendered from a file called [`nextflow_schema.json`](https://github.com/nf-core/rnaseq/blob/master/nextflow_schema.json) which can be found in the root of the pipeline Git repository. The `nextflow_schema.json` file is a simple JSON-based schema describing pipeline parameters that allows pipeline developers to easily adapt their in-house Nextflow pipelines to be executed via the interactive Seqera Platform web interface. 
+The General config section contains the following fields: -See the ["Best Practices for Deploying Pipelines with the Seqera Platform"](https://seqera.io/blog/best-practices-for-deploying-pipelines-with-seqera-platform/) blog for further information on how to automatically build the parameter schema for any Nextflow pipeline using tooling maintained by the nf-core community. +- **Pipeline to launch**: The pipeline Git repository name or URL. +- **Revision number**: A valid repository commit ID, tag, or branch name. For nf-core/rnaseq, this is prefilled. +- **Config profiles**: One or more configuration profile names to use for the execution. This pipeline will use the `test` profile. +- **Workflow run name**: A unique identifier for the run, initially generated as a combination of an adjective and a scientist's name; it can be modified as needed. +- **Labels**: Assign new labels to the run in addition to `yeast`. +- **Compute environment**: Select an existing workspace compute environment. This pipeline will use the `seqera_aws_ireland_fusionv2_nvme` compute environment. +- **Work directory**: The (cloud or local) file storage path where pipeline scratch data is stored. Platform will create a scratch sub-folder if only a cloud bucket location is specified. This pipeline will use the `s3://seqeralabs-showcase` bucket. -### 3. Parameter selection -Adjust the following Platform-specific options if needed: +After specifying the General config, the Run parameters page appears, allowing you to fine-tune pipeline execution. This form is generated from the pipeline's `nextflow_schema.json` file, which defines pipeline parameters in a simple JSON-based schema. This schema enables pipeline developers to easily adapt their Nextflow pipelines for execution via the Seqera Platform web interface. 
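+To make the schema-to-form relationship concrete, a `nextflow_schema.json` fragment typically looks like the sketch below. The field names follow the nf-core schema conventions (`definitions`, `properties`, `format`), but this exact excerpt is illustrative rather than copied from nf-core/rnaseq:
+
+```json
+{
+  "$schema": "http://json-schema.org/draft-07/schema",
+  "title": "Example pipeline parameters",
+  "type": "object",
+  "definitions": {
+    "input_output_options": {
+      "title": "Input/output options",
+      "type": "object",
+      "required": ["input", "outdir"],
+      "properties": {
+        "input": {
+          "type": "string",
+          "format": "file-path",
+          "description": "Path to a comma-separated samplesheet describing the input samples."
+        },
+        "outdir": {
+          "type": "string",
+          "format": "directory-path",
+          "description": "Directory where the pipeline publishes its results."
+        }
+      }
+    }
+  }
+}
+```
+
+The `type`, `format`, and `description` fields are what let the interface render an appropriate widget (for example, a file browser for `file-path` parameters) with inline help text for each parameter.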
-- `Workflow run name`: +For more information on automatically building the parameter schema for any Nextflow pipeline, refer to the ["Best Practices for Deploying Pipelines with the Seqera Platform"](https://seqera.io/blog/best-practices-for-deploying-pipelines-with-seqera-platform/) blog. - A unique identifier for the run, pre-filled with a random name. This can be customized. +You can enter Run parameters in three ways: - `Labels`: +- **Input form view**: Enter text or select attributes from lists, and browse input and output locations with Data Explorer. +- **Config view**: Edit raw configuration text directly in JSON or YAML format. +- **Upload params file**: Upload a JSON or YAML file containing run parameters. - Assign new or existing labels to the run. For example, Project ID or genome version. +Specify the following parameters for nf-core/rnaseq: -Each pipeline including nf-core/rnaseq will have its own set of parameters that need to be provided in order to run it. The following parameters are mandatory: +- `input`: Most nf-core pipelines have standardized the usage of the `input` parameter to specify an input samplesheet that contains paths to any input files (such as FastQ files) and any additional metadata required to run the pipeline. The `input` parameter can accept a file path to a samplesheet in the S3 bucket selected through Data Explorer (such as `s3://my-bucket/my-samplesheet.csv`). Alternatively, the Seqera Platform has a Datasets feature that allows you to upload structured data like samplesheets for use with Nextflow pipelines. For the purposes of this demonstration, select **Browse** next to the `input` parameter and search for and select a pre-loaded dataset called "rnaseq_samples". 
+ +/// details | Click to show animation +type: example -- `input`: +![Input parameters](assets/sp-cloud-launch-parameters-input.gif) +/// - Most nf-core pipelines have standardized the usage of the `input` parameter to specify an input samplesheet that contains paths to any input files (such as FastQ files) and any additional metadata required to run the pipeline. The `input` parameter can accept a file path to a samplesheet in the S3 bucket selected through Data Explorer (such as `s3://my-bucket/my-samplesheet.csv`). Alternatively, the Seqera Platform has a Datasets feature that allows you to upload structured data like samplesheets for use with Nextflow pipelines. +/// details | Advanced +type: info - For the purposes of this demonstration, select **Browse** next to the `input` parameter and search and select a pre-loaded dataset called "rnaseq_samples". +Users can upload their own samplesheets and make them available as a dataset in the 'Datasets' tab. See [Add a dataset](./006_adding_a_dataset.md). +/// - /// details | Click to show animation - type: example +- `outdir`: Most nf-core pipelines have standardized the usage of the `outdir` parameter to specify where the final results created by the pipeline are published. `outdir` must be different for each pipeline run. Otherwise, your results will be overwritten. Since we want to publish these files to an S3 bucket, we must provide the directory path to the appropriate storage location (such as `s3://my-bucket/my-results`). - ![Input parameters](assets/sp-cloud-launch-parameters-input.gif) - /// - + For the `outdir` parameter, specify an S3 directory path manually, or select **Browse** to specify a cloud storage directory using Data Explorer. - /// details | Advanced - type: info - - Users can upload their own samplesheets and make them available as a dataset in the 'Datasets' tab. See [Add a dataset](./006_adding_a_dataset.md). 
- /// + /// details | Click to show animation + type: example -- `outdir`: + ![Output parameters](assets/sp-cloud-run-parameters.gif) + /// - Most nf-core pipelines have standardized the usage of the `outdir` parameter to specify where the final results created by the pipeline are published. `outdir` must be different for each different pipeline run. Otherwise, your results will be overwritten. Since we want to publish these files to an S3 bucket, we must provide the directory path to the appropriate storage location (such as `s3://my-bucket/my-results). +Users can easily modify and specify other parameters to customize the pipeline execution through the parameters form. For example, in the **Read trimming options** section of the parameters page, set the `trimmer` parameter to `fastp` in the dropdown menu, instead of `trimgalore`. - For the `outdir` parameter, specify an S3 directory path manually, or select **Browse** to specify a cloud storage directory using Data Explorer. +![Read trimming options](./assets/trimmer-settings.png) +### 4. Launch form: Advanced options - /// details | Click to show animation - type: example - - ![Output parameters](assets/sp-cloud-launch-parameters-outdir.gif) - /// +The Advanced options allow you to specify additional settings for the pipeline execution. These include: -Users can easily modify and specify other parameters to customize the pipeline execution through the parameters form. For example, in the **Read trimming options** section of the parameters page, change the `trimmer` to select `fastp` in the dropdown menu, instead of `trimgalore`, and select **Launch** button. +- **Resource labels**: Use resource labels to tag the computing resources created during the workflow execution. +- **Nextflow config**: Specify Nextflow configuration options to customize task execution. For example, you can specify an error handling strategy to continue the workflow even if some tasks fail. 
+- **Pipeline secrets**: Pipeline secrets store keys and tokens used by workflow tasks to interact with external systems. Enter the names of any stored user or workspace secrets required for the workflow execution. -![Read trimming options](./assets/trimmer-settings.png) +After you have filled in the necessary launch details, select **Launch**. diff --git a/demo/docs/011_tertiary_analysis_data_studios.md b/demo/docs/011_interactive_analysis_data_studios.md similarity index 71% rename from demo/docs/011_tertiary_analysis_data_studios.md rename to demo/docs/011_interactive_analysis_data_studios.md index 76c518a..9332a0f 100644 --- a/demo/docs/011_tertiary_analysis_data_studios.md +++ b/demo/docs/011_interactive_analysis_data_studios.md @@ -1,22 +1,36 @@ ## Introduction to Data Studios -After running a pipeline, you may want to perform tertiary analysis in platforms like Jupyter Notebook or RStudio. Setting up the infrastructure for these platforms, including accessing pipeline data, results, and necessary bioinformatics packages, can be complex and time-consuming. +After running a pipeline, you may want to perform interactive analysis in platforms like Jupyter Notebook or RStudio using your preferred tools. Setting up the infrastructure for these platforms, including accessing pipeline data, results, and necessary bioinformatics packages, can be complex and time-consuming. -Data Studios streamlines this process for Seqera Platform users by allowing them to add interactive analysis environments based on templates, similar to how they add and share pipelines and datasets. +Data Studios simplifies this process for Seqera Platform users by enabling them to create interactive analysis environments using container image templates or custom images, much like the way they add and share pipelines and datasets. The Platform manages all the details, enabling users to easily select their preferred interactive tool and analyze their data within the platform. 
On the **Data Studios** tab, you can monitor and see the details of the data studios in your workspace. -Data studios will have a name, followed by the cloud provider they are run on, the container image being used (Jupyter, VS Code, or RStudio), the user who created the data studio, the timestamp of creation, and the status indicating whether it has started, stopped, or is running. +Data studios will have a name, followed by the cloud provider they are run on, the container image being used (Jupyter, VS Code, RStudio, or a custom container), the user who created the data studio, the timestamp of creation, and the [status of the session](https://docs.seqera.io/platform/24.2/data_studios#session-statuses). ![Data studios overview](./assets/data-studios-overview.png) Select the three dots menu to: -- See the details of the data studio -- Connect to the studio +- View the details of the data studio - Start the studio -- Stop the studio +- Start the studio as a new session - Copy the data studio URL +- Stop the studio + +### Environments +Data Studios offers four container image templates: JupyterLab, RStudio Server, Visual Studio Code, and Xpra. These templates initially install a minimal set of packages, allowing you to add more as needed during a session. Customized studios display an arrow icon with a tooltip indicating the modified template. + +In addition to the Seqera-provided container template images, you can provide your own custom container environments by augmenting the Seqera-provided images with a list of Conda packages or by providing your own base container template image. + +Data Studios uses the Wave service to build custom container template images. + +/// details | Click to show animation
    type: example

![Data Studio overview details](assets/sp-cloud-data-studios-overview.gif)
/// + ## Analyse RNAseq data in Data Studios @@ -118,34 +132,8 @@ To share the results of your RNAseq analysis or allow colleagues to perform expl ### 5. 
Takeaway This example demonstrates how Data Studios allows you to perform interactive analysis and explore the results of your secondary data analysis all within one unified platform. It simplifies the setup and data management process, making it easier for you to gain insights from your data efficiently. -## Analysing genomic data using IGV desktop in Data Studios - -We can use the Xpra data studio image to visualize genetic variants using IGV desktop. The stock Xpra image does not come with IGV preinstalled, so we will need to install it and then use IGV to visualize a variant in the 1000 Genomes Project. - -### 1. Open the Xpra data studio -Select the existing **xpra-demo** data studio. - -When you click on "Start" you will see that the data studio is mounting the `xpra-1000G` bucket. This is the 1000 Genomes public bucket ` -s3://1000genomes`, but we have created a second data link inside the workspace to not block or collide with the data link titled `1000 genomes`. - - -### 2. Upload IGV install script and copy to `/workspace` -To make it easier to get IGV and it's requirements installed efficiently, we created a small script that will download the IGV prebuild binaries for Linux and install them inside the data studio. To use the install script, upload `download_and_install_igv.sh` by clicking the top left navbar, select Server -> Upload file. This will upload the file to `/root`. Let's copy it to `/workspace` with the `cp` command: `cp /root/download_and_install_igv.sh /workspace`. - -![Xpra upload file](assets/xpra-data-studios-upload-file.png) - -### 3. Install IGV -Run the script with `bash`: `bash /workspace/download_and_install_igv.sh`. This will download and install IGV desktop and open it. You should see the IGV desktop window open if everything worked correctly. - -![Xpra IGV desktop](assets/xpra-data-studios-IGV-desktop.png) - - -### 4. View 1000 Genomes Project data in IGV -Inside IGV desktop, change the genome version to hg19. 
Then click on File -> Load from File and select the following file as shown in the screenshot. -`/workspace/data/xpra-1000Genomes/phase3/data/HG00096/high_coverage_alignment`. -![Xpra IGV desktop](assets/xpra-data-studios-IGV-load-bam.png) +For more examples of using Data Studios for interactive analysis, see the following guides: +- [Analysing genomic data using IGV desktop in Data Studios](013_create_xpra_igv_environment.md) -Search for PCSK9 and zoom into one of the exons of the gene. If you are on genome version hg19 and everything worked as expected, you should be able to see a coverage graph and reads as shown in the screenshot below: -![Xpra IGV desktop](assets/xpra-data-studios-IGV-view-bam.png) \ No newline at end of file diff --git a/demo/docs/012_setting_up_data_studio.md b/demo/docs/012_setting_up_data_studio.md index 02b43e5..09d52ed 100644 --- a/demo/docs/012_setting_up_data_studio.md +++ b/demo/docs/012_setting_up_data_studio.md @@ -2,98 +2,132 @@ ### Create a data studio -#### 1. Add a data studio {#hidden-heading} +In a workspace, select Data Studios, and then select **Add data studio**. -To create a data studio, select **Add data studio** and select a template. Currently, templates for Jupyter, VS Code, and RStudio are available. +#### 1. Compute & Data {#hidden-heading} +Customize the following fields: + +For Compute: + +- 1\. Select an available AWS Batch compute environment +- 2\. **CPUs allocated**: Select the number of CPUs to allocate to the data studio +- 3\. **GPUs allocated**: Available only if the selected compute environment has GPU support enabled +- 4\. **Maximum memory allocated**: Select the maximum memory to allocate + +For Data: + +- To mount data, select **Mount data** and choose the data to mount from the Data Explorer modal. Confirm your selection by clicking **Mount data**. The mounted data will be accessible at `/workspace/data/` via the Fusion file system. 
Note that the mounted data does not need to be in the same compute environment or region as the data studio's cloud provider. However, this may lead to increased costs or errors. +- Click the **Next** button to proceed to the next section. /// details | Click to show animation - type: example + type: example -![Add a data studio](assets/create-data-studio.gif) +![Add a data studio](assets/sp-cloud-ds-compute-and-data.gif) /// -#### 2. Select a compute environment {#hidden-heading} - -Currently, only AWS Batch is supported. +#### 2. General config {#hidden-heading} -#### 3. Mount data using Data Explorer {#hidden-heading} -##### Create a data link -To enable access to data in a dtudio, create a custom data link pointing to the directory in the AWS S3 bucket where the results are saved. This will allow us to read and write only the data we need from cloud storage, from within our Studio. +For the General config section, you can specify details about your data studio environment. You can: -Select the **Add cloud bucket** button in Data Explorer and specify the path to the output directory: +- Use a pre-built container template provided by Seqera +- Use a pre-built container template provided by Seqera and install Conda packages +- Use a custom container template image that you supply -![Create data link](assets/create-a-data-link.png){ .center } +To use one of the Seqera-provided container templates, complete the following steps: -##### Mount the data link into the studio -Select data to mount into your data studio environment using the Fusion file system in Data Explorer. In the Data Explorer, you can select the newly created data link to mount. +- 1\. Container template: Select a data studio template from the dropdown list. +- 2\. Data studio name +- 3\. Optional: Description +- 4\. Optional: Select **Install Conda packages** to enter or upload a list of Conda packages + For example: -This data will be available at `/workspace/data/`. 
+
+```
+name: myenv
+channels:
+  - conda-forge
+dependencies:
+  - python=3.10
+  - numpy
+```
+- 5\. Select **Next**. /// details | Click to show animation type: example -![Mount data into studio](assets/mount-data-into-studio.gif) +![Use a studio template](assets/sp-cloud-ds-use-template.gif) /// -#### 4. Resources for environment {#hidden-heading} +To use a custom container template image that you supply, complete the following steps: -Enter a CPU or memory allocation for your data studio environment (optional). The default is 2 CPUs and 8192 MB of memory. - -Then, select **Add**. - -The data studio environment will be available in the Data Studios landing page with the status 'stopped'. Select the three dots and **Start** to begin running the studio. +- 1\. Container template: Select **Prebuild container image** from the list. This should be a Docker image stored in a public registry or AWS private/public ECR. For information on how to provide your own image with the dependencies you need for your data studio, see [Custom container template image](https://docs.seqera.io/platform/24.2/data_studios#custom-container-template-image). +- 2\. Data studio name +- 3\. Optional: Description +If you select the Prebuild container image template, you cannot select Install Conda packages as these options are mutually exclusive. /// details | Click to show animation type: example -![Start a studio](assets/start-studio.gif) +![Use a custom container](assets/sp-cloud-ds-custom-container.gif) /// +#### 3. Summary {#hidden-heading} + +The last step will bring you to a summary page where you can review your configuration and save the data studio. Once you have ensured that the specified configuration is correct, you can: -![Connect to a studio](assets/connect-to-studio.png){ .right .image} +- Save your configuration: If you want to save the data studio for future use, select **Add only**. 
+- Save and immediately start the data studio: If you want to save the configuration and start a session right away, select **Add and start**. + +After setup, you'll be directed to the Data Studios landing page, where you can view and manage your data studio sessions. The status of your newly created data studio will be displayed as either "stopped" or "starting", depending on whether you chose to add it or start a session immediately. + +/// details | Click to show animation + type: example + +![Add and start studio](assets/sp-cloud-ds-add-start.gif) +/// ### Connect to a data studio -To connect to a running data studio session, select the three dots next to the status message and choose **Connect**. A new browser tab will open, displaying the status of the data studio session. Select **Connect**. -
-
+Once you have created a data studio, you can connect to it by selecting the three dots next to the status message and choosing **Connect**. A new browser tab will open, displaying the status of the data studio session. Select **Connect**. ### Collaborate in a data studio Collaborators can also join a data studio session in your workspace. For example, to share the results of the nf-core/rnaseq pipeline, you can share a link by selecting the three dots next to the status message for the data studio you want to share, then selecting **Copy data studio URL**. Using this link, other authenticated users with the "Connect" role (at minimum) can access the session directly. +
![Stop a studio session](assets/stop-a-studio.png){ .right .image} + ### Stop a data studio To stop a running session, select the three dots next to the status and select **Stop**. Any unsaved analyses or results will be lost.
+

- - /// details | Advanced - type: info + type: info -For a more detailed use case of performing tertiary analysis with the results of the nf-core/rnaseq pipeline in an RStudio/RShiny app environment, take see [Tertiary analysis with Data Studios](./011_tertiary_analysis_data_studios.md). +For a more detailed use case of performing interactive analysis with the results of the nf-core/rnaseq pipeline in an RStudio/RShiny app environment, see [Interactive analysis with Data Studios](./011_interactive_analysis_data_studios.md). /// -## Checkpoints in Data Studios +### Checkpoints in Data Studios + +Data Studios automatically saves changes to the root filesystem every five minutes in the compute environment's `.studios/checkpoints` folder. These checkpoints are valuable for long-term projects or complex environments, ensuring that setup isn't lost between sessions. They can be shared with colleagues, saving them setup time. -When starting a data studio, a checkpoint gets created. This checkpoint allows you to restart a data studio with previously installed software and changes made to the root filesystem of the container. Please note, that if you stop a data studio and restart it, this will restart it from the latest checkpoint. To go back to a specific previous configuration of data studio session, please restart it from a checkpoint as highlighted in the screenshot below: +Checkpoints preserve packages and configurations but not mounted data changes. You can restore from a previous checkpoint when starting a new session. Checkpoints can be renamed and are automatically cleaned up when their data studios are deleted. 
To return to a previous configuration, restart the session from a checkpoint as shown below: ![Data Studio checkpoints](assets/data-studio-checkpoints.png) ## More information -For a detailed explanation about specific concepts of Data Studios and the tools preinstalled in Data Studios images, see the [Seqera Platform docs](https://docs.seqera.io/platform/23.4.0/data/data-studios). +For a detailed explanation of specific Data Studios concepts, the tools preinstalled in Seqera-provided Data Studios images, and help on creating your own custom images for Studios, see the [Seqera Platform docs](https://docs.seqera.io/platform/24.2/data_studios). /// details | Advanced - type: info + type: info -For additional details on Data Studios based on a demonstration from Rob Newman, see [Data Studios deep dive](./013_data_studios_deep_dive.md). +For additional details on Data Studios based on a demonstration from Rob Newman, see [Data Studios deep dive](./014_data_studios_deep_dive.md). /// diff --git a/demo/docs/013_create_xpra_igv_environment.md b/demo/docs/013_create_xpra_igv_environment.md new file mode 100644 index 0000000..b768a24 --- /dev/null +++ b/demo/docs/013_create_xpra_igv_environment.md @@ -0,0 +1,31 @@ +## Analysing genomic data using IGV desktop in Data Studios + +We can use the Xpra data studio image to visualize genetic variants using IGV desktop. The stock Xpra image does not come with IGV preinstalled, so we will need to install it and then use IGV to visualize a variant in the 1000 Genomes Project. + +### 1. Open the Xpra data studio +Select the existing **xpra-demo** data studio. + +When you click on "Start" you will see that the data studio is mounting the `xpra-1000G` bucket. This is the 1000 Genomes public bucket +`s3://1000genomes`, but we have created a second data link inside the workspace to avoid colliding with the data link titled `1000 genomes`. + + +### 2. 
Upload IGV install script and copy to `/workspace` +To make it easier to get IGV and its requirements installed efficiently, we created a small script that will download the prebuilt IGV binaries for Linux and install them inside the data studio. To use the install script, upload `download_and_install_igv.sh`: click the top-left navbar and select Server -> Upload file. This will upload the file to `/root`. Let's copy it to `/workspace` with the `cp` command: `cp /root/download_and_install_igv.sh /workspace`. + +![Xpra upload file](assets/xpra-data-studios-upload-file.png) + +### 3. Install IGV +Run the script with `bash`: `bash /workspace/download_and_install_igv.sh`. This will download and install IGV desktop and open it. You should see the IGV desktop window open if everything worked correctly. + +![Xpra IGV desktop](assets/xpra-data-studios-IGV-desktop.png) + + +### 4. View 1000 Genomes Project data in IGV +Inside IGV desktop, change the genome version to hg19. Then click on File -> Load from File and select the following file as shown in the screenshot. +`/workspace/data/xpra-1000Genomes/phase3/data/HG00096/high_coverage_alignment`. +![Xpra IGV desktop](assets/xpra-data-studios-IGV-load-bam.png) + + +Search for PCSK9 and zoom into one of the exons of the gene. 
If you are on genome version hg19 and everything worked as expected, you should be able to see a coverage graph and reads as shown in the screenshot below: + +![Xpra IGV desktop](assets/xpra-data-studios-IGV-view-bam.png) \ No newline at end of file diff --git a/demo/docs/013_data_studios_deep_dive.md b/demo/docs/014_data_studios_deep_dive.md similarity index 100% rename from demo/docs/013_data_studios_deep_dive.md rename to demo/docs/014_data_studios_deep_dive.md diff --git a/demo/docs/014_automation_on_the_seqera_platform.md b/demo/docs/015_automation_on_the_seqera_platform.md similarity index 100% rename from demo/docs/014_automation_on_the_seqera_platform.md rename to demo/docs/015_automation_on_the_seqera_platform.md diff --git a/demo/docs/015_seqera_pipelines.md b/demo/docs/016_seqera_pipelines.md similarity index 95% rename from demo/docs/015_seqera_pipelines.md rename to demo/docs/016_seqera_pipelines.md index ea1a633..caa24e6 100644 --- a/demo/docs/015_seqera_pipelines.md +++ b/demo/docs/016_seqera_pipelines.md @@ -26,4 +26,4 @@ Select the **Launch Pipeline** tab to see various methods for launching the pipe ![Launch Seqera Pipeline](assets/seqera-pipelines-launch-cli.png) -If you’re more at home in the terminal, you can use the launch box to grab commands for Nextflow, [Seqera Platform CLI](014_automation_on_the_seqera_platform.md), and [nf-core/tools](https://nf-co.re/docs/nf-core-tools). +If you’re more at home in the terminal, you can use the launch box to grab commands for Nextflow, [Seqera Platform CLI](015_automation_on_the_seqera_platform.md), and [nf-core/tools](https://nf-co.re/docs/nf-core-tools). 
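+To give a feel for the three options, the copied commands look roughly like the sketch below. The revision, workspace name, and parameter file are placeholders — always copy the exact commands generated in the Launch Pipeline tab:
+
+```shell
+# Run with Nextflow directly on your own compute:
+nextflow run nf-core/rnaseq -profile test --outdir results
+
+# Launch into a Platform workspace with the Seqera Platform CLI:
+tw launch nf-core/rnaseq --workspace my-org/my-workspace --params-file params.yaml
+
+# Build a parameter set interactively with nf-core/tools:
+nf-core launch rnaseq
+```
+
+Each command assumes the corresponding tool is installed and, for `tw`, that a valid access token is configured.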
diff --git a/demo/docs/016_seqera_containers.md b/demo/docs/017_seqera_containers.md similarity index 100% rename from demo/docs/016_seqera_containers.md rename to demo/docs/017_seqera_containers.md diff --git a/demo/docs/017_walkthrough_summary.md b/demo/docs/018_walkthrough_summary.md similarity index 100% rename from demo/docs/017_walkthrough_summary.md rename to demo/docs/018_walkthrough_summary.md diff --git a/demo/docs/018_resources.md b/demo/docs/019_resources.md similarity index 100% rename from demo/docs/018_resources.md rename to demo/docs/019_resources.md diff --git a/demo/docs/assets/create-a-data-link.png b/demo/docs/assets/create-a-data-link.png deleted file mode 100644 index 52656ac..0000000 Binary files a/demo/docs/assets/create-a-data-link.png and /dev/null differ diff --git a/demo/docs/assets/create-data-studio.gif b/demo/docs/assets/create-data-studio.gif deleted file mode 100644 index ae5bde6..0000000 Binary files a/demo/docs/assets/create-data-studio.gif and /dev/null differ diff --git a/demo/docs/assets/mount-data-into-studio.gif b/demo/docs/assets/mount-data-into-studio.gif deleted file mode 100644 index 1fb29b4..0000000 Binary files a/demo/docs/assets/mount-data-into-studio.gif and /dev/null differ diff --git a/demo/docs/assets/sp-cloud-data-studios-overview.gif b/demo/docs/assets/sp-cloud-data-studios-overview.gif new file mode 100644 index 0000000..c10966d Binary files /dev/null and b/demo/docs/assets/sp-cloud-data-studios-overview.gif differ diff --git a/demo/docs/assets/sp-cloud-ds-add-start.gif b/demo/docs/assets/sp-cloud-ds-add-start.gif new file mode 100644 index 0000000..3c78869 Binary files /dev/null and b/demo/docs/assets/sp-cloud-ds-add-start.gif differ diff --git a/demo/docs/assets/sp-cloud-ds-compute-and-data.gif b/demo/docs/assets/sp-cloud-ds-compute-and-data.gif new file mode 100644 index 0000000..f97077e Binary files /dev/null and b/demo/docs/assets/sp-cloud-ds-compute-and-data.gif differ diff --git 
a/demo/docs/assets/sp-cloud-ds-custom-container.gif b/demo/docs/assets/sp-cloud-ds-custom-container.gif new file mode 100644 index 0000000..ca80a7e Binary files /dev/null and b/demo/docs/assets/sp-cloud-ds-custom-container.gif differ diff --git a/demo/docs/assets/sp-cloud-ds-use-template.gif b/demo/docs/assets/sp-cloud-ds-use-template.gif new file mode 100644 index 0000000..0a45882 Binary files /dev/null and b/demo/docs/assets/sp-cloud-ds-use-template.gif differ diff --git a/demo/docs/assets/sp-cloud-launch-form.gif b/demo/docs/assets/sp-cloud-launch-form.gif index 2e64cba..435236b 100644 Binary files a/demo/docs/assets/sp-cloud-launch-form.gif and b/demo/docs/assets/sp-cloud-launch-form.gif differ diff --git a/demo/docs/assets/sp-cloud-run-parameters.gif b/demo/docs/assets/sp-cloud-run-parameters.gif new file mode 100644 index 0000000..f59bd2c Binary files /dev/null and b/demo/docs/assets/sp-cloud-run-parameters.gif differ diff --git a/demo/docs/assets/start-studio.gif b/demo/docs/assets/start-studio.gif deleted file mode 100644 index 6587f50..0000000 Binary files a/demo/docs/assets/start-studio.gif and /dev/null differ diff --git a/demo/mkdocs.yml b/demo/mkdocs.yml index c141bb0..64f57a6 100644 --- a/demo/mkdocs.yml +++ b/demo/mkdocs.yml @@ -110,10 +110,10 @@ nav: - View run information: 008_viewing_run_information.md - Optimize pipelines: 009_optimizing_pipelines.md - Use Data Explorer: 010_using_data_explorer.md - - Tertiary analysis in Data Studios: 011_tertiary_analysis_data_studios.md + - Interactive analysis in Data Studios: 011_interactive_analysis_data_studios.md - Set up a Data Studio: 012_setting_up_data_studio.md - - Automation on the Seqera Platform: 014_automation_on_the_seqera_platform.md - - Seqera Pipelines: 015_seqera_pipelines.md - - Seqera Containers: 016_seqera_containers.md - - Walkthrough summary: 017_walkthrough_summary.md - - Resources: 018_resources.md + - Automation on the Seqera Platform: 015_automation_on_the_seqera_platform.md + - 
Seqera Pipelines: 016_seqera_pipelines.md + - Seqera Containers: 017_seqera_containers.md + - Walkthrough summary: 018_walkthrough_summary.md + - Resources: 019_resources.md \ No newline at end of file