
Rephrase rearrange #533

Merged
merged 18 commits into from
Aug 6, 2024

Conversation

Pratichhya
Collaborator

In this PR, I wanted to simplify the text for:

  • Jupyterlab
  • A couple of pages in EOplaza
  • Application Introduction

@Pratichhya Pratichhya requested a review from HansVRP May 21, 2024 07:17

github-actions bot commented May 21, 2024

PR Preview Action v1.4.7
🚀 Deployed preview to https://eu-cdse.github.io/documentation/pr-preview/pr-533/
on branch gh-pages at 2024-08-05 09:59 UTC

@Pratichhya
Collaborator Author

Pratichhya commented May 28, 2024

updated as suggested

Collaborator

@jdries jdries left a comment

think I made it halfway through, big PR :-)

One of the key goals of openEO, is to support [FAIR principles](https://www.go-fair.org/fair-principles/) and open science. The implementation in the Copernicus dataspace
makes it easier to comply with these principles, by incorporating these principles in the implementation, so that
users are automatically a step closer to generating FAIR-compliant open data. If your project has requirements related to these topics,
One of the key goals of openEO, is to support [FAIR principles](https://www.go-fair.org/fair-principles/) and open science. Directly implementing these principles in Copernicus Dataspace makes it easier to follow them. In other words, with the use of openEO, users are creating FAIR-compliant open solutions automatically. If your project has requirements related to these topics,
Collaborator

the meaning gets changed here from:
being a step closer to fair-compliant data
to:
automatically generating fair compliant data

This is not intended as pure marketing, so I prefer to keep the nuance.

this should serve as a good starting point.

These are a few examples:

- *[F2 Rich metadata](https://www.go-fair.org/fair-principles/f2-data-described-rich-metadata/)* openEO generates rich STAC metadata,
that includes processing info, complete raster metadata, band information, etcetera.
- *[R1.2 Detailed provenance](https://www.go-fair.org/fair-principles/r1-2-metadata-associated-detailed-provenance/)* In result metadata [derived-from](https://github.com/radiantearth/stac-spec/blob/master/item-spec/item-spec.md#derived_from)
links link back to all input products to provide provenance.
- *[R1.2 Detailed provenance](https://www.go-fair.org/fair-principles/r1-2-metadata-associated-detailed-provenance/)* As a result, metadata "[derived-from](https://github.com/radiantearth/stac-spec/blob/master/item-spec/item-spec.md#derived_from)" links trace back to all input products, ensuring provenance.
Collaborator

I was referring here to 'result metadata', so the metadata associated with a result generated by openEO.

A rendering of a very simple process graph that simply extracts Sentinel-2 data is shown below. While the example is simple,
the underlying steps needed to generate an analysis ready datacube from raw Sentinel-2 L2A products are already quite complex,
and most equivalent code will be a lot harder to understand and analyze.
Below is a straightforward process graph illustrating the extraction of Sentinel-2 data. Although this example is simple, the steps required to generate an analysis-ready datacube from raw Sentinel-2 L2A products are considerably complex. As a result, most comparable codes would be considerably more complex to understand and analyze.
Collaborator

language: 'considerably complex' used twice

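The passage under review describes a simple process graph that extracts Sentinel-2 data; a minimal openEO Python sketch of such an extraction (the extents, dates and bands are illustrative, not taken from the documentation) is:

```python
import openeo

# Connect to the Copernicus Data Space Ecosystem openEO backend and authenticate
connection = openeo.connect("openeo.dataspace.copernicus.eu").authenticate_oidc()

# Load an analysis-ready Sentinel-2 L2A datacube for an illustrative area and period
cube = connection.load_collection(
    "SENTINEL2_L2A",
    spatial_extent={"west": 5.05, "south": 51.21, "east": 5.10, "north": 51.25},
    temporal_extent=["2023-06-01", "2023-07-01"],
    bands=["B02", "B03", "B04"],
)

# The client only builds the process graph; the backend performs the complex
# steps needed to serve an analysis-ready datacube from raw L2A products
cube.download("sentinel2_sample.nc")
```
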
Applications.qmd Outdated

![](Applications/_images/SearchResult.png)

From the browser it is possible to acces the [Data workspace](Applications/DataWorkspace.qmd), which is a tool to manage and and order satellite products. These products can then be further processed and downloaded for various purposes.
Collaborator

I'm not sure if data workspace should be mentioned right after browser.
This looks like a very niche application, there's a list of processors which uses a bunch of acronyms that are hard to understand, and have no documentation.
Mention this app more towards the bottom.

Applications.qmd Outdated

From the browser it is possible to acces the [Data workspace](Applications/DataWorkspace.qmd), which is a tool to manage and and order satellite products. These products can then be further processed and downloaded for various purposes.

Additionally, the [Catalogue CSV](Applications/Catalogue-csv.qmd) documentation provides access to additional information on Sentinel product lists in CSV format.
Collaborator

very niche application, move to bottom


Within the Copernicus Dataspace ecosystem, a free of charge jupyterkab service is offered. It allows users to access and analyze Earth observation data effectively.
Collaborator

typo: jupyterkab


![](_images/JupyterLab_welcome.png)
Within JupyterLab, python notebooks can be used as an interactive programming environment for those who want to prototype their EO data processing quickly. Each notebook is set-up in order to access the EO Data repository. Additionally, example notebooks are available to help users get started with their analysis.
Collaborator

the notebooks are not really set up to access the eo data repository easily
in fact, access from these notebooks is the same as from your own laptop, because data is not mounted, nor is there easy s3 access or anything like that.
I already logged an issue about it, but it doesn't get solved for some reason, so perhaps we should adjust docs to avoid overselling things.

@Pratichhya
Collaborator Author

think I made it halfway through, big PR :-)

haha indeed, it was unintentional 😅. Thank you for the feedback. I will update them as suggested 😄

@Pratichhya
Collaborator Author

updated as suggested

makes it easier to comply with these principles, by incorporating these principles in the implementation, so that
users are automatically a step closer to generating FAIR-compliant open data. If your project has requirements related to these topics,
this should serve as a good starting point.
One of the key goals of openEO, is to support [FAIR principles](https://www.go-fair.org/fair-principles/) and open science. Directly implementing these principles in Copernicus Dataspace makes it easier to follow them. In other words, with the use of openEO, users are creating FAIR-compliant open solutions automatically. If a project has requirements related to these topics, this should serve as a good starting point.
Collaborator

*These principles are seamlessly integrated into the Copernicus Dataspace Ecosystem, making it intuitive to adhere to them.

Consequently, using openEO allows users to automatically develop FAIR-compliant open solutions. For projects with requirements in these areas, openEO offers an excellent starting point.

Applications.qmd Outdated
This section provides an overview of the EO Applications available from Copernicus Data Space Ecosystem.
Copernicus Data Space Ecosystem provides a wide range of applications that can be used to access, process, and visualize Copernicus data. These applications are designed for users of different skill level to be used for various applications.

For new Copernicus dataspace ecosystem users we advise to explore the [Copernicus Data Space Browser](Applications/Browser.qmd) as starting point. In this platform users can explore various datasets, employing tools for visualization, comparison, and downloading with ease.
Collaborator

users, we

Applications.qmd Outdated

![](Applications/_images/PlazaOverview.png)

Furthermore, the ecosystem encompasses a [QGIS Plugin](Applications/QGIS.qmd) designed to facilitate access and processing of EO data. The documentation for the QGIS Plugin offers an overview of its functionality and usage within the QGIS platform.
Collaborator

isn't qgis mainly used for visualisation?

Collaborator Author

indeed

Collaborator

so why do we currently describe it as a processing tool?

Collaborator Author

@Pratichhya Pratichhya Jun 14, 2024

Qgis is a processing tool as well, along with Visualisation. However, the QGIS Plugin provided by Sentinel Hub is indeed for visualisation. Thus, I updated the content of the sentence

Applications.qmd Outdated

Furthermore, the ecosystem encompasses a [QGIS Plugin](Applications/QGIS.qmd) designed to facilitate access and processing of EO data. The documentation for the QGIS Plugin offers an overview of its functionality and usage within the QGIS platform.

From the [Copernicus Data Space Browser](Applications/Browser.qmd) it is possible to acces the [Data workspace](Applications/DataWorkspace.qmd), which is a tool to manage and and order satellite products. These products can then be further processed and downloaded for various purposes.
Collaborator

to manage and and (double and)

Applications.qmd Outdated

Additionally, the [Catalogue CSV](Applications/Catalogue-csv.qmd) documentation provides access to additional information on Sentinel product lists in CSV format.

Among the array of applications, there's also the Copernicus Dashboard. Through this public platform, to monitor activities within the ecosystem, keeping track of ongoing updates.
Collaborator

consider the sentence; Through this public platform, to monitor activities within the ecosystem, keeping track of ongoing updates.

Try to improve the grammar and textual flow


## Overview

A captivating feature of the marketplace is the growing and diverse catalogue of EO services from different providers. To enhance user experience when searching a service, a text filter bar is available at the top of the page in addition to attribute filtering.
The marketplace showcase a diverse catalogue of EO services from various providers. These services are classified based on their maturity levels, which indicate the quality and reliability of the service.
Collaborator

The marketplace showcases



## What is a service?

openEO Algorithm Plaza offers a wide range of services in Earth Observation. These services support algorithms ranging from simple computations like the Normalized Difference Vegetation Index (NDVI) to more complex algorithms that utilize machine learning and multiple parameters.
openEO Algorithm Plaza offers a wide range of workflows in Earth Observation termed as services. These services can ranging from simple computations like the Normalized Difference Vegetation Index (NDVI) to more complex algorithms that utilize machine learning and multiple parameters.
Collaborator

These services can range...

Collaborator

I would remove: that utilize machine learning and multiple parameters.


In addition to providing existing services, the marketplace also supports users in showcasing their algorithms as services in its catalogue. To advertise your service on the marketplace, the algorithm must be built using openEO. It's important to consider your target audience, especially if reaching a non-scientific audience; you may want to hold back on hard-to-interpret options.
In addition to providing access to available services, the marketplace also supports users in showcasing their algorithms. To advertise an algorithms on the marketplace, they must be built using openEO. Visit the openEO [UDP](https://open-eo.github.io/openeo-python-client/udp.html){target="_blank"} for more information on openEO user-defined-processes.
Collaborator

reread this passage;

provide access to the available services

To advertise an algorithm

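Since the quoted passage points to openEO user-defined processes (UDPs), here is a minimal sketch of storing a small workflow as a UDP with the openEO Python client; the process ID, extent and dates are illustrative:

```python
import openeo

connection = openeo.connect("openeo.dataspace.copernicus.eu").authenticate_oidc()

# A small example workflow: NDVI over an illustrative area and period
cube = connection.load_collection(
    "SENTINEL2_L2A",
    spatial_extent={"west": 5.05, "south": 51.21, "east": 5.10, "north": 51.25},
    temporal_extent=["2023-06-01", "2023-09-01"],
    bands=["B04", "B08"],
)
ndvi = cube.ndvi()

# Store the workflow as a user-defined process; a UDP like this is what can be
# exposed as a service on the openEO Algorithm Plaza
connection.save_user_defined_process(
    user_defined_process_id="example_ndvi",  # hypothetical process ID
    process_graph=ndvi,
    public=True,
)
```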

Once your algorithm is exposed as a service, users can 'invoke' it with a given set of parameters.
Once an algorithm is exposed as a service, other users readily access and use it for their own workflows. To enhance reusability, it's essential to consider the audience when preparing materials. Especially for non-scientific users, it's advisable to avoid complex options that may be difficult to understand.
Collaborator

I would leave: "Especially for non-scientific users, it's advisable to avoid complex options that may be difficult to understand."" out

* Stability
* Scalability
* Documentation
* Validation of the results: That ensures the accuracy and reliability of the outcomes.
Collaborator

validation of the result; ensures the...

@@ -50,46 +54,48 @@ The table below provides an overview of the different maturity levels that are a

For more information on dealing with the services, please refer to the [Manage your services guide](PlazaDetails/ManageService.qmd).

Once familiar with the concepts used in the openEO Algorithm Plaza, individuals can begin to explore the platform.
Collaborator

can start to explore


## Interested in using or publishing services?

As a user, you have the option to either use an existing service or publish your own algorithms. Here, we have provided a list of important steps to consider when using or sharing a service. It's worth noting that this list is not exhaustive and additional steps may be required based on your specific needs.
Users can either use an existing service or publish their algorithms. Here, a list of important steps to consider when using or sharing a service is provided. It's worth noting that this list is not specific, and additional steps may be required based on specific needs.
Collaborator

Here? I do not see a link? Maybe use below instead.

Below a non-extensive list is provided of considerations to take when sharing a service.

By writing non-extensive you can leave out the second sentence


### Managing your account

You can explore the available services and features within the openEO Algorithm Plaza. However, to use them, you must be logged in. Nevertheless, if you still need to register, you can follow the registration process mentioned [here](../Registration.qmd).
Executing the available services and features within the openEO Algorithm Plaza requires logging in. If registration is still needed, the registration process outlined [here](../Registration.qmd) can be followed.
Collaborator

@HansVRP HansVRP Jun 12, 2024

Plaza, requires

Collaborator

Also reconsider the second sentence, the grammar seems to be off


###### Step 3: Check your Credits

Using any openEO processes, including those offered as services in this marketplace, consumes a certain amount of credits. Credits are crucial in executing openEO processes, serving as the main currency for accessing services and processing resources. Notably, these credits are shared among organisations. Whenever a service or a supported processing platform is executed, credits from the shared pool cover the resource consumption.

This marketplace simplifies credit management, allowing users to monitor their account's credits easily. You can check your openEO credit under the [`Billing`](https://marketplace-portal.dataspace.copernicus.eu/billing){target="_blank"} section. Moreover, every user is provided with 4000 credits each month, with which they can execute multiple services.
This marketplace simplifies credit management, allowing users to monitor their account's credits easily. The credit can be monitored credit under the [`Billing`](https://marketplace-portal.dataspace.copernicus.eu/billing){target="_blank"} section. Moreover, every user is provided with 4000 credits each month, with which they can execute multiple services.
Collaborator

the credit can be monitored credit; please revise this sentence

Collaborator

maybe write 4000 free credits? And are these openEO credits or cdse credits?

How does it work with sentinel hub?

Collaborator Author

These are openEO credits. For Sentinel Hub and CF there is a separate set of free packages as shown here https://documentation.dataspace.copernicus.eu/Quotas.html



![](_images/billing2.png)

Credits are deducted based on the chosen services and spatial extent.The amount will vary depending on the processing complexity and time required for each type of service. Detailed examples of some well-known services and how they fit into the 4000 free credits can further be found [here](PlazaDetails/Strength.qmd).

If you think your available credits are insufficient or you run out of credits, you can [create a ticket](https://helpcenter.dataspace.copernicus.eu/hc/en-gb){target="_blank"} with your username, email for further support and guidance.
If available credits are insufficient or are depleted, users can [create a ticket](https://helpcenter.dataspace.copernicus.eu/hc/en-gb){target="_blank"} with their username and registered email for further support and guidance.
Collaborator

@HansVRP HansVRP Jun 12, 2024

if the available credits are insufficient or depleted


When you click on any of these services, you will be redirected to the service details page. Here, you can find information about the service such as a general description and instructions on how to execute the service. For more information on how to execute a service, please refer to the [Execute a service](PlazaDetails/ExecuteService.qmd) page.
Clicking on any of these services redirects users to the service details page. Here, information about the service, including a general description and execution instructions, can be found. For more information on how to execute a service, please refer to the [Execute a service](PlazaDetails/ExecuteService.qmd) page.
Collaborator

services, redirects

@@ -99,14 +105,14 @@ Every user have the choice to onboard their services as an individual or as part

##### Publish your algorithm

To publish your service on marketplace, the algorithm must be built using one of the supported processing platforms(currently we only support openEO as orchestrator). This ensures that users can take full advantage of plaza's features, such as accounting and reporting, as well as the ability to execute services directly through the web editor.
To publish a service on marketplace, the algorithm must be built using one of the supported processing platforms(currently we only support openEO as orchestrator). This ensures that users can take full advantage of plaza's features, such as accounting and reporting, as well as the ability to execute services directly through the web editor.
Collaborator

the algorithm must be built using openEO.

Experiencing issues with executing a service or publishing it onto the openEO Algorithm Plaza? Feel free to contact our support team by reaching out to us by [creating a ticket](https://helpcenter.dataspace.copernicus.eu/hc/en-gb){target="_blank"}.
Collaborator

Please contact the support team for further assistance [creating a ticket]

@@ -3,72 +3,47 @@ title: "JupyterLab"
image: _images/jupyter_login.png
---

JupyterLab is an advanced interactive development environment (IDE) that offers a flexible and feature-rich interface for working with notebooks, code, and data. It allows users to organize their workspaces using a flexible layout system with panels, views, and tabs for various activities. Furthermore, it supports various document formats, including notebooks, text files, code files, and markdown files. With its modular and extensible architecture, JupyterLab enables customization through extensions, additional functionalities, and integration with external tools.
It enhances the user experience with features like a file browser, command palette, debugger, and console, making it a versatile tool for interactive data exploration, analysis, and scientific computing.
JupyterLab is a user-friendly tool for working with data and code. User can customize it by adding extra features and connecting it with other software. This makes organizing work, rectifying mistakes, and interacting with data easier. For more detailed information, please visit the [JupyterLab documentation](https://jupyterlab.readthedocs.io/en/stable/){target="_blank"}.
Collaborator

leave out rectifying mistakes

The size of the instance is determined by number of resources available to the notebook kernels run by the user - CPU cores and memory.
All flavors are suitable for performing typical tasks and will be capable of running all samples provided in /samples folder.
To ensure fair use of resources by the CDSE users, it is recommended to start with the Small flavor and switch to a bigger only when you experience issues with kernel crashing due to the lack of available memory.
Once logged in to JupyterLab, users are offered a choice of 3 flavors of Jupyter instances: Small, Medium and Large.
Collaborator

into

To ensure fair use of resources by the CDSE users, it is recommended to start with the Small flavor and switch to a bigger only when you experience issues with kernel crashing due to the lack of available memory.
Once logged in to JupyterLab, users are offered a choice of 3 flavors of Jupyter instances: Small, Medium and Large.
The size of the instance is based on how many resources, like CPU cores and memory, are allocated to the notebook kernels.
All flavours are suitable for performing simple tasks and can run all samples provided in the samples folder. For fair resource usage, it is suggested to start with the "Small" flavour and switch to a larger one if kernel crashes is encounted caused by insufficient memory.
Collaborator

is there also an implication for credit cost when running operations? Do the large ones consume more?

Collaborator Author

No, they don't. These are still free services. But do you suggest mentioning that, too?

Collaborator

no, I would not add additional information. I was simply wondering if there were any real implications


![](_images/Flavors_captcha.png)

## JupyterLab User Interface

Once you have successfully signed in, you will be presented with a launcher that offers various Python environments to work in, including Python 3, Geo science, OpenEO, and Sentinel Hub. Each environment is equipped with specific Python packages tailored to different requirements. You can choose to run your code either in a notebook or a console, depending on your preference. Additionally, the launcher provides options to create text files, markdown files, or Python files, allowing you to work with different types of documents as needed.
After signing in, users will see a launcher with Python environments such as Python 3, Geo Science, OpenEO, and Sentinel Hub. Each environment is equipped with specific Python packages tailored to various requirements. Users can run their code in a notebook or a console, depending on their preference. Additionally, options are provided to create text files, markdown files, or Python files, allowing users to work with different types of documents as needed.
Collaborator

can we link to a place in documentation which helps users pick between the environments?

Collaborator Author

These kernels are created based on a couple of APIs; however, the different kernels can be used for an API or none of these. So, I'm not sure where I can link these. However, a page/section that describes these kernels or env. can be included within this.

@KanerLev @Slegersj

Collaborator

@HansVRP HansVRP left a comment

partly finished


##### Step 5: Select service visibility

You can choose to make your service public or private. If you select public, your service will be visible to all users in the openEO algorithm plaza. If you select private, only you and your organization members will be able to see your service.
The service can be designated as either public or private. If set to public, it will be visible to all users in the openEO Algorithm Plaza. On the other hand, only the developer and organization members have access to the service if set to private.
Collaborator

I would use however instead of on the other hand. Or nevertheless


##### Step 4: Add labels

You can add multiple labels to your service to help users find your service in the openEO algorithm plaza. The labels can be used to filter services within the marketplace and also give an idea of its category.
Multiple labels can be added for a service to help users find the service within the platform. These labels serve as filters within the marketplace and indicate the service's category.
Collaborator

service its category

* Results: You can briefly describe your service's results and mention the output format of the result.
* Cost Estimation: You can give a user an idea of the resource consumption and time required to run your service for a given input.
* References: You can provide a list of references to publications, websites, or other resources relevant to your service. Provide details on resource consumption, processing time, and output format to help users effectively understand and utilize your service.
* Parameters: A list of all the parameters that should be fed to the service to execute it. Each parameter's name, type, description, and default value are specified.
Collaborator

that should be provided instead of that should be fed

* Cost Estimation: You can give a user an idea of the resource consumption and time required to run your service for a given input.
* References: You can provide a list of references to publications, websites, or other resources relevant to your service. Provide details on resource consumption, processing time, and output format to help users effectively understand and utilize your service.
* Parameters: A list of all the parameters that should be fed to the service to execute it. Each parameter's name, type, description, and default value are specified.
* Usage example(Python code): A Python code example to demonstrate how to use the service. The example should include the service's input parameters and the expected output.
Collaborator

its

* References: You can provide a list of references to publications, websites, or other resources relevant to your service. Provide details on resource consumption, processing time, and output format to help users effectively understand and utilize your service.
* Parameters: A list of all the parameters that should be fed to the service to execute it. Each parameter's name, type, description, and default value are specified.
* Usage example(Python code): A Python code example to demonstrate how to use the service. The example should include the service's input parameters and the expected output.
* Results: Service's results and it is supported output format.
Collaborator

avoid 's in written text

* Update button, disabled by default

## Invite Team members

A key aspect of this platform is the capability to invite colleagues, friends, or co-workers to join a shared organisation.
You can invite new members to your organisation by clicking on the `INVITE MEMBER` button available within the `Team` sub-menu of your profile. This will prompt a form where you will have to provide some more information for adding a new user to your organization. This block contains the following fields:
A key feature of this platform is its ability to invite colleagues, friends, or co-workers to join a shared organization. New members can be invited by clicking on the `INVITE MEMBER` button in the `Team` sub-menu of the profile. This action will open a form requesting additional information to add a new user to the organization. This block contains the following fields:
Collaborator

maybe it's better to write: to invite contributors


## Provide your organisation details

When you register in the Copernicus Data Space Ecosystem, a personal organisation is already created for you. On your profile page within the openEO Algorithm Plaza, you'll find your organisation shown under the linked organisation section. Additionally, you can access the Organisation page by selecting the "Organisation" option in the sub-navigation. Here, you can both view and edit your organisation's details, which may include:
The organisation name is displayed on the profile page within the openEO Algorithm Plaza under the linked organisation section. Additionally, the "Organization" option in the sub-navigation provides access to the Organization page. Here, users can view and edit the organisation's details, which includes:
Collaborator

avoid 's in written text


Organisations are core elements of the openEO algorithm plaza Portal, as they are the entities that relate users, accesses, services, data, etc. One can think of an organisation as a company in most cases, although an individual can be a one-man organisation. This organisational concept allows users to manage shared services and allocate credits accordingly. Users can design their organisation to specific needs, whether for project collaborators, a particular team, or at the organisational level.
The organisations are core elements of the openEO algorithm plaza, as they are the entities that relate users, services, openEO credits, and more. While organisations can encompass multiple users, an individual can be viewed as an organisation. This organisational concept allows users to manage shared services and distribute credits accordingly. Furthermore, the organisations can be tailored to suit specific requirements, whether for project collaborators, a particular team, or at the organisational level.
Collaborator

'Organizations' form a cornerstone of the openEO algorithm plaza.

additional comment, I believe I have also read openEO Algorithm Plaza.

Make sure that you are consistent

Collaborator

quite a vague sentence: the entities that relate users, services, openEO credits, and more.

what do you actually want to say?

@@ -4,26 +4,25 @@ aliases:
- "/Applications/PlazaDetails/ManageOrg.html"
---

<div style="text-align: justify">
Assuming that the user has registered in the Copernicus Data Space Ecosystem and has access to the openEO Algorithm Plaza. Upon checking the [profile](https://marketplace-portal.dataspace.copernicus.eu/profile){target='_blank'}, they will notice that a personal organisation is created.
Collaborator

Assuming that.... a personal organization will be created under [profile]

To execute a service from the openEO algorithm plaza through one of the OpenEO client libraries, it is important to use the *datacube_from_process* function. It accepts the ID and namespace of the service.
Both are made available in the service description on the openEO algorithm plaza.
The full documentation on using the function is available on the official [openEO documentation](https://open-eo.github.io/openeo-python-client/datacube_construction.html#datacube-from-process){target="_blank"}.
If any issues are encountered executing a service, please feel free to raise questions in the [forum](https://forum.dataspace.copernicus.eu/){target="_blank"} directly.
Collaborator

are encountered when executing a service

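A minimal sketch of the invocation described above with the openEO Python client; the service ID, namespace and parameters are placeholders that would be copied from the service description on the openEO Algorithm Plaza:

```python
import openeo

connection = openeo.connect("openeo.dataspace.copernicus.eu").authenticate_oidc()

# Build a datacube from a published service (user-defined process)
cube = connection.datacube_from_process(
    "example_service",            # placeholder service ID
    namespace="u:example-user",   # placeholder namespace from the service page
    spatial_extent={"west": 5.05, "south": 51.21, "east": 5.10, "north": 51.25},
    temporal_extent=["2023-06-01", "2023-09-01"],
)

# Run the service as a batch job and download the result
cube.execute_batch("service_result.nc")
```
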
users are automatically a step closer to generating FAIR-compliant open data. If your project has requirements related to these topics,
this should serve as a good starting point.
One of the goals of openEO is to support [FAIR principles](https://www.go-fair.org/fair-principles/) and open science. These principles are seamlessly integrated into the Copernicus Data Space Ecosystem, making it intuitive to adhere to them.
Consequently, using openEO allows users to develop FAIR-compliant open solutions automatically. For projects with requirements in these areas, openEO offers an excellent starting point.
Collaborator

not sure if "For projects with requirements in these areas, openEO offers an excellent starting point." is even required

This also has consequences for replicating work: the same process graph can be executed on different backends, or evaluated
against different datasets. This allows to evaluate whether an algorithm is broadly applicable, or only works in a very specific
environment.
This also impacts the replication of work: the same process graph can be executed on different backends or evaluated against other datasets. This capability allows researchers to determine whether an algorithm is broadly applicable or only effective in a specific environment.
Collaborator

they cannot necessarily be used on different backends if the input is a specific collection.

So maybe it's better to write that they can be executed on different areas or time periods?

@@ -4,14 +4,13 @@ aliases:
- /Largescaleprocessing.html
---

Processing of larger areas extending to a global scale is one of the more challenging tasks in earth observation,
but certainly one that this platform aims to tackle. In this page we describe one of the best practice based on the example of [processing a croptype map for all 27 countries in the European Union](https://github.com/openEOPlatform/openeo-classification){target="_blank"}. We do recommend you to reaching out on the forum or helpdesk regarding your particular case, as workflows can vary, and adequate processing resources may require some advanced planning.
Processing larger areas, especially globally, presents significant challenges in earth observation. Nonetheless, this platform aims to address these challenges. In this context, we highlight one of the best practices by showcasing the example of [processing a croptype map for all 27 countries in the European Union](https://github.com/openEOPlatform/openeo-classification){target="_blank"}. We encourage users to contact the forum with their issues connected to a specific case.
Collaborator

future discussion with Jeroen; is this example still relevant or deprecated


The basic strategy for processing large areas is to split them up into smaller areas, usually according to a regular tile grid. Splitting reduces the size of the area that needs to be processed by one batch job and avoids running into all kinds of limitations. For instance, when processing a specific projection, you anyway have to stay within the bounds of that projection. Also, the output file size of a job often becomes impractical when working over huge areas. Or you will hit bottlenecks in the backend implementation that does not occur for normally sized jobs. Also, when a smaller job fails or requires reprocessing, the cost will be smaller.
The basic strategy for processing large areas involves splitting them into smaller sections, usually according to a regular tile grid. This division reduces the area size that needs to be processed in one batch job and avoids various limitations. For example, when processing within a specific projection, it's necessary to stay within the bounds of that projection. Additionally, the output file size of a job can become impractical when dealing with large areas. Bottlenecks that do not occur with smaller jobs may also arise in the backend implementation. Also, when a smaller job fails or requires reprocessing, the cost will be smaller.
Collaborator

@HansVRP HansVRP Jul 26, 2024

  1. avoid 's.

  2. It might be good to double check this section with Victor.

From what I understand it is actually beneficial to process as large an area as possible at once (supported by the amount of memory you have).

Since then spark really shines and lowers the credit cost.

3)"Bottlenecks that do not occur with smaller jobs may also arise in the backend implementation. Also, when a smaller job fails or requires reprocessing, the cost will be smaller."

these sentences are vague and confusing


Having job parameters in a file is also useful for debugging afterwards. Determining parameters at runtime means you don't
have absolute certainty over the value of a specific argument, as there may be bugs in your code.
Having job parameters in a file is also beneficial for debugging afterwards. If parameters are determined at runtime, there is no absolute certainty over the value of a specific argument due to potential bugs in the code.
Collaborator

this is a vague sentence:

there is no absolute certainty over the value of a specific argument due to potential bugs in the code.

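A tiny sketch of the practice described above: write the per-job parameters to a file before submission so the exact inputs of every batch job can be inspected later (the file name and fields are illustrative):

```python
import json

# Illustrative per-tile job parameters, frozen to disk before job submission
job_params = {
    "tile_id": "20km_E501_N312",
    "spatial_extent": {"west": 5.0, "south": 51.0, "east": 5.2, "north": 51.2},
    "temporal_extent": ["2021-01-01", "2021-12-31"],
}
with open("job_20km_E501_N312.json", "w") as f:
    json.dump(job_params, f, indent=2)

# At submission (or debugging) time, read the file back instead of recomputing
with open("job_20km_E501_N312.json") as f:
    params = json.load(f)
```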

## Prepare tiling grid

The tiling grid choice depends on your preferred projection system, which, in turn, is determined by your area of interest. For Europe, you can use the EPSG:3035 projection, while for global processing, considering different projections in accordance with UTM zones may be preferable.
The choice of the tiling grid depends on the user's preferred projection system, which is determined by the area of interest. For Europe, users can opt for the EPSG:3035 projection. In contrast, using different projections per UTM zone may be preferable for global processing.
Collaborator

also here, it would be good to discuss this section with Victor. His investigation revealed a lot of insightful information


The size of tiles in your grid is also important and often ranges from 20km to 100km. For relatively light workflows, a 100km grid can work well, while for more demanding cases, a 20km grid is better. In our example, we chose to work with 20km tiles because the workflow was quite demanding. A smaller tile size can also result in less unneeded processing when your target area has an irregular shape, like most countries and continents.
The size of tiles in the grid is also crucial and typically ranges from 20km to 100km. A 100km grid can suffice for relatively light workflows, whereas a 20km grid is more suitable for demanding cases. We opted for 20km tiles in our example because the workflow was particularly demanding. A smaller tile size can also minimize unnecessary processing, especially in irregular target areas like most countries and continents.
Collaborator

especially for

@@ -48,7 +46,7 @@ UTM 100km | LAEA 100km
![](_images/UTM100.png)| ![](_images/LAEA100.png)


A grid can be masked based on the countries we want to load, the following script shows an example:
A grid can be masked based on the countries the user want to load; the following script shows an example:
Collaborator

should this be in the CDSE documentation or on a user example?

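The script itself is not quoted in this thread; a minimal geopandas sketch of such country-based masking (file names and country list are purely illustrative, not the script from the documentation) could look like:

```python
import geopandas as gpd

# Illustrative inputs: the 20 km tiling grid and a layer of country borders
grid = gpd.read_file("grid_20km.gpkg")
countries = gpd.read_file("countries.gpkg")

# Keep only the grid cells that intersect the countries of interest
selection = countries[countries["NAME"].isin(["Belgium", "Netherlands"])]
masked_grid = grid[grid.intersects(selection.unary_union)]

masked_grid.to_file("grid_20km_masked.gpkg")
```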

- The average runtime was 30 minutes, which means that it would take ~15 days of continuous processing with 15 parallel jobs.
- The average cost was below 100 credits, so we would be able to process with a budget of 1100000 credits.
- The average runtime per job was 30 minutes, indicating it would require approximately 15 days of continuous processing with 15 parallel jobs.
Collaborator

Is this math correct for the standard 2 concurrent jobs?

I think it would result in +200 days


A common bottleneck to parallelization is memory consumption, and it can be useful to know the maximum memory allocation on a single machine in your backend of choice. For instance, in a cloud environment with 16GB per machine and 4 CPUs, using slightly less than 4GB per worker is efficient as you can fit 4 parallel workers on a single VM while requiring 6GB would fit only 2 workers and leave about 4GB unused.
A common challenge in parallelization is managing memory consumption, especially understanding the maximum memory allocation per machine in the chosen backend environment. For example, in a cloud setup where each machine has 16GB RAM and 4 CPUs, efficient usage would involve allocating less than 4GB per worker. This configuration allows four parallel workers to be fitted on a single VM. Conversely, requiring 6GB per worker would accommodate only two workers per VM, leaving approximately 4GB unused.
Collaborator

Here we would truly benefit from clearer information for users. Best to align with Jeroen Dries and Victor

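A back-of-the-envelope version of the 16 GB / 4 CPU example above (numbers are illustrative, not a statement about the actual backend configuration):

```python
# How many parallel workers fit on one VM, limited by both CPUs and memory
ram_gb, cpus = 16, 4

def workers_per_vm(mem_per_worker_gb: float) -> int:
    return min(cpus, int(ram_gb // mem_per_worker_gb))

print(workers_per_vm(3.8))  # 4 workers: all CPUs busy, nearly all RAM used
print(workers_per_vm(6.0))  # 2 workers: ~4 GB of RAM left unused
```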

![Tracking jobs by CSV](_images/trackingcsv.png)

## Errors during production

It's common for some tasks to run into issues during production, which is okay if it doesn't happen too frequently. If a task fails, take a quick look at the error logs. If there's no obvious reason, a simple retry might do the trick. Sometimes, you might need to allocate more memory.
It's common for some tasks to encounter issues during production, which is generally acceptable if it happens occasionally. If a task fails, it is recommended to check the error logs. A simple retry might solve the problem if there's no obvious reason for the failure. Sometimes, users may need to allocate more memory.
Collaborator

It might be dangerous to advise adding more memory. Maybe it is better to write that if the job keeps failing, users should contact us through the forum?

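For the log inspection mentioned above, a minimal sketch with the openEO Python client (the job ID is a placeholder):

```python
import openeo

connection = openeo.connect("openeo.dataspace.copernicus.eu").authenticate_oidc()

# Reconnect to a failed batch job by its (placeholder) ID and inspect its logs
job = connection.job("j-0123456789abcdef")
print(job.status())
for entry in job.logs():
    if entry.get("level") in ("warning", "error"):
        print(entry.get("level"), entry.get("message"))
```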

We also see a limited number of cases where issues in the underlying product archive cause failures or artifacts. These are harder to resolve and may require interaction with the backend.
Additionally, there are occasional instances where issues in the underlying product archive cause failures or artifacts. These situations can be more challenging to resolve and may require interaction with the backend support team.
Collaborator

then this part is no longer required

Applications.qmd Outdated

![](Applications/_images/JupyterLab_welcome.png)

Moreover, these jupyterlab environments support the use of openEO to access and process the data in an interactive manner. More information on the openEO API can be found [here](APIs/openEO/openEO.qmd). Nevertheless, for users who are intersted in using openEO API in a GUI environment, we recommend exploring the [openEO Web Editor](Applications/WebEditor.qmd) documentation. This tool allows users to interact with the openEO API in a more visual manner.
Collaborator

'intersted' missed an e there


---

Below is the comprehensive list of applications available within the Copernicus Data Space Ecosystem:
Collaborator

Very good rewrite!


The [openEO Algorithm Plaza](https://marketplace-portal.dataspace.copernicus.eu/){target="_blank"} is a marketplace within Copernicus Data Space Ecosystem that allows user to discover and share different Earth Observation(EO) algorithms expressed as openEO process graphs. It's a one-stop-shop where they can either share their algorithm or use existing ones as a service.
Therefore, the plaza enhances algorithm reusability, which is a cornerstone of the FAIR principles. Assuming familiarity with EO data and [openEO](../APIs/openEO/openEO.qmd) concepts, this documentation section is a beginner's guide for sharing algorithms.
Collaborator

beginners



## What is a service?

openEO Algorithm Plaza offers a wide range of services in Earth Observation. These services support algorithms ranging from simple computations like the Normalized Difference Vegetation Index (NDVI) to more complex algorithms that utilize machine learning and multiple parameters.
The openEO Algorithm Plaza offers a wide range of EO workflows as services. These services can range from simple computations like the Normalized Difference Vegetation Index (NDVI) to more complex algorithms.
Collaborator

simple computations, such as the .., to more

| _**Operational**_ | The service has been shown to be fit for larger scale production and integration in operational systems. Rules for estimating resource usage are available, or a unit cost is established. (€ per hectare, € per request, etc.) |
| _**Prototype**_ | Service is provided ‘as-is’, with a short description and possibly a reference to what is implemented. |
| _**Incubating**_ | The service is documented with example requests (sets of parameters), the corresponding output, and the resources required to generate that output. |
| _**Verified**_ | The service is labelled verified based on its software readiness and verification that validation reports are not required. |
Collaborator

still very confusing that you intermix verified and validated.

I do not understand how a service would go from incubating to Verified.

Please be precise.

| _**Prototype**_ | Service is provided ‘as-is’, with a short description and possibly a reference to what is implemented. |
| _**Incubating**_ | The service is documented with example requests (sets of parameters), the corresponding output, and the resources required to generate that output. |
| _**Verified**_ | The service is labelled verified based on its software readiness and verification that validation reports are not required. |
| _**Validated**_ | The service is labelled validated when the validation reports and software readiness are verified by the openEO Algorithm Plaza team. |
Collaborator

is there any information on what software readiness means?


This marketplace simplifies credit management, allowing users to monitor their account's credits easily. You can check your openEO credit under the [`Billing`](https://marketplace-portal.dataspace.copernicus.eu/billing){target="_blank"} section. Moreover, every user is provided with 4000 credits each month, with which they can execute multiple services.
This marketplace simplifies credit management, allowing users to monitor their accounts. The credit can be monitored under the [`Billing`](https://marketplace-portal.dataspace.copernicus.eu/billing){target="_blank"} section. Moreover, every user is provided with 4000 free openEO credits each month to execute multiple services.
Collaborator

the credits can be monitored



![](_images/billing2.png)

Credits are deducted based on the chosen services and spatial extent.The amount will vary depending on the processing complexity and time required for each type of service. Detailed examples of some well-known services and how they fit into the 4000 free credits can further be found [here](PlazaDetails/Strength.qmd).
Credits are deducted based on the chosen services. The amount will vary depending on the processing complexity and CPU time required for each type of service. Detailed examples of the credit usage of well-known services can be found [here](PlazaDetails/Strength.qmd).
Collaborator

It is not necessarily only cpu time, even when the instance is doing nothing you pay. Maybe good to verify with Jeroen D


Managing your services in this marketplace is a simple process. You can edit or delete services, as well as hide or show them in the plaza's catalogue. For detailed instructions on how to manage a service, please refer to the [manage your service](PlazaDetails/ManageService.qmd) page.
Managing services in this marketplace is a simple process. Users can edit or delete services and hide or show them in the plaza's catalogue. Please refer to the [manage your service](PlazaDetails/ManageService.qmd) page for detailed instructions on managing a service.
Collaborator

's is for spoken language. In written language, write it full-out


::: {.callout-important}

There is a limitation when executing a service (User Defined Processes) in Copernicus Dataspace Ecosystem, that it only works collection from 2017 or so onwards. Moreover, it is recommended to test it for multiple consecutive years.
A limitation when executing a service in the Copernicus Data Space Ecosystem is that it only works collection from 2017 or so onwards. Therefore, it is recommended to test it for multiple consecutive years
Collaborator

reconsider this part of the sentence; the grammar is incorrect: it only works collection from 2017 or so onwards.

Collaborator

is the root cause that we only have collections from 2017 onwards? Maybe it's better to be specific and mention that instead


:::

## Online user interface

openEO provide an online user interface where users can execute services directly in a [web-editor](https://openeo.dataspace.copernicus.eu/){target='_blank'}.
Through these graphical user interfaces, users can execute, link, and configure different services. More information on the usage of the online applications is presented in the table below.
A new window opens when a user chooses to run a service in the webeditor using the `Execute in Web Editor` option. Here, users can execute services directly in a [web editor](https://openeo.dataspace.copernicus.eu/){target='_blank'} by simply inputting the required parameters and running them.
Collaborator

webeditor or web editor?


:::

## Online user interface

openEO provide an online user interface where users can execute services directly in a [web-editor](https://openeo.dataspace.copernicus.eu/){target='_blank'}.
Through these graphical user interfaces, users can execute, link, and configure different services. More information on the usage of the online applications is presented in the table below.
A new window opens when a user chooses to run a service in the webeditor using the `Execute in Web Editor` option. Here, users can execute services directly in a [web editor](https://openeo.dataspace.copernicus.eu/){target='_blank'} by simply inputting the required parameters and running them.
Collaborator

is inputting a proper verb?

Through these graphical user interfaces, users can execute, link, and configure different services. More information on the usage of the online applications is presented in the table below.
A new window opens when a user chooses to run a service in the webeditor using the `Execute in Web Editor` option. Here, users can execute services directly in a [web editor](https://openeo.dataspace.copernicus.eu/){target='_blank'} by simply inputting the required parameters and running them.

The full web editor documentation can be found [in this section](../WebEditor.qmd). Additionally, below are some additional resources to help users get started with the web editor:
Collaborator

repetition of additional

@@ -3,72 +3,46 @@ title: "JupyterLab"
image: _images/jupyter_login.png
---

JupyterLab is an advanced interactive development environment (IDE) that offers a flexible and feature-rich interface for working with notebooks, code, and data. It allows users to organize their workspaces using a flexible layout system with panels, views, and tabs for various activities. Furthermore, it supports various document formats, including notebooks, text files, code files, and markdown files. With its modular and extensible architecture, JupyterLab enables customization through extensions, additional functionalities, and integration with external tools.
It enhances the user experience with features like a file browser, command palette, debugger, and console, making it a versatile tool for interactive data exploration, analysis, and scientific computing.
JupyterLab is a user-friendly tool for working with data and code. Users can customize it by adding extra features and connecting it with other software. This makes organizing work and interacting with data more manageable. For more detailed information, please visit the [JupyterLab documentation](https://jupyterlab.readthedocs.io/en/stable/){target="_blank"}.
Collaborator

software packages instead of software?

@Pratichhya Pratichhya merged commit bc0b72d into publish Aug 6, 2024
1 check passed