-
Sounds good.
-
Summarizing the discussion on the design of the S1 CSLC workflow. The interface (#1) of the S1 CSLC workflow for a control manager like PCM will have:
One SAS will handle the processing of both reference and secondary bursts. If the flag isReference is set, the date being processed is treated as the reference date of the stack.
-
@LucaCinquini @riverma @hookhua @hhlee445 and @collinss-jpl, the discussion above is very relevant to the data system, in particular to PCM and to the point that algorithms with shorter run times reduce risk during operations when using AWS on the spot market.
In the design that our team has considered and discussed here, the SAS will accept a list of burst IDs, so the data system can choose to feed it only one burst at a time. Therefore I think the coreg-SLC-Sentinel-1 SAS design is already compatible with the idea of processing individual bursts. Regardless of whether the data system gets access to individual bursts during operations or only to zip files, the design above should still allow running one burst at a time.
However, if we want to scale jobs by individual bursts, the value of an API that allows burst access rather than frame access becomes evident compared with the current ASF API, which only allows frame-level query and access. Imagine we only have access to frames: if we pull one frame, we can generate ~30 CSLC-S1 bursts on one AWS instance or on 30 instances. With the current ASF API, the former means moving one frame (~30 bursts) to one instance once, while the latter means moving the same frame (i.e., 30 bursts) 30 times to 30 instances.
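A back-of-the-envelope sketch of the data-movement argument above. The frame size is an invented placeholder; only the ~30 bursts per frame figure comes from the discussion.

```python
# Compare total data moved when scaling one frame (~30 bursts) across AWS
# instances under the two access models discussed above.
BURSTS_PER_FRAME = 30     # approximate bursts in one S1 frame (from the discussion)
FRAME_SIZE_GB = 4.0       # assumed zip size of one SLC frame (illustrative)
BURST_SIZE_GB = FRAME_SIZE_GB / BURSTS_PER_FRAME

def frame_api_transfer_gb(n_instances: int) -> float:
    """Frame-level API: every instance must pull the whole frame."""
    return n_instances * FRAME_SIZE_GB

def burst_api_transfer_gb(n_instances: int) -> float:
    """Burst-level API: the bursts are split across instances, so the total
    moved is always one frame's worth, regardless of instance count."""
    return BURSTS_PER_FRAME * BURST_SIZE_GB

# One instance: both models move about one frame.
# Thirty instances: the frame-level API moves the same frame 30 times.
print(frame_api_transfer_gb(1), burst_api_transfer_gb(1))
print(frame_api_transfer_gb(30), burst_api_transfer_gb(30))
```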
-
I just looked at PR #1 and I thought it's easier to start a discussion before getting into more details. Also an excuse to explore the Discussions feature for this repo :) .
What is the exact need/problem that we want to solve:
Let's assume we have N S1 frames of SLCs in the archive which cover an area of interest. We need to coregister each burst of the stack with a geometry-based algorithm plus ancillary corrections.
Let's assume we have a static local stack with full control (ignoring the real-world data system component for a second). In this case the efficient approach would be to:
a. use the geometry of the reference burst and run geo2rdr to estimate offsets
b. resample the secondary burst and archive it
This is essentially what we have done in the isce2 stack processor (with the geometry coregistration approach).
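A toy sketch of steps (a) and (b) above. `geo2rdr` and `resample` here are stand-ins for the actual ISCE modules, illustrated on a 1-D array with a constant offset; a real implementation maps reference pixels to secondary radar coordinates via orbits and a DEM.

```python
import numpy as np

def geo2rdr(reference_geometry, secondary_orbit):
    """Stand-in for step (a): estimate the offset of the secondary burst
    relative to the reference geometry. Returns a fixed shift here."""
    return 2  # pixels (toy value)

def resample(secondary_burst, offset):
    """Stand-in for step (b): resample the secondary burst back onto the
    reference grid by undoing the estimated offset."""
    return np.roll(secondary_burst, -offset)

reference = np.arange(10)
secondary = np.roll(reference, 2)              # secondary shifted by 2 pixels
offset = geo2rdr("ref_geometry", "sec_orbit")  # step (a)
coregistered = resample(secondary, offset)     # step (b)
assert np.array_equal(coregistered, reference)
```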
How might it be operationalized within our data system with the current DAAC API functionality, which provides zip files and not bursts?
Let's assume a burst map of Sentinel-1 exists. A burst map is simply a database of burst IDs with their geometries, such that one can query the IDs, extract the geometry, and query the ASF archive to get the required SLC zip files. A prototype to create such a database exists here
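The burst-map lookup could look roughly like this. The burst IDs and bounding boxes below are invented for illustration, and the dict stands in for a real database; the point is only the ID → geometry → archive-query flow.

```python
# Toy burst map: burst ID -> geometry (here a lon/lat bounding box).
burst_map = {
    "t064_iw1_b001": {"bbox": (-118.5, 34.0, -118.0, 34.3)},
    "t064_iw1_b002": {"bbox": (-118.5, 34.3, -118.0, 34.6)},
}

def geometry_for(burst_id):
    """Look up the (lon_min, lat_min, lon_max, lat_max) bbox for a burst ID."""
    return burst_map[burst_id]["bbox"]

def asf_query_params(burst_ids):
    """Build one bounding box covering the requested bursts, suitable for
    a frame-level spatial query against the archive."""
    boxes = [geometry_for(b) for b in burst_ids]
    return {"bbox": (min(b[0] for b in boxes), min(b[1] for b in boxes),
                     max(b[2] for b in boxes), max(b[3] for b in boxes))}

print(asf_query_params(["t064_iw1_b001", "t064_iw1_b002"]))
```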
The flag isReference=True is set to indicate that this is the reference date of the stack. In this scenario, the SAS will be given:
a. one or more zip files for one date
b. the reference coregistered burst (empty for the reference burst)
c. the orbit file that covers the date of the zip file(s)
d. the DEM file
e. the list of burst IDs to be processed
f. the list of output coregistered burst product names
g. the isReference flag
h. other possible input parameters, as already started by @vbrancat here #1
Note: in this design the SAS always runs on one date.
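A hypothetical run-config mirroring the inputs listed above. All field names and file names are invented placeholders; the actual schema is what is being defined in PR #1.

```python
# One SAS invocation = one date. Keys follow the input list above.
run_config = {
    "input_zip_files": ["slc_frame_date1.zip"],          # (a) zip file(s) for one date
    "reference_bursts": [],                              # (b) empty: this IS the reference date
    "orbit_file": "orbit_date1.eof",                     # (c) orbit covering the date
    "dem_file": "dem.tif",                               # (d)
    "burst_ids": ["t064_iw1_b001", "t064_iw1_b002"],     # (e) bursts to process
    "output_products": ["cslc_t064_iw1_b001.h5",
                        "cslc_t064_iw1_b002.h5"],        # (f) one output per burst
    "isReference": True,                                 # (g)
}
```

One output product name per burst ID keeps the data system free to split or batch bursts without changing the SAS interface.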
How to test the end-to-end workflow on a stack
If we agree with the design explained above, we should develop the SAS accordingly: no separate reference and secondary runs; one date is always provided in the config file.
Since we don't have a PCM system during development, we should simulate one with simple scripts. Similar to what is done in the isce2 stack processor, we can write high-level scripts to manage processing for testing purposes. These scripts should create the SAS run configuration files with the final format of the schema as suggested here #1 .
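A minimal sketch of such a "PCM simulator" driver, under the assumption that the first date of the stack is the reference: it writes one run-config per date. The function name and JSON layout are placeholders, not the final schema.

```python
import json
from pathlib import Path

def write_run_configs(dates, workdir="configs"):
    """Write one SAS run-config per acquisition date; the earliest date
    is marked as the reference of the stack."""
    Path(workdir).mkdir(exist_ok=True)
    paths = []
    for i, date in enumerate(sorted(dates)):
        cfg = {"date": date, "isReference": i == 0}
        path = Path(workdir) / f"runconfig_{date}.json"
        path.write_text(json.dumps(cfg, indent=2))
        paths.append(path)
    return paths

paths = write_run_configs(["20200113", "20200101", "20200125"])
```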
@vbrancat @yunjunz @LiangJYu this is how I have the whole flow in mind. We don't necessarily need to enforce this design, but I hope this will start a discussion and lead to a final design.