NIH AIM:1 YR:2 TASK:1.1.1 | 2.1.1 | Test and apply metadata registry for large datasets and integration with research computing for a few NIH-funded projects, piloting the cost recovery model #36
This issue represents a deliverable funded by the NIH.

Aim 1: Support the sharing of very large datasets (>TBs) by integrating the metadata in the repository with the data in research computing storage.

An increasing number of research studies deal with very large datasets (>TB to PBs). When a study is completed or ready to be distributed, it is not always feasible or desirable to deposit the data in the repository. Instead, in this project we propose to publish the metadata to the repository for discoverability of the study and to access the data remotely from the research computing cluster or cloud storage. In this scenario, the data does not need to be downloaded to the user's computer but can be viewed, explored, and analyzed directly in the research computing environment.

The Harvard Dataverse repository will leverage the Northeast Storage Exchange (NESE) and the New England Research Cloud (NERC) to provide storage and compute for these very large datasets, which can be found and accessed through the repository, with the metadata and data kept connected via a persistent link. These two services, NESE and NERC, are large-scale multi-institutional infrastructure components of the Massachusetts Green High Performance Computing Center (MGHPCC), a five-member public-private partnership between Boston University, Harvard University, Massachusetts Institute of Technology, Northeastern University, and the University of Massachusetts. MGHPCC is a $90 million facility with the capacity to grow to 768 racks, 15 MW of power, and 1 terabit of network capacity in its current 95,000 sq. ft. data center.

One of the key integration points to support large data transfers is incorporating Globus endpoints. Globus is a distributed data transfer technology developed at the University of Chicago that is becoming ubiquitous in research computing services. It will allow the realistic transfer of TBs of data in less than an hour. Globus will also be the front end for NESE Tape, a 100+ PB tape library within MGHPCC. The integration of the repository with research computing is one of the components of a Data Commons that will facilitate collaboration, dissemination, preservation, and validation of data-centric research.

Related Deliverables: This work also represents a deliverable funded internally. Harvard Data Commons MVP: Objective 1
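To make the Globus piece more concrete, here is a minimal, illustrative sketch (not the actual Dataverse integration, which lives server-side) of submitting a large transfer with the globus-sdk Python library. The client ID, endpoint UUIDs, and paths are placeholders; in practice the source endpoint would be the NESE storage configured for the Dataverse installation.

```python
import globus_sdk

# Placeholder native-app client ID; a real one is registered at developers.globus.org.
CLIENT_ID = "your-native-app-client-id"

# Interactive login to obtain a transfer token (simplified native-app flow).
auth_client = globus_sdk.NativeAppAuthClient(CLIENT_ID)
auth_client.oauth2_start_flow(requested_scopes=globus_sdk.TransferClient.scopes.all)
print("Log in at:", auth_client.oauth2_get_authorize_url())
tokens = auth_client.oauth2_exchange_code_for_tokens(input("Paste authorization code: "))
transfer_token = tokens.by_resource_server["transfer.api.globus.org"]["access_token"]

tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(transfer_token)
)

# Placeholder endpoint UUIDs; the real NESE endpoint ID would come from the
# repository's Globus configuration.
SOURCE_ENDPOINT = "nese-storage-endpoint-uuid"
DEST_ENDPOINT = "researcher-or-cluster-endpoint-uuid"

# Describe the transfer: a dataset directory copied recursively between endpoints.
tdata = globus_sdk.TransferData(
    tc, SOURCE_ENDPOINT, DEST_ENDPOINT, label="Dataverse large-dataset transfer"
)
tdata.add_item("/path/to/dataset/on/nese/", "/path/on/destination/", recursive=True)

# Submit; Globus manages the transfer asynchronously and retries on failure.
task = tc.submit_transfer(tdata)
print("Submitted Globus transfer, task id:", task["task_id"])
```

Because the transfer is brokered by Globus rather than streamed through the repository, the repository only needs to hold the metadata and a persistent link to the data's location.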
This picture shows how the Harvard Data Commons work maps to Dataverse work. This is a closer look at the Harvard Data Commons work: GDCC DataCommons Objective 1 Task Tracking
March update: The closing update for 1.1.1 in February 2023 largely identifies where we're going to start the work on this for year 2. I put that into the description. (2.1.1) Planning continues around supporting the Globus endpoint for Dataverse at the Northeast Storage Exchange (NESE) and moving beyond the MVP. The MVP enables connection from Harvard Dataverse to the Globus endpoint and storage but does not yet support real-time browsing of large files, due to specific technological characteristics of tape support. The technical plan for this last step is anchored in issue 9123. This activity will be performed in the first half of year 2, as the necessary development resources have been identified.
2024/01/03
Planning continues around supporting the Globus endpoint for Dataverse at the Northeast Storage Exchange (NESE) and moving beyond the MVP. The MVP enables connection from Harvard Dataverse to the Globus endpoint and storage but does not yet support real-time browsing of large files, due to specific technological characteristics of tape support. The technical plan for this last step is anchored in issue 9123. This activity will be performed in the first half of year 2, as the necessary development resources have been identified.