
feat: EigenDA M0 data availability client #3041

Closed

Conversation

lferrigno
Contributor

@lferrigno lferrigno commented Oct 8, 2024

What ❔

M0: Read and Write integration

Scope: Spin up a local EigenDA dev environment and a local zksync-era dev environment. Instead of sending blobs via EIP-4844, zksync-era sends blobs to EigenDA. EigenDA provides a high-level client called eigenda-proxy, which should be used. On L1, mock the verification logic so that blocks continue building. Increase the blob size from the EIP-4844 limit to 2 MiB.

Integration docs are available in eigenda-integration.md.

This PR should be merged alongside the corresponding contract changes.

This PR also adds a concurrency optimization to the DA dispatcher.
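The concurrency optimization mentioned above can be illustrated with a minimal sketch. All names here (`dispatch_concurrently`, `max_in_flight`, the fake blob-id derivation) are hypothetical stand-ins, not the PR's actual API: the idea is simply to disperse several blobs in parallel, capped at a bounded number of in-flight requests, instead of one at a time.

```rust
use std::sync::mpsc;
use std::thread;

// Hypothetical sketch of a concurrent DA dispatcher: send blobs in batches
// of at most `max_in_flight` parallel workers and collect the returned ids.
fn dispatch_concurrently(blobs: Vec<Vec<u8>>, max_in_flight: usize) -> Vec<usize> {
    let (tx, rx) = mpsc::channel();
    let mut ids = Vec::with_capacity(blobs.len());
    for batch in blobs.chunks(max_in_flight) {
        let mut handles = Vec::new();
        for blob in batch.to_vec() {
            let tx = tx.clone();
            handles.push(thread::spawn(move || {
                // Stand-in for the real dispersal call to the DA layer;
                // here we pretend the blob id derives from the blob length.
                let blob_id = blob.len();
                tx.send(blob_id).unwrap();
            }));
        }
        // Wait for the whole batch before starting the next one.
        for h in handles {
            h.join().unwrap();
        }
    }
    drop(tx); // close the channel so the receiver iterator terminates
    ids.extend(rx.iter());
    ids
}

fn main() {
    let blobs = vec![vec![0u8; 10], vec![0u8; 20], vec![0u8; 30]];
    let mut ids = dispatch_concurrently(blobs, 2);
    ids.sort();
    assert_eq!(ids, vec![10, 20, 30]);
    println!("dispatched 3 blobs");
}
```

In the real dispatcher the per-blob work would be an async network call rather than a thread, but the bounded-parallelism shape is the same.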

Why ❔

EigenDA M0 implementation, based on the Trusted Verification strategy (M0).

Checklist

  • PR title corresponds to the body of PR (we generate changelog entries from PRs).
  • Tests for the changes have been added / updated.
  • Documentation comments have been added / updated.
  • Code has been formatted via zk_supervisor fmt and zk_supervisor lint.

juan518munoz and others added 30 commits September 17, 2024 20:17
* Add blob id to batch

* Update submodule

* Add script for blob commitments in L1

* Clean up script

* Address PR comments

* Add blobs to file

* Add missing change

* Remove yq requirement

* Address PR comments
* add more metrics

* eigenda integration docs

* changes to integration doc

* remove blob retriever
* Fix get blobs l1

* Remove logs
* initial commit

* use Notify for a more deterministic approach

* replace atomic for mutex

* move const to config
* initial commit

* add more steps

* add backup and restore ecosystem scripts

* remove unnecessary step

* improve docs

* fix docs

* fix the fix docs

* add extra step

* fix restore path

* simplify restoration note

* more docs

* fix paths in backup restoration

* fix whitespace

* replacement fixes

* moved holesky rpc url to env var
* Remove unneeded formatting

* Add script explanations

* Remove observability changes
juan518munoz and others added 17 commits October 25, 2024 18:46
* Add initial implementation disperser client

* Add holesky tests

* Add error handling

* Remove proxy from name

* Add new configs

* Update eigenda-integration.md

* Address pr comments

* initial commit

* add conditional compilation attribute to test

* remove unused imports

* improve err

* remove unwraps

* implement `IntoResponse` for `RequestProcessorError`

* use a single `MemStoreConfig`

* add new step

* change suggested api_node_url

* memstore integration

* remove comments & fix memstore test

* fix memstore config

* initial commit

* remove unused imports

* modularize code

* non auth: wait for dispersal

* auth: wait for dispersal

* add config for auth dispersal

* implement get blob data for remote disperser

* remove unwraps, improve tests

* remove eigenda_proxy layer

* remove field from cfg & update readme

* add padding before dispersal request

* remove unwrap

* remove proxy mention from integration doc

* Fix pr comments

---------

Co-authored-by: Gianbelinche <39842759+gianbelinche@users.noreply.github.com>
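The "add padding before dispersal request" commit above touches blob encoding. As a hedged illustration only (the function name and exact scheme here are assumptions, not lifted from this PR): EigenDA blobs are interpreted as BN254 field elements, so a common encoding inserts a `0x00` byte before every 31 bytes of data, guaranteeing each 32-byte chunk stays below the field modulus.

```rust
// Hypothetical padding sketch: prefix every 31-byte chunk with 0x00 so the
// big-endian value of each resulting 32-byte word is a valid BN254 element.
fn convert_by_padding_empty_byte(data: &[u8]) -> Vec<u8> {
    let mut out = Vec::with_capacity(data.len() + data.len() / 31 + 1);
    for chunk in data.chunks(31) {
        out.push(0x00); // zero high byte keeps the word below the modulus
        out.extend_from_slice(chunk);
    }
    out
}

fn main() {
    let blob = vec![0xFFu8; 62]; // two full 31-byte chunks of worst-case bytes
    let padded = convert_by_padding_empty_byte(&blob);
    assert_eq!(padded.len(), 64); // each chunk grew by one byte
    assert_eq!(padded[0], 0x00);
    assert_eq!(padded[32], 0x00);
    println!("padded {} -> {} bytes", blob.len(), padded.len());
}
```

The overhead is one byte per 31, roughly 3.2%, which is why a padded payload must stay within the dispersal size limit after encoding, not before.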
…evert-changes

fix(da-eigen-implementation-m0): revert incorrect changes
@EmilLuta
Contributor

EmilLuta commented Jan 9, 2025

Hello folks. What's the status of this PR? Any path to integration?

cc: @dimazhornyk

@dimazhornyk
Contributor

dimazhornyk commented Jan 9, 2025

This PR must have been abandoned in favor of #3243.
@juanbono @juan518munoz please close if this one isn't planned to be merged

@lferrigno lferrigno closed this Jan 10, 2025
6 participants