feat: update to v3.1.1 celestia-app, v0.20.4 celestia-node, celestia-types 0.9.0, celestia-rpc 0.8.0 #2
Conversation
Walkthrough
This pull request introduces updates across various components of the Celestia project, focusing on upgrading Docker configurations, shell scripts, and Rust dependencies. Key modifications include updating the Alpine base images and Celestia node/application binaries to newer versions. Additionally, the Docker Compose file has been modified for improved service configuration, and shell scripts have been enhanced for dynamic peer management. Rust dependencies have also been updated to ensure compatibility with the latest features, particularly regarding blob versioning.
Sequence Diagram

sequenceDiagram
participant Docker as Docker Compose
participant Bridge as Bridge Node
participant Light as Light Node
participant Validator as Validator Node
Docker->>Bridge: Start with SKIP_AUTH=true
Docker->>Light: Configure with BRIDGE_COUNT=2
Bridge-->>Light: Share Trusted Peers
Light->>Validator: Request Genesis Hash
Validator-->>Light: Provide Genesis Hash
Light->>Bridge: Establish Connection
📜 Recent review details
Configuration used: CodeRabbit UI
📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)
Used deltadevsde/prism#176 as a reference 🫡
I now see the LN failing to start, not being able to find trusted peers:
got LN working, now having issue when running tictactoe with bridge node funding(?)
Actionable comments posted: 6
🔭 Outside diff range comments (2)
ci/run-lightnode.sh (2)
Line range hint 69-73: Security: Remove hardcoded password
The function uses a hardcoded password "password", which is a security risk, even in a test environment.
Consider using an environment variable:
- echo "password" | cel-key import "$NODE_NAME" "$NODE_KEY_FILE" \ + echo "${KEYRING_PASSWORD:-password}" | cel-key import "$NODE_NAME" "$NODE_KEY_FILE" \
Line range hint 112-113: Replace arbitrary sleep with proper readiness check
Using sleep 10 is a brittle way to wait for validator setup. Consider implementing a function to check validator readiness:
wait_for_validator() {
    local max_attempts=30
    local attempt=1
    while [ $attempt -le $max_attempts ]; do
        if curl -s validator:26657/status >/dev/null 2>&1; then
            echo "Validator is ready"
            return 0
        fi
        echo "Waiting for validator... attempt $attempt/$max_attempts"
        sleep 1
        attempt=$((attempt + 1))
    done
    echo "Validator failed to become ready"
    return 1
}

Then replace the sleep:
- # give validator some time to set up
- sleep 10
+ # ensure validator is ready
+ wait_for_validator
🧹 Nitpick comments (6)
ci/run-lightnode.sh (2)
32-64: Consider making the timeout configurable
The wait_for_provision function has good timeout handling, but the 30-second timeout is hardcoded. In some environments or under heavy load, this might not be sufficient.
Consider making it configurable:
-timeout=30
+BRIDGE_TIMEOUT="${BRIDGE_TIMEOUT:-30}"  # Allow override via environment
+timeout=$BRIDGE_TIMEOUT
86-91: Improve variable declarations per shellcheck recommendations
The current variable declarations could mask return values. Let's separate declarations and assignments:
- local trusted_peers="$(cat "$TRUSTED_PEERS_FILE")"
- local formatted_peers=$(echo "$trusted_peers" | sed 's/\([^,]*\)/"\1"/g')
+ local trusted_peers
+ local formatted_peers
+ trusted_peers="$(cat "$TRUSTED_PEERS_FILE")"
+ formatted_peers=$(echo "$trusted_peers" | sed 's/\([^,]*\)/"\1"/g')

🧰 Tools
🪛 Shellcheck (0.10.0)
[warning] 87-87: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 88-88: Declare and assign separately to avoid masking return values.
(SC2155)
examples/tictactoe/src/node.rs (2)
6-6: Consider extracting common blob handling logic
The blob creation and versioning logic is identical between src/templates/node.rs and examples/tictactoe/src/node.rs. Consider extracting this common functionality into a shared utility module to promote DRY principles and ensure consistent version handling across the codebase.
Example shared utility module:
// src/utils/blob.rs
use celestia_types::{Blob, AppVersion, nmt::Namespace};

pub fn create_versioned_blob(namespace: Namespace, data: Vec<u8>) -> Result<Blob> {
    Blob::new(namespace, data, AppVersion::V3)
}

Also applies to: 97-107
Line range hint 23-24: Consider addressing TODOs during this upgrade
There are two TODO items that might be relevant to the current upgrade:
- Backwards sync with trusted state (related to block pruning)
- Migration to Lumina for transaction posting
Given the reported issues with light node startup and batch posting, addressing the Lumina migration TODO might help resolve these problems.
Would you like me to help create implementation proposals for either of these TODOs?
Also applies to: 29-31
src/templates/Cargo.toml (2)
Line range hint 9-9: Standardize reqwest version across templates
There's a minor version inconsistency in reqwest:
- tictactoe: 0.12.7
- shard-template: 0.12.9
Consider standardizing the versions to ensure consistent behavior across templates.
- reqwest = { version = "0.12.9", features = ["json"] }
+ reqwest = { version = "0.12.7", features = ["json"] }

Also applies to: 13-14
13-14: Consider using workspace dependencies
Since both templates share identical celestia dependencies, consider using workspace-level dependency management to ensure version consistency and simplify future updates.
This could be implemented by (see the sketch after this list):
- Creating a workspace Cargo.toml at the root
- Moving shared dependencies to the workspace
- Referencing workspace dependencies in individual projects
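A minimal sketch of that layout, assuming a root Cargo.toml with both template crates as members; the member paths and the celestia/reqwest versions are taken from this PR, while the rest is illustrative and may need adjusting if the template crates are not meant to build as part of the repository workspace:

# Cargo.toml at the repository root (hypothetical sketch)
[workspace]
members = ["examples/tictactoe", "src/templates"]
resolver = "2"

# Shared versions declared once for all members
[workspace.dependencies]
celestia-rpc = "0.8.0"
celestia-types = "0.9.0"
reqwest = { version = "0.12.7", features = ["json"] }

Each member's Cargo.toml would then inherit these entries via celestia-rpc = { workspace = true }, celestia-types = { workspace = true }, and so on, so a future version bump happens in one place.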
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (1)
examples/tictactoe/Cargo.lock is excluded by !**/*.lock
📒 Files selected for processing (11)
- ci/Dockerfile.bridge (1 hunks)
- ci/Dockerfile.lightnode (1 hunks)
- ci/Dockerfile.validator (1 hunks)
- ci/docker-compose.yml (5 hunks)
- ci/run-bridge.sh (3 hunks)
- ci/run-lightnode.sh (3 hunks)
- ci/run-validator.sh (3 hunks)
- examples/tictactoe/Cargo.toml (1 hunks)
- examples/tictactoe/src/node.rs (3 hunks)
- src/templates/Cargo.toml (1 hunks)
- src/templates/node.rs (3 hunks)
✅ Files skipped from review due to trivial changes (3)
- ci/Dockerfile.lightnode
- ci/Dockerfile.bridge
- ci/Dockerfile.validator
🧰 Additional context used
🪛 Shellcheck (0.10.0)
ci/run-lightnode.sh
[warning] 87-87: Declare and assign separately to avoid masking return values.
(SC2155)
[warning] 88-88: Declare and assign separately to avoid masking return values.
(SC2155)
🔇 Additional comments (9)
ci/run-lightnode.sh (1)
14-14: Verify the impact of configuration directory changes
The configuration directory has been simplified and new shared paths have been introduced. Please ensure:
- The change from .celestia-light-$P2P_NETWORK to $CELESTIA_HOME doesn't break existing setups
- The /shared directory is properly mounted in the container
Also applies to: 20-23
✅ Verification successful
Let me gather more information about the configuration directory changes and volume mounts.
Let me check the actual usage of these directories in the scripts to understand the complete picture.
Configuration directory changes are properly handled
The changes to the configuration directory are safe because:
- $CELESTIA_HOME is correctly set to /root in all Dockerfiles
- The /shared directory is properly mounted in all services via Docker volumes
- The script uses the directories consistently:
  - $CONFIG_DIR for node configuration
  - /shared for inter-node communication
  - /credentials for node keys and JWT tokens

All paths are properly mounted and accessible within the containers, and the change from .celestia-light-$P2P_NETWORK to $CELESTIA_HOME is handled correctly in the containerized environment.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Check if these changes align with docker-compose configuration
rg -A 5 'CELESTIA_HOME|/shared' | grep -E 'volumes:|CELESTIA_HOME'
Length of output: 395
Script:
#!/bin/bash
# Check docker-compose volume configurations and environment variables
cat ci/docker-compose.yml
Length of output: 2579
Script:
#!/bin/bash
# Check how CONFIG_DIR and shared directories are used in the scripts
cat ci/run-lightnode.sh
Length of output: 3623
src/templates/node.rs (3)
6-6: LGTM: Import change aligns with celestia-types upgrade
The addition of the AppVersion import is consistent with the upgrade to celestia-types 0.9.0.
139-143: LGTM: Improved code formatting
The reformatting of BlobClient::blob_get_all improves readability without affecting functionality.
97-107: Verify compatibility with existing batches
The addition of AppVersion::V3 is required for the celestia-app v3.0.2 upgrade. However, we should verify that this change maintains compatibility with existing batches in the system.
✅ Verification successful
Based on the gathered information, I can now provide the final response:
AppVersion::V3 change is safe to proceed
The version change is safe because:
- The Batch deserialization logic in both template and example code is version-agnostic, using bincode serialization without version checks
- The version is only used during blob creation for celestia-app compatibility
- Existing batches can still be deserialized as the TryFrom implementation for Batch focuses on the blob data content, not the version
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check if there are any existing batches in the system that might be affected by the version change
# Search for Batch implementations to verify version handling
ast-grep --pattern 'impl $_ for Batch {
$$$
}'
# Search for Batch deserialization code
rg -A 5 'try_from.*Blob.*Batch'
Length of output: 1819
Script:
#!/bin/bash
# Let's check for AppVersion usage in Blob and related deserialization code
rg -A 5 "AppVersion"
# Check Blob implementation and version handling
ast-grep --pattern 'impl $_ for Blob {
$$$
}'
# Search for any version-related validation or checks
rg -A 5 "version.*check|validate.*version"
Length of output: 4040
Script:
#!/bin/bash
# Let's check the Blob struct definition and its version handling
ast-grep --pattern 'struct Blob {
$$$
}'
# Check if there are any version-specific deserialization or conversion logic
rg -A 10 'impl.*TryFrom.*Blob'
# Look for any version-related constants or enums
rg -A 5 'enum.*Version|const.*VERSION'
Length of output: 1594
examples/tictactoe/Cargo.toml (1)
Line range hint 9-9: Verify dependency compatibility matrix
Please ensure compatibility between:
- reqwest 0.12.7 and the updated celestia-rpc 0.8.0
- The git dependencies (prism-common, keystore-rs) and the new celestia component versions
Let's check for potential version conflicts:
Also applies to: 13-14
✅ Verification successful
Dependencies are compatible and correctly aligned
The verification shows that:
- celestia-rpc 0.8.0 and celestia-types 0.9.0 are correctly paired in both the Cargo.toml files and Cargo.lock
- celestia-rpc 0.8.0 explicitly depends on celestia-types 0.9.0 in its own dependencies
- The git dependencies (prism-common, keystore-rs) are using compatible versions of celestia components
- No version conflicts are detected in the dependency tree
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for version requirements in dependencies
# Look for version specifications in cargo files
rg -g "Cargo.toml" "celestia-(rpc|types)"
# Check for potential version conflicts in cargo lock
fd -g "Cargo.lock" --exec cat {} | grep -A 5 -B 5 "celestia-"
Length of output: 5145
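To make the pairing verified above concrete, here is a hedged sketch of how the relevant [dependencies] entries in examples/tictactoe/Cargo.toml would look after this update. Only the celestia and reqwest lines are grounded in this PR; the git dependencies (prism-common, keystore-rs) are omitted because their exact specs are not shown here.

[dependencies]
# Versions introduced by this PR
celestia-rpc = "0.8.0"
celestia-types = "0.9.0"
reqwest = { version = "0.12.7", features = ["json"] }
# prism-common and keystore-rs git dependencies unchanged (not reproduced here)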
ci/docker-compose.yml (3)
70-72: Verify bridge count configuration
The BRIDGE_COUNT=2 in the light node configuration must match the actual number of bridge nodes defined in the compose file.
✅ Verification successful
Bridge count configuration is correctly set
The verification confirms that BRIDGE_COUNT=2 exactly matches the number of bridge services (2) defined in the docker-compose file.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Count bridge services and compare with BRIDGE_COUNT
actual_bridges=$(rg "^ bridge-" ci/docker-compose.yml | wc -l)
configured_count=$(rg "BRIDGE_COUNT=" ci/docker-compose.yml | head -n1 | cut -d'=' -f2)
echo "Actual bridges: $actual_bridges"
echo "Configured count: $configured_count"
Length of output: 252
14-14: Verify volume mount consistency across services
The new shared volume is mounted consistently across all services, which is good. However, consider adding volume mount options to prevent potential permission issues.
Consider adding mount options for better security:
   volumes:
     - credentials:/credentials
-    - shared:/shared
+    - shared:/shared:rw,z
Also applies to: 35-35, 56-56, 77-77
26-30: Enhance container naming configuration
The CONTAINER_NAME environment variable is correctly set to match the service name, which is crucial for peer discovery. However, consider adding validation to ensure these values stay in sync.
Consider adding a validation script in the entrypoint to verify that CONTAINER_NAME matches the actual container name:
   environment:
     - CONTAINER_NAME=bridge-0
+  entrypoint: >
+    /bin/sh -c '
+      [ "$$(hostname)" = "$$CONTAINER_NAME" ] ||
+        (echo "Container name mismatch" && exit 1) &&
+      /your/original/entrypoint'
Also applies to: 49-51
ci/run-validator.sh (1)
194-195: Verify transaction indexing configuration
The change to enable transaction indexing with indexer = "kv" is good, but ensure sufficient disk space is available, as this can significantly increase storage requirements.
Consider adding monitoring for disk usage when transaction indexing is enabled. Also, document the storage requirements in the README.
Excellent, thanks Josh!
Todo:
- just celestia-up: the validator fails to start

Summary by CodeRabbit
New Features
Bug Fixes
Documentation
Chores
- Updated celestia-rpc and celestia-types across multiple packages.