Token SATI #6

Open · wants to merge 28 commits into base: 21

Commits (28)
d3a7d99  vote: deprecate unused legacy vote tx plumbing (#274) · AshwinSekar · Mar 16, 2024
403225f  Remove public visibility of program cache from bank (#279) · pgarg66 · Mar 17, 2024
928ede1  add stats for write cache flushing (#233) · jeffwashington · Mar 18, 2024
6846756  [solana-install-init] Optimize error message for Windows user permiss… · WGB5445 · Mar 18, 2024
f6c22e9  Make the quic server connection table use an async lock, reducing thr… · ryleung-solana · Mar 18, 2024
62c458e  [TieredStorage] TieredStorageFile -> TieredReadonlyFile and TieredWri… · yhchiang-sol · Mar 18, 2024
9e2768a  Net script fix for expected shred version (#280) · lijunwangs · Mar 18, 2024
fee4d82  [TieredStorage] Use BufWriter in TieredWritableFile (#261) · yhchiang-sol · Mar 18, 2024
21eff36  cli: skip no-op program buffer writes (#277) · jstarry · Mar 18, 2024
f35bda5  add in method for building a `TpuClient` for `LocalCluster` tests (#258) · gregcusack · Mar 19, 2024
01e4823  fix polarity for concurrent replay (#297) · bw-solana · Mar 19, 2024
e39bd8d  install: Fix check for windows build (#295) · joncinque · Mar 19, 2024
170df83  SVM integration test (#307) · LucasSte · Mar 19, 2024
e2c1cbf  SVM: minor refactoring to improve code readability (#317) · dmakarov · Mar 19, 2024
7ad99f3  vote: reuse ff to gate tvc constant update from 8 -> 16 (#322) · AshwinSekar · Mar 19, 2024
dcdce7c  accounts-db: unpack_archive: unpack accounts straight into their fina… · alessandrod · Mar 19, 2024
0119437  qos service should also accumulate executed but errored units (#328) · tao-stones · Mar 20, 2024
5871a0e  CI: Add windows clippy job and fix clippy errors (#330) · joncinque · Mar 20, 2024
78f033d  Move code to check_program_modification_slot out of SVM (#329) · pgarg66 · Mar 20, 2024
3043334  Revert deprecate executable feature (#309) · HaoranYi · Mar 20, 2024
81c8ed7  rpc-sts: add config options for stake-weighted qos (#197) · t-nelson · Mar 20, 2024
e97d359  [TieredStorage] Refactor TieredStorage::new_readonly() code path (#195) · yhchiang-sol · Mar 20, 2024
2273098  [TieredStorage] Store account address range (#172) · yhchiang-sol · Mar 20, 2024
1d89ea0  Rename LoadedPrograms to ProgramCache for readability (#339) · dmakarov · Mar 20, 2024
973d05c  Allow configuration of replay thread pools from CLI (#236) · steviez · Mar 20, 2024
27eff84  Revert "Allow configuration of replay thread pools from CLI (#236)" · willhickey · Mar 22, 2024
de3c798  chore: readme · nickfrosty · Jan 22, 2025
7700cb3  chore: readme · nickfrosty · Jan 22, 2025
4 changes: 4 additions & 0 deletions .github/scripts/cargo-clippy-before-script.sh
@@ -6,6 +6,10 @@ os_name="$1"

 case "$os_name" in
 "Windows")
+  vcpkg install openssl:x64-windows-static-md
+  vcpkg integrate install
+  choco install protoc
+  export PROTOC='C:\ProgramData\chocolatey\lib\protoc\tools\bin\protoc.exe'
   ;;
 "macOS")
   brew install protobuf
2 changes: 2 additions & 0 deletions .github/workflows/cargo.yml
@@ -31,6 +31,7 @@ jobs:
     matrix:
       os:
         - macos-latest-large
+        - windows-latest
     runs-on: ${{ matrix.os }}
     steps:
       - uses: actions/checkout@v4
@@ -53,6 +54,7 @@
     matrix:
       os:
         - macos-latest-large
+        - windows-latest
     runs-on: ${{ matrix.os }}
     steps:
       - uses: actions/checkout@v4
1 change: 1 addition & 0 deletions Cargo.lock

Generated file; diff not rendered by default.

37 changes: 25 additions & 12 deletions README.md
@@ -1,3 +1,12 @@
+# PLEASE READ: This repo is now a public archive
+
+This repo still exists in archived form, feel free to fork any reference
+implementations it still contains.
+
+See Agave, the Solana validator implementation from Anza: https://github.com/anza-xyz/agave
+
+---
+
 <p align="center">
   <a href="https://solana.com">
     <img alt="Solana" src="https://i.imgur.com/IKyzQ6T.png" width="250" />
@@ -26,20 +35,24 @@ $ rustup update
```

When building a specific release branch, you should check the rust version in `ci/rust-version.sh` and if necessary, install that version by running:

```bash
$ rustup install VERSION
```

Note that if this is not the latest rust version on your machine, cargo commands may require an [override](https://rust-lang.github.io/rustup/overrides.html) in order to use the correct version.

On Linux systems you may need to install libssl-dev, pkg-config, zlib1g-dev, protobuf etc.

On Ubuntu:

```bash
$ sudo apt-get update
$ sudo apt-get install libssl-dev libudev-dev pkg-config zlib1g-dev llvm clang cmake make libprotobuf-dev protobuf-compiler
```

On Fedora:

```bash
$ sudo dnf install openssl-devel systemd-devel pkg-config zlib-devel llvm clang cmake make protobuf-devel protobuf-compiler perl-core
```
@@ -71,8 +84,8 @@ Start your own testnet locally, instructions are in the [online docs](https://do

### Accessing the remote development cluster

-* `devnet` - stable public cluster for development accessible via
-devnet.solana.com. Runs 24/7. Learn more about the [public clusters](https://docs.solanalabs.com/clusters)
+- `devnet` - stable public cluster for development accessible via
+  devnet.solana.com. Runs 24/7. Learn more about the [public clusters](https://docs.solanalabs.com/clusters)

# Benchmarking

Expand Down Expand Up @@ -103,10 +116,10 @@ $ open target/cov/lcov-local/index.html
```

Why coverage? While most see coverage as a code quality metric, we see it primarily as a developer
-productivity metric. When a developer makes a change to the codebase, presumably it's a *solution* to
-some problem. Our unit-test suite is how we encode the set of *problems* the codebase solves. Running
-the test suite should indicate that your change didn't *infringe* on anyone else's solutions. Adding a
-test *protects* your solution from future changes. Say you don't understand why a line of code exists,
+productivity metric. When a developer makes a change to the codebase, presumably it's a _solution_ to
+some problem. Our unit-test suite is how we encode the set of _problems_ the codebase solves. Running
+the test suite should indicate that your change didn't _infringe_ on anyone else's solutions. Adding a
+test _protects_ your solution from future changes. Say you don't understand why a line of code exists,
try deleting it and running the unit-tests. The nearest test failure should tell you what problem
was solved by that code. If no test fails, go ahead and submit a Pull Request that asks, "what
problem is solved by this code?" On the other hand, if a test does fail and you can think of a
@@ -138,10 +151,10 @@ reader is or is working on behalf of a Specially Designated National
(SDN) or a person subject to similar blocking or denied party
prohibitions.

The reader should be aware that U.S. export control and sanctions laws prohibit
U.S. persons (and other persons that are subject to such laws) from transacting
with persons in certain countries and territories or that are on the SDN list.
Accordingly, there is a risk to individuals that other persons using any of the
code contained in this repo, or a derivation thereof, may be sanctioned persons
and that transactions with such persons would be a violation of U.S. export
controls and sanctions law.
64 changes: 49 additions & 15 deletions accounts-db/src/accounts_db.rs
@@ -1719,13 +1719,18 @@ struct FlushStats {
     num_flushed: usize,
     num_purged: usize,
     total_size: u64,
+    store_accounts_timing: StoreAccountsTiming,
+    store_accounts_total_us: u64,
 }

 impl FlushStats {
     fn accumulate(&mut self, other: &Self) {
         saturating_add_assign!(self.num_flushed, other.num_flushed);
         saturating_add_assign!(self.num_purged, other.num_purged);
         saturating_add_assign!(self.total_size, other.total_size);
+        self.store_accounts_timing
+            .accumulate(&other.store_accounts_timing);
+        saturating_add_assign!(self.store_accounts_total_us, other.store_accounts_total_us);
     }
 }
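
The new fields ride along with the existing counters: `accumulate` folds one slot's `FlushStats` into a running total, saturating rather than wrapping on overflow. A minimal self-contained sketch of the pattern — the `saturating_add_assign!` macro is reimplemented here for illustration; the real one lives in the Solana codebase:

```rust
// Sketch only: standalone reimplementation of the accumulate pattern above.
macro_rules! saturating_add_assign {
    ($lhs:expr, $rhs:expr) => {
        $lhs = $lhs.saturating_add($rhs)
    };
}

#[derive(Default, Debug)]
struct Stats {
    num_flushed: usize,
    total_size: u64,
}

impl Stats {
    fn accumulate(&mut self, other: &Self) {
        // Saturating adds mean a pathological counter pins at MAX instead of
        // wrapping (release builds) or panicking (debug builds).
        saturating_add_assign!(self.num_flushed, other.num_flushed);
        saturating_add_assign!(self.total_size, other.total_size);
    }
}

fn main() {
    let mut total = Stats::default();
    total.accumulate(&Stats { num_flushed: 3, total_size: u64::MAX });
    total.accumulate(&Stats { num_flushed: 3, total_size: 1 });
    assert_eq!(total.num_flushed, 6);
    assert_eq!(total.total_size, u64::MAX); // saturated, not wrapped
    println!("{total:?}");
}
```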

@@ -6050,7 +6055,7 @@ impl AccountsDb {
         // Note even if force_flush is false, we will still flush all roots <= the
         // given `requested_flush_root`, even if some of the later roots cannot be used for
         // cleaning due to an ongoing scan
-        let (total_new_cleaned_roots, num_cleaned_roots_flushed) = self
+        let (total_new_cleaned_roots, num_cleaned_roots_flushed, mut flush_stats) = self
             .flush_rooted_accounts_cache(
                 requested_flush_root,
                 Some((&mut account_bytes_saved, &mut num_accounts_saved)),
@@ -6062,7 +6067,7 @@
         // banks

         // If 'should_aggressively_flush_cache', then flush the excess ones to storage
-        let (total_new_excess_roots, num_excess_roots_flushed) =
+        let (total_new_excess_roots, num_excess_roots_flushed, flush_stats_aggressively) =
             if self.should_aggressively_flush_cache() {
                 // Start by flushing the roots
                 //
@@ -6071,8 +6076,9 @@
                 // for `should_clean`.
                 self.flush_rooted_accounts_cache(None, None)
             } else {
-                (0, 0)
+                (0, 0, FlushStats::default())
             };
+        flush_stats.accumulate(&flush_stats_aggressively);

         let mut excess_slot_count = 0;
         let mut unflushable_unrooted_slot_count = 0;
@@ -6123,14 +6129,34 @@
             ),
             ("account_bytes_saved", account_bytes_saved, i64),
             ("num_accounts_saved", num_accounts_saved, i64),
+            (
+                "store_accounts_total_us",
+                flush_stats.store_accounts_total_us,
+                i64
+            ),
+            (
+                "update_index_us",
+                flush_stats.store_accounts_timing.update_index_elapsed,
+                i64
+            ),
+            (
+                "store_accounts_elapsed_us",
+                flush_stats.store_accounts_timing.store_accounts_elapsed,
+                i64
+            ),
+            (
+                "handle_reclaims_elapsed_us",
+                flush_stats.store_accounts_timing.handle_reclaims_elapsed,
+                i64
+            ),
         );
     }
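
The added `(name, value, i64)` tuples are fields of a datapoint-style metrics call; the surrounding macro name sits outside the diff context, so treat the following as a toy sketch of the shape only, not the real solana-metrics implementation (which submits points to a metrics agent rather than printing):

```rust
// Toy sketch: each field is a ("name", value, i64) triple, as in the diff
// above; this version just formats and prints the datapoint.
macro_rules! datapoint {
    ($name:expr $(, ($field:expr, $value:expr, i64))* $(,)?) => {{
        let mut line = String::from($name);
        $(
            line.push_str(&format!(" {}={}i", $field, $value as i64));
        )*
        println!("{line}");
    }};
}

fn main() {
    let store_accounts_total_us: u64 = 4_200;
    let update_index_us: u64 = 1_100;
    datapoint!(
        "accounts_db-flush", // hypothetical datapoint name, for illustration
        ("store_accounts_total_us", store_accounts_total_us, i64),
        ("update_index_us", update_index_us, i64),
    );
    // prints: accounts_db-flush store_accounts_total_us=4200i update_index_us=1100i
}
```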

     fn flush_rooted_accounts_cache(
         &self,
         requested_flush_root: Option<Slot>,
         should_clean: Option<(&mut usize, &mut usize)>,
-    ) -> (usize, usize) {
+    ) -> (usize, usize, FlushStats) {
         let max_clean_root = should_clean.as_ref().and_then(|_| {
             // If there is a long running scan going on, this could prevent any cleaning
             // based on updates from slots > `max_clean_root`.
@@ -6161,12 +6187,13 @@
         // Iterate from highest to lowest so that we don't need to flush earlier
         // outdated updates in earlier roots
         let mut num_roots_flushed = 0;
+        let mut flush_stats = FlushStats::default();
         for &root in cached_roots.iter().rev() {
-            if self
-                .flush_slot_cache_with_clean(root, should_flush_f.as_mut(), max_clean_root)
-                .is_some()
+            if let Some(stats) =
+                self.flush_slot_cache_with_clean(root, should_flush_f.as_mut(), max_clean_root)
             {
                 num_roots_flushed += 1;
+                flush_stats.accumulate(&stats);
             }

             // Regardless of whether this slot was *just* flushed from the cache by the above
@@ -6183,7 +6210,7 @@
         // so that clean will actually be able to clean the slots.
         let num_new_roots = cached_roots.len();
         self.accounts_index.add_uncleaned_roots(cached_roots);
-        (num_new_roots, num_roots_flushed)
+        (num_new_roots, num_roots_flushed, flush_stats)
     }

     fn do_flush_slot_cache(
@@ -6246,18 +6273,23 @@ impl AccountsDb {
             &HashSet::default(),
         );

+        let mut store_accounts_timing = StoreAccountsTiming::default();
+        let mut store_accounts_total_us = 0;
         if !is_dead_slot {
             // This ensures that all updates are written to an AppendVec, before any
             // updates to the index happen, so anybody that sees a real entry in the index,
             // will be able to find the account in storage
             let flushed_store = self.create_and_insert_store(slot, total_size, "flush_slot_cache");
-            self.store_accounts_frozen(
-                (slot, &accounts[..]),
-                Some(hashes),
-                &flushed_store,
-                None,
-                StoreReclaims::Default,
-            );
+            let (store_accounts_timing_inner, store_accounts_total_inner_us) = measure_us!(self
+                .store_accounts_frozen(
+                    (slot, &accounts[..]),
+                    Some(hashes),
+                    &flushed_store,
+                    None,
+                    StoreReclaims::Default,
+                ));
+            store_accounts_timing = store_accounts_timing_inner;
+            store_accounts_total_us = store_accounts_total_inner_us;

             // If the above sizing function is correct, just one AppendVec is enough to hold
             // all the data for the slot
@@ -6273,6 +6305,8 @@
             num_flushed,
             num_purged,
             total_size,
+            store_accounts_timing,
+            store_accounts_total_us,
         }
     }
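
`measure_us!` wraps an expression and yields both its value and the elapsed wall-clock time in microseconds, which is what lets the flush path record `store_accounts_total_us` without restructuring the call. A minimal standalone sketch of such a macro; the real one comes from the solana-measure crate:

```rust
// Sketch of a `measure_us!`-style macro: evaluate an expression and return
// (result, elapsed_microseconds). Illustration only.
macro_rules! measure_us {
    ($expr:expr) => {{
        let start = std::time::Instant::now();
        let result = $expr;
        (result, start.elapsed().as_micros() as u64)
    }};
}

fn main() {
    let (sum, elapsed_us) = measure_us!((0u64..1_000_000).sum::<u64>());
    println!("sum={sum}, took {elapsed_us}us");
}
```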

79 changes: 53 additions & 26 deletions accounts-db/src/hardened_unpack.rs
@@ -112,27 +112,26 @@
     // first by ourselves when there are odd paths like including `..` or /
     // for our clearer pattern matching reasoning:
     // https://docs.rs/tar/0.4.26/src/tar/entry.rs.html#371
-    let parts = path.components().map(|p| match p {
-        CurDir => Some("."),
-        Normal(c) => c.to_str(),
-        _ => None, // Prefix (for Windows) and RootDir are forbidden
-    });
+    let parts = path
+        .components()
+        .map(|p| match p {
+            CurDir => Ok("."),
+            Normal(c) => c.to_str().ok_or(()),
+            _ => Err(()), // Prefix (for Windows) and RootDir are forbidden
+        })
+        .collect::<std::result::Result<Vec<_>, _>>();

     // Reject old-style BSD directory entries that aren't explicitly tagged as directories
     let legacy_dir_entry =
         entry.header().as_ustar().is_none() && entry.path_bytes().ends_with(b"/");
     let kind = entry.header().entry_type();
     let reject_legacy_dir_entry = legacy_dir_entry && (kind != Directory);

-    if parts.clone().any(|p| p.is_none()) || reject_legacy_dir_entry {
+    let (Ok(parts), false) = (parts, reject_legacy_dir_entry) else {
         return Err(UnpackError::Archive(format!(
             "invalid path found: {path_str:?}"
         )));
-    }
+    };

-    let parts: Vec<_> = parts.map(|p| p.unwrap()).collect();
-    let account_filename =
-        (parts.len() == 2 && parts[0] == "accounts").then(|| PathBuf::from(parts[1]));
     let unpack_dir = match entry_checker(parts.as_slice(), kind) {
         UnpackPath::Invalid => {
             return Err(UnpackError::Archive(format!(
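
The rewrite replaces the clone-and-scan-for-`None` dance with a single pass: map each component to `Ok`/`Err`, `collect` into `Result<Vec<_>, _>` (which short-circuits on the first `Err`), then destructure with `let ... else`. A standalone sketch of the same idiom, using a hypothetical `split_relative` helper rather than the real entry validation:

```rust
use std::path::{Component, Path};

// Hypothetical helper illustrating the idiom above: collect an iterator of
// Results into Result<Vec<_>, _> (fails fast on the first Err), then use
// let-else to bail out on the error case.
fn split_relative(path: &Path) -> Result<Vec<&str>, String> {
    let parts = path
        .components()
        .map(|c| match c {
            Component::CurDir => Ok("."),
            Component::Normal(os) => os.to_str().ok_or(()),
            _ => Err(()), // RootDir, Prefix, and ParentDir are all rejected
        })
        .collect::<Result<Vec<_>, _>>();

    let Ok(parts) = parts else {
        return Err(format!("invalid path found: {path:?}"));
    };
    Ok(parts)
}

fn main() {
    assert_eq!(
        split_relative(Path::new("accounts/123.456")).unwrap(),
        ["accounts", "123.456"]
    );
    assert!(split_relative(Path::new("/etc/passwd")).is_err());
    assert!(split_relative(Path::new("../escape")).is_err());
}
```
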
@@ -159,30 +158,32 @@
             )?;
         total_count = checked_total_count_increment(total_count, limit_count)?;

-        let target = sanitize_path(&entry.path()?, unpack_dir)?; // ? handles file system errors
-        if target.is_none() {
+        let account_filename = match parts.as_slice() {
+            ["accounts", account_filename] => Some(PathBuf::from(account_filename)),
+            _ => None,
+        };
+        let entry_path = if let Some(account) = account_filename {
+            // Special case account files. We're unpacking an account entry inside one of the
+            // account_paths returned by `entry_checker`. We want to unpack into
+            // account_path/<account> instead of account_path/accounts/<account> so we strip the
+            // accounts/ prefix.
+            sanitize_path(&account, unpack_dir)
+        } else {
+            sanitize_path(&path, unpack_dir)
+        }?; // ? handles file system errors
+        let Some(entry_path) = entry_path else {
             continue; // skip it
-        }
-        let target = target.unwrap();
+        };

-        let unpack = entry.unpack(target);
+        let unpack = entry.unpack(&entry_path);
         check_unpack_result(unpack.map(|_unpack| true)?, path_str)?;

         // Sanitize permissions.
         let mode = match entry.header().entry_type() {
             GNUSparse | Regular => 0o644,
             _ => 0o755,
         };
-        let entry_path_buf = unpack_dir.join(entry.path()?);
-        set_perms(&entry_path_buf, mode)?;
-
-        let entry_path = if let Some(account_filename) = account_filename {
-            let stripped_path = unpack_dir.join(account_filename); // strip away "accounts"
-            fs::rename(&entry_path_buf, &stripped_path)?;
-            stripped_path
-        } else {
-            entry_path_buf
-        };
+        set_perms(&entry_path, mode)?;

         // Process entry after setting permissions
         entry_processor(entry_path);
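
Account entries are recognized with a slice pattern on the already-validated components: exactly `["accounts", <filename>]` unpacks straight to `<account_path>/<filename>` with the `accounts/` prefix stripped, so the old unpack-then-`fs::rename` round trip disappears. A tiny sketch of the dispatch (hypothetical function, illustration only):

```rust
use std::path::PathBuf;

// Hypothetical illustration of the slice-pattern dispatch above: an account
// entry is exactly ["accounts", <filename>]; everything else keeps its path.
fn unpack_target(parts: &[&str], unpack_dir: &str) -> PathBuf {
    match parts {
        // Strip the "accounts/" prefix so the file lands directly in the
        // account directory.
        ["accounts", filename] => PathBuf::from(unpack_dir).join(filename),
        _ => parts.iter().fold(PathBuf::from(unpack_dir), |p, c| p.join(c)),
    }
}

fn main() {
    assert_eq!(
        unpack_target(&["accounts", "123.456"], "/tmp/dest"),
        PathBuf::from("/tmp/dest/123.456")
    );
    assert_eq!(
        unpack_target(&["snapshots", "100", "100"], "/tmp/dest"),
        PathBuf::from("/tmp/dest/snapshots/100/100")
    );
}
```
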
@@ -204,6 +205,9 @@
 #[cfg(windows)]
 fn set_perms(dst: &Path, _mode: u32) -> std::io::Result<()> {
     let mut perm = fs::metadata(dst)?.permissions();
+    // This is OK for Windows, but clippy doesn't realize we're doing this
+    // only on Windows.
+    #[allow(clippy::permissions_set_readonly_false)]
     perm.set_readonly(false);
     fs::set_permissions(dst, perm)
 }
@@ -1029,4 +1033,27 @@ mod tests {
             if message == "too many files in snapshot: 1000000000000"
         );
     }
+
+    #[test]
+    fn test_archive_unpack_account_path() {
+        let mut header = Header::new_gnu();
+        header.set_path("accounts/123.456").unwrap();
+        header.set_size(4);
+        header.set_cksum();
+        let data: &[u8] = &[1, 2, 3, 4];
+
+        let mut archive = Builder::new(Vec::new());
+        archive.append(&header, data).unwrap();
+        let result = with_finalize_and_unpack(archive, |ar, tmp| {
+            unpack_snapshot_with_processors(
+                ar,
+                tmp,
+                &[tmp.join("accounts_dest")],
+                None,
+                |_, _| {},
+                |path| assert_eq!(path, tmp.join("accounts_dest/123.456")),
+            )
+        });
+        assert_matches!(result, Ok(()));
+    }
 }