
Console logs should ideally print _during_ fuzzing, not just after testing is complete #3844

Closed
webthethird opened this issue Dec 6, 2022 · 10 comments
Labels
A-testing Area: testing C-forge Command: forge T-feature Type: feature

Comments

@webthethird

Component

Forge

Describe the feature you would like

It would be ideal if console logs printed during fuzzing, rather than only after testing is complete. Fuzzing is often a long process, and I'd like some insight into what it is doing while it is running, especially since nothing else is displayed while fuzzing, not even the current number of runs.


@webthethird webthethird added the T-feature Type: feature label Dec 6, 2022
@webthethird
Author

I realize a workaround for this is to write my logs to a .log file using vm.writeLine, but ideally file system permissions shouldn't be necessary to get real-time insight into a fuzzing campaign as it runs.

@mds1
Collaborator

mds1 commented Dec 7, 2022

@webthethird wdyt of closing this as a duplicate of #585? It sounds like you just want confirmation that things are still running and that the fuzzer isn't hanging. Or do you actually want to see the values being used during the fuzzing? If so, I'd be curious to hear more about why.

@webthethird
Author

webthethird commented Dec 7, 2022

@mds1 I'd actually like to see the values being used during fuzzing, since I'd like to make sure my tests don't have any bugs that would cause them to either pass when they should be failing or take longer than they should.

For example, by implementing a workaround of writing the logs to a .log file using vm.writeLine I was able to a) realize mid-fuzz that I was using vm.store on the wrong slot and b) discover a potential bug in Forge (see issue #3848). In the first instance I was able to quickly stop fuzzing, fix my typo and re-run the fuzzer to get the results I was expecting, and the merits of the second point need no explanation!

@Azleal
Contributor

Azleal commented Jul 3, 2024

Hi @grandizzy, please give some advice on this topic. I'd like to give it another try once the approach is fully discussed. I also have my own idea: how about adding a flag that lets the user decide whether console.log output in fuzz tests should be printed? The default could be false, so the change would not affect any existing fuzz tests.

@grandizzy
Collaborator

hey @Azleal, yeah, I think a config for fuzz / invariant tests printing console logs as they're executed makes sense (cc @mds1). It would also be neat to print them not only at the end but as they go (as requested by this issue), that is, after each call, e.g. for invariants:

// Execute call from the randomly generated sequence and commit state changes.
let call_result = current_run
    .executor
    .transact_raw(
        tx.sender,
        tx.call_details.target,
        tx.call_details.calldata.clone(),
        U256::ZERO,
    )
    .map_err(|e| {
        TestCaseError::fail(format!("Could not make raw evm call: {e}"))
    })?;
// Collect coverage from last fuzzed call.
invariant_test.merge_coverage(call_result.coverage.clone());

and for regular fuzz tests:

let mut call = self
    .executor
    .call_raw(self.sender, address, calldata.clone(), U256::ZERO)
    .map_err(|_| TestCaseError::fail(FuzzError::FailedContractCall))?;

A side effect would be that when running with both the show-logs flag and show-progress enabled, the progress display will become flaky, but I guess that's acceptable.
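
Something along these lines could work right after each of those calls (an untested sketch: the show_execution_logs flag follows the config idea above, and the decode_console_logs import path is an assumption):

// Sketch: flush decoded console logs right after each fuzzed call instead of
// buffering them until the run completes. Assumes a hypothetical
// `show_execution_logs` flag threaded through from the fuzz config.
use foundry_evm::decode::decode_console_logs; // foundry's console-log decoder

if config.show_execution_logs {
    for line in decode_console_logs(&call_result.logs) {
        println!("  {line}");
    }
}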

@grandizzy
Collaborator

@Azleal lmk if you're already looking into this, otherwise I'll pick it up. thank you!

@Azleal
Contributor

Azleal commented Jul 5, 2024

hi @grandizzy, sorry for my delayed reply. Yes, I'm already looking into this. I tried to solve this issue once before, so I'd like to give it another try. But I'm new to adding a new config, so I'd appreciate it if you could give some suggestions or code references about adding and using a config in foundry. Thanks.

@grandizzy
Collaborator

@Azleal no worries, thanks for looking into it! You would need to add a bool like show_execution_logs in FuzzConfig (default false) here:

/// Contains configuration for fuzz testing
#[derive(Clone, Debug, PartialEq, Eq, Serialize, Deserialize)]
pub struct FuzzConfig {
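
For illustration, the new field could look like this (a sketch only; the field name follows the show_execution_logs suggestion above, and the final name and defaulting details may differ):

/// Contains configuration for fuzz testing
#[derive(Clone, Debug, PartialEq, Eq, Serialize, Deserialize)]
pub struct FuzzConfig {
    // ...existing fields (runs, max_test_rejects, dictionary, ...)...

    /// Print `console.log` output as fuzzed calls execute instead of only
    /// after the test completes. Defaults to false so existing fuzz tests
    /// keep their current behavior.
    pub show_execution_logs: bool,
}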

Additionally you can make it inline / per-test configurable by merging it here (I think this would be nice):
fn try_merge(&self, configs: &[String]) -> Result<Option<Self>, InlineConfigParserError> {
    let overrides: Vec<(String, String)> = Self::get_config_overrides(configs);
    if overrides.is_empty() {
        return Ok(None)
    }
    let mut conf_clone = self.clone();
    for pair in overrides {
        let key = pair.0;
        let value = pair.1;
        match key.as_str() {
            "runs" => conf_clone.runs = parse_config_u32(key, value)?,
            "max-test-rejects" => conf_clone.max_test_rejects = parse_config_u32(key, value)?,
            "dictionary-weight" => {
                conf_clone.dictionary.dictionary_weight = parse_config_u32(key, value)?
            }
            "failure-persist-file" => conf_clone.failure_persist_file = Some(value),
            _ => Err(InlineConfigParserError::InvalidConfigProperty(key))?,
        }
    }
    Ok(Some(conf_clone))
}
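
The per-test override would then be one more match arm in that function, e.g. (again a sketch; parse_config_bool is assumed to exist alongside the parse_config_u32 helper used above):

match key.as_str() {
    // ...existing keys ("runs", "max-test-rejects", ...)...
    // Hypothetical new key mirroring the config field above.
    "show-execution-logs" => {
        conf_clone.show_execution_logs = parse_config_bool(key, value)?
    }
    _ => Err(InlineConfigParserError::InvalidConfigProperty(key))?,
}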

You will need to update test Configs here

// explicitly set all values
let input = Config {

and here

and I think that should be it

@Azleal
Contributor

Azleal commented Jul 5, 2024

thank you @grandizzy, lemme check it and I'll let you know if there are any updates.

@grandizzy
Collaborator

Going to close this one, as an impl was added in #8387. This is a little bit different from printing during fuzzing: logs are printed when the test ends, to make sure they're not mixed (to view test progress, the --show-progress option can be used).
