How to properly write "unit" tests for analyzers? #7441
-
I have been tackling Objective-C analysis for a while, and this is currently culminating in a dedicated plugin for anything related to it. One problem I'm currently facing is that I don't know how to properly write tests for our analyzers.
My jury-rigged method would be to call the headless analyzer with a preScript to configure the analysis and a postScript to check the results, but this sounds like something common enough that there should be a proper way to integrate it into the JUnit tests.

At the core I want tests for specific analyzers: that they return the right result after being run on a specific program that is already in a certain state. The specific state could come either from a stored project or from previous analyzer tests. But this sounds a lot more complicated, so I'm fine with just automating the workflow described above and paying for it with test latency and compute.

What is the canonical way to achieve the workflow described above? Or is there another way to achieve something similar which should be used instead? (I am also curious how to write all other kinds of more sophisticated tests for Ghidra, like proper integration tests, so if the answer involves a digression into the overall testing approach in Ghidra, I'm interested. The most pressing problem, though, is that I want to refactor code without worrying about accidentally breaking my analyzers because I forgot a corner case.)
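For concreteness, the postScript half of that workflow might look something like the sketch below; the script name and the expected symbol are made-up placeholders, not anything from an actual plugin:

```java
// Sketch of a postScript that asserts on results after headless analysis.
// The symbol name checked here is a placeholder.
import ghidra.app.script.GhidraScript;

public class CheckObjcAnalyzerResults extends GhidraScript {
	@Override
	public void run() throws Exception {
		if (!currentProgram.getSymbolTable().getSymbols("_objc_msgSend").hasNext()) {
			throw new AssertionError("expected symbol was not created by the analyzer");
		}
	}
}
```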
-
Oof.

I have very strong opinions on this subject. Rather than waste time expressing those fully here, I will try to give concise answers and only hint at my disdain.

We have never established a good framework for testing analyzers. All previous tests that I have seen have been 'tripwire' tests that check results in a coarse-grained fashion, such as 'assert that there are 5215 symbols' after the test runs. I'm sure you can imagine the uselessness of such a test.

All analyzer tests that I have seen have been full integration tests that use the GUI environment.
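Schematically, the entire verification in such a tripwire test is one brittle global count (`runFullAnalysis` is a stand-in for whatever drives auto-analysis in the test):

```java
// A 'tripwire' test: run everything, then assert a single global number.
@Test
public void testAnalyzedProgram() throws Exception {
	runFullAnalysis(); // stand-in for the code that runs auto-analysis
	assertEquals(5215, program.getSymbolTable().getNumSymbols());
}
```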
Yes, this is quite silly, although I think it works as you would expect. The abstract tests each configure an environment that makes sense for them. The headless test will configure a headless environment when it runs. The only oddness that I see here is what you mention: that there are Swing-related methods in the hierarchy. Ideally, we could refactor the design in such a way as to remove this. This hierarchy has changed countless times over the years. It is better than ever, but still not correct from a clean OO perspective.

I think that using a real headless environment, as you described above, with pre and post scripts, sounds like extra complication that you should not need. Of course, you should probably do whatever is easiest for you to build and maintain.

For analyzer testing and full integration testing, this is the state of the art for Ghidra:
It looks like we have no good examples in the repo of full integration analysis tests. The basic setup has code that looks like this. This shows how to toggle some analyzers and then run analysis in the tool:

```java
private void analyze() {

	// turn off some analyzers
	setAnalysisOptions("Stack");
	setAnalysisOptions("Embedded Media");
	setAnalysisOptions("DWARF");
	setAnalysisOptions("Create Address Tables");

	AutoAnalysisManager analysisMgr = AutoAnalysisManager.getAnalysisManager(program);
	analysisMgr.reAnalyzeAll(null);
	Command cmd = new AnalysisBackgroundCommand(analysisMgr, false);
	tool.execute(cmd, program);
	waitForBusyTool(tool);
}

// disables the named analyzer by setting its analysis option to false
protected void setAnalysisOptions(String optionName) {
	int txId = program.startTransaction("Analyze");
	Options analysisOptions = program.getOptions(Program.ANALYSIS_PROPERTIES);
	analysisOptions.setBoolean(optionName, false);
	program.endTransaction(txId, true);
}
```

OR

This shows how to use the builder to perform analysis, which is not really full integration:

```java
private void setupFooProgram() throws Exception {
	builder = new ProgramBuilder("noExit", ProgramBuilder._X64);

	builder.setBytes("0x100001c70",
		"55 48 89 e5 48 83 ec 10 89 7d fc 8b 7d fc e8 c5 00 00 00 66 90 66 90 66 67 67 c3 00 00 00 00 00");
	builder.disassemble("0x100001c70", 27, false);
	builder.createEmptyFunction("noReturn", "0x100001c70", 1, DataType.DEFAULT);

	builder.setBytes("100001d48", "ff 25 c2 02 00 00");
	builder.disassemble("100001d48", 6, false);
	Function exit = builder.createEmptyFunction("exit", "100001d48", 1, DataType.DEFAULT);

	builder.analyze();
	program = builder.getProgram();
}
```

For a more unit-y style test that just calls the analyzer directly, you can look at the
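As a rough sketch of that direct-call style (the analyzer class `MyObjcAnalyzer` and the test base class here are hypothetical, and it reuses the builder setup above), such a test might look like:

```java
import static org.junit.Assert.assertNotNull;

import org.junit.Test;

import ghidra.app.util.importer.MessageLog;
import ghidra.util.task.TaskMonitor;

public class MyObjcAnalyzerTest extends AbstractGhidraHeadlessIntegrationTest {

	@Test
	public void testAddedCreatesExpectedResults() throws Exception {
		setupFooProgram(); // the builder setup shown above

		// MyObjcAnalyzer is a placeholder for the analyzer under test
		MyObjcAnalyzer analyzer = new MyObjcAnalyzer();
		int txId = program.startTransaction("Test");
		try {
			// Analyzer.added(Program, AddressSetView, TaskMonitor, MessageLog)
			analyzer.added(program, program.getMemory(), TaskMonitor.DUMMY, new MessageLog());
		}
		finally {
			program.endTransaction(txId, true);
		}

		// placeholder assertion; in a real test, check exactly what the
		// analyzer is responsible for producing
		assertNotNull(program.getListing().getFunctionAt(builder.addr("0x100001c70")));
	}
}
```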
-
This is probably the best way to do it. This is how the
This would indeed be a great deal of effort if you were seeking full coverage of edge cases.
We do this by using a repository of programs that we have already imported by hand and saved as gzf files. We then use the
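As a sketch of that pattern (assuming the loading goes through `TestEnv.getProgram`, and with a placeholder program name), the setup might look like:

```java
// Sketch: open a program that was hand-imported earlier and saved as a gzf
// in the test data repository. The program name is a placeholder.
@Before
public void setUp() throws Exception {
	env = new TestEnv();
	program = env.getProgram("WinHelloCPP.exe");
}
```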
You should be able to create this, for example:

```java
@Before
public void setUp() throws Exception {
	env = new TestEnv();
	tool = env.getTool();
}
```

The exception you listed above is because you do not have the default test tool
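For completeness, Ghidra tests normally pair that setup with a teardown that disposes the environment, roughly:

```java
@After
public void tearDown() {
	env.dispose(); // releases the tool, project, and any opened programs
}
```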