Improve CLI documentation.
salsolatragus committed Apr 20, 2018
1 parent cd74661 commit 74fd680
Showing 1 changed file with 46 additions and 66 deletions.
112 changes: 46 additions & 66 deletions mubench.cli/README.md
@@ -4,52 +4,45 @@

Setting up your own detector for evaluation in MUBench is simple:

1. Implement a [MUBench Runner](#runner) for your detector.
2. Place an executable JAR with your runner as its entry point in `detectors/mydetector/mydetector.jar`.
3. Create [/detectors/mydetector/releases.yml](#list-of-detector-releases), to provide version information.
4. Create [/detectors/mydetector/detector.py](#detector.py) to post-process your detector's findings.
5. [Run Benchmarking Experiments](../mubench.pipeline/) using `mydetector` as the detector id.

If you have a detector set up for running on MUBench, please [contact Sven Amann](http://www.stg.tu-darmstadt.de/staff/sven_amann) to publish it with the benchmark. Feel free to do so as well, if you have questions or require assistance.

## Runner

To interface with MUBench, all you need to do is implement the `MuBenchRunner` interface, which comes with the Maven dependency `de.tu-darmstadt.stg:mubench.cli` via our repository at http://www.st.informatik.tu-darmstadt.de/artifacts/mubench/mvn/ (check [the pom.xml](pom.xml) for the most recent version).

A typical runner looks like this:

```java
public class MyRunner extends MuBenchRunner {
    public static void main(String[] args) {
        new MyRunner().run(args);
    }

    @Override
    protected void detectOnly(DetectorArgs args, DetectorOutput output) throws Exception {
        // Run detector in Experiment 1 configuration...
    }

    @Override
    protected void mineAndDetect(DetectorArgs args, DetectorOutput output) throws Exception {
        // Run detector in Experiment 2/3 configuration...
    }
}
```
1. [Implement a MUBench Runner](#implement-a-mubench-runner) for your detector.
2. Place an executable JAR with your runner at `detectors/<mydetector>/latest/<mydetector>.jar`.
3. Add [detector version and CLI version information](#provide-version-information) to `/detectors/<mydetector>/releases.yml`.
4. [Run benchmarking experiments](../mubench.pipeline/) using `<mydetector>` as the detector id.

If you have a detector set up for running on MUBench, please [contact Sven Amann](http://www.stg.tu-darmstadt.de/staff/sven_amann) to publish it with the benchmark.
Feel free to do so as well, if you have questions or require assistance.


## Implement a MUBench Runner

A MUBench Runner supports two run modes:
To run your detector in MUBench experiments, you need to configure a MUBench Runner that invokes your detector and reports its findings to [the MUBench Pipeline](../mubench.pipeline).
We provide infrastructure for implementing runners in the Maven dependency `de.tu-darmstadt.stg:mubench.cli` via our repository

1. "Detect Only" (Experiment 1), where the detector is provided with a hand-crafted correct usage (a one-method class implementing the correct usage) and some target code to identify deviations from these correct usage in.
2. "Mine and Detect" (Experiment 2/3), where the detector should mine its own patterns in the provided codebase and find violations in that same codebase.
http://www.st.informatik.tu-darmstadt.de/artifacts/mubench/mvn/

In either mode, the `DetectorArgs` provides all input as both Java source code and the corresponding bytecode. Additionally, it provides the classpath of all the code's dependencies.
Check the [MUBench CLI documentation](http://www.st.informatik.tu-darmstadt.de/artifacts/mubench/cli/) for details on how to implement runners and other utilities available through this dependency.

The `DetectorOutput` is essentially a collection to which you add your detector's findings, specifying their file and method location and any other property that may assist manual review of the findings. MUBench expects you to add the findings ordered by the detector's confidence, in descending order. You may also output general statistics about the detector run.
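For illustration, a `mineAndDetect` implementation might look roughly like the following sketch. The `DetectorArgs` accessors, the `output.add(...)` call, and the `MyFinding` type are hypothetical stand-ins, not the actual `mubench.cli` API; check the [MUBench CLI documentation](http://www.st.informatik.tu-darmstadt.de/artifacts/mubench/cli/) for the real method names.

```java
import java.util.Comparator;
import java.util.List;

public class MyRunner extends MuBenchRunner {
    // main() and detectOnly() as in the example above...

    @Override
    protected void mineAndDetect(DetectorArgs args, DetectorOutput output) throws Exception {
        // Hypothetical accessors: DetectorArgs exposes the target's source
        // code, bytecode, and dependency classpath under some such names.
        // runMyDetector() and MyFinding stand in for your detector's own code.
        List<MyFinding> findings =
                runMyDetector(args.getTargetSourcePath(), args.getDependencyClassPath());

        // MUBench expects findings ordered by the detector's confidence, descending.
        findings.sort(Comparator.comparingDouble(MyFinding::getConfidence).reversed());

        for (MyFinding finding : findings) {
            // Hypothetical reporting call: the file and method locate the
            // finding; further properties can assist manual review.
            output.add(finding.getFile(), finding.getMethod());
        }
    }
}
```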
Once you have configured the runner, you need to bundle it together with your detector into an executable Jar. This Jar must have the runner as its entry point.
See the configuration of the `maven-assembly-plugin` in [the `pom.xml` file](./pom.xml) for an example of how we do this for our `DemoDetector`.

## Releases.yml

This file must be at `detectors/mydetector/releases.yml` and contain a list of releases of your detector.
## Provide Version Information

Each detector has a `releases.yml` file that provides version information.
The simplest version of such a file, which suffices to run a detector locally, might look as follows:

```yaml
- cli-version: 0.0.13
  md5: foo
```
* The `cli-version` names the version of the `mubench.cli` dependency used to implement the respective [MUBench Runner](#implement-a-mubench-runner).
* The `md5` might be any string.
When running experiments, [the MUBench Pipeline](../mubench.pipeline) uses this string only to determine whether the detector changed (by checking whether the `md5` changed) and to invalidate existing results accordingly.
Only when the detector is integrated into the MUBench detector repository, such that [the MUBench Pipeline](../mubench.pipeline) can download it automatically, does the `md5` need to be the actual hash of the Jar file, which is used for download verification.

Entries look like this:
It is possible to manage multiple versions of a detector via the `releases.yml` file.
The file might then look as follows:

```yaml
- cli-version: 0.0.10
@@ -59,35 +52,22 @@ Entries look like this:
  tag: my-tag
```

The file must contain at least one entry. By default, MUBench uses the newest version listed. Each entry consists of the following keys:
* `cli-version` - The [MUBench Runner](#runner) version implemented by the respective detector release. This information is used to invoke your detector.
* `md5` (Optional) - The MD5 hash of the `detectors/mydetector/mydetector.jar` file. MUBench will use this value to check the integrity of the detector, if it is loaded from the remote detector registry. The MD5 is mandatory in this case.
* `tag` (Optional) - Used to reference specific detector releases. To request a specific detector release, add `--tag my-tag` to the MUBench command. Note that tags are case insensitive, i.e., `Foo` and `foo` are the same tag.

## Detector.py

To post-process your detector's results, you need to create `detectors/mydetector/mydetector.py` with a class `mydetector`, which implements [`data.detector.Detector`](https://github.com/stg-tud/MUBench/blob/master/mubench.pipeline/data/detector.py) with the method `_specialize_finding(self, findings_path: str, finding: Finding) -> SpecializedFinding`. A specialized finding prepares a finding of a detector for display on the review page, for example, by converting dot graphs to images. The [`data.detector_specialising.specialising_util`](https://github.com/stg-tud/MUBench/blob/master/mubench.pipeline/data/detector_specialising/specialising_util.py) module contains utilities for this purpose.

Here is an example of a basic implementation that performs no post-processing:

```python
from data.detector import Detector
from data.finding import Finding, SpecializedFinding


class MyDetector(Detector):
    def _specialize_finding(self, findings_path: str, finding: Finding) -> SpecializedFinding:
        return SpecializedFinding(finding)
```
* The `cli-version` names the version of the `mubench.cli` dependency used to implement the respective [MUBench Runner](#implement-a-mubench-runner).
* The `md5` is the hash of the respective Jar file.
* The `tag` is an identifier for the detector version (case insensitive).
You may specify this identifier when running experiments, using the `--tag` option.
If the `--tag` option is not specified, [the MUBench Pipeline](../mubench.pipeline) runs the top-most detector version listed in the `releases.yml` file.
If this detector version has no `tag`, `latest` is used as the default.
In any case, [the MUBench Pipeline](../mubench.pipeline) expects the respective Jar file at `detectors/<mydetector>/<tag>/<mydetector>.jar`.
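For instance, a complete `releases.yml` managing two tagged versions might look as follows; the `md5` values and tag names are made-up placeholders for this sketch:

```yaml
- cli-version: 0.0.13
  md5: 0123456789abcdef0123456789abcdef  # placeholder: MD5 of detectors/<mydetector>/v2/<mydetector>.jar
  tag: v2
- cli-version: 0.0.10
  md5: fedcba9876543210fedcba9876543210  # placeholder: MD5 of detectors/<mydetector>/v1/<mydetector>.jar
  tag: v1
```

Running experiments without `--tag` would use the top-most entry (`v2`), while `--tag v1` selects the older version, whose Jar is then expected at `detectors/<mydetector>/v1/<mydetector>.jar`.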

Consider [MuDetect.py](https://github.com/stg-tud/MUBench/blob/master/detectors/MuDetect/MuDetect.py) for a more advanced example with post-processing.
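As a rough illustration of such post-processing, the sketch below renders a dot graph carried by a finding into a PNG image. The property name `pattern`, the dict-style access to `Finding`, the interpretation of `findings_path` as a directory, and the availability of Graphviz's `dot` executable are all assumptions made for this example; how the resulting image is attached to the finding depends on the `SpecializedFinding` API, so consult MuDetect.py for the real mechanics.

```python
import subprocess
from pathlib import Path

from data.detector import Detector
from data.finding import Finding, SpecializedFinding


class MyDetector(Detector):
    def _specialize_finding(self, findings_path: str, finding: Finding) -> SpecializedFinding:
        # Hypothetical property name: assumes the detector reported its
        # findings with a "pattern" key holding a dot graph (Finding is
        # assumed to support dict-style access here).
        dot_graph = finding["pattern"]

        # Render the graph to a PNG next to the findings data, using the
        # Graphviz `dot` executable (assumed to be on the PATH).
        image_path = Path(findings_path) / "pattern.png"
        subprocess.run(["dot", "-Tpng", "-o", str(image_path)],
                       input=dot_graph.encode("utf-8"), check=True)

        # Attaching the image to the finding for the review page depends on
        # the SpecializedFinding API; see MuDetect.py for a real example.
        return SpecializedFinding(finding)
```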

## Debugging

When debugging your detector, there's no need to package and integrate it into MUBench after every change. You can invoke your runner directly from your IDE instead, which is much more convenient. To this end, just follow these steps:
To debug a [MUBench Runner](#implement-a-mubench-runner), it is more convenient to run it directly from an IDE instead of bundling an executable Jar and running it in MUBench after every change.
To do this, proceed as follows:

1. Run `./mubench detect DemoDetector E --only P`, where `E` is [the experiment](../mubench.pipeline#experiments) you want to debug your detector in and `P` is [the project or project version](../data) that you want to debug your detector on, e.g., `aclang`.
2. Once this has finished, open the newest log file in `./logs` and look for a line from `detect.run` saying something like `Executing 'java -jar DemoDetector.jar ...'`.
3. Copy the command-line parameters of this Java invocation (the `...` above).
4. Invoke your runner implementation with these parameters from your IDE.
5. Happy debugging.
1. Run `./mubench run <E> <mydetector> --only <P>`, where `<E>` is [the experiment](../mubench.pipeline/#experiments) you want to debug and `<P>` is [the project or project version](../data/#filtering) you want to debug with, e.g., `aclang`.
2. Abort this run as soon as the detector has started.
3. Open the newest log file in `./logs` and look for a line saying something like `Executing 'java -jar <mydetector>.jar ...'`.
4. Copy the command-line parameters of this Java invocation (the `...` above).
5. Invoke your runner's `main()` method with these parameters from your IDE, for instance via a small launcher class like the one sketched below.
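For the last step, a throwaway launcher class like the following sketch can serve as the IDE entry point; it invents no CLI parameters of its own, you paste in exactly what you copied from the log:

```java
public class DebugMyRunner {
    public static void main(String[] unused) throws Exception {
        // Paste the command-line parameters copied from the log line here,
        // one string per parameter (the `...` part of the java -jar call):
        String[] copiedArgs = {
                /* parameters copied from the log */
        };
        MyRunner.main(copiedArgs);
    }
}
```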
