How to add a new benchmark
We assume that you already know how to run benchmarks with tachyon-perf. To add a new benchmark, the basic requirement is to extend tachyon.perf.basic.PerfTask and tachyon.perf.basic.TaskContext, then modify the configuration files. Optionally, you can implement tachyon.perf.basic.Supervisible or extend tachyon.perf.basic.PerfTotalReport to support additional tools. We recommend putting all your own classes in a new package such as tachyon.perf.benchmark.<YourTaskType>.
Create a new class that extends tachyon.perf.basic.PerfTask, e.g. tachyon.perf.benchmark.foo.FooTask, which contains the execution logic of your benchmark. There are three abstract methods you should override: setupTask(TaskContext), runTask(TaskContext) and cleanupTask(TaskContext). Their parameter type is the TaskContext that you will implement in the next step.
In setupTask(TaskContext), do the initialization and preparation work for your benchmark. Note that you should initialize all the properties you need in the setupTask(TaskContext) method instead of adding a constructor to the class. You can also access all the configurations of your benchmark through mTaskConf, which contains the properties from your benchmark's xml file.
In runTask(TaskContext), add the execution logic that runs your benchmark and record statistics into the context when needed.
In cleanupTask(TaskContext), do any cleanup work, e.g. cleaning the workspace, or set the final statistics on the context.
Create a new class that extends tachyon.perf.basic.TaskContext, e.g. tachyon.perf.benchmark.foo.FooTaskContext, which contains the statistics of your benchmark. The only abstract method you need to override is writeToFile(String fileName), which outputs the context to a local file. This class is the type of the parameter passed when the task is set up, run and cleaned up. As with FooTask, do the initialization work when setting up the task instead of adding a constructor.
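A matching minimal sketch of FooTaskContext, assuming writeToFile takes a String file name as described above; setRunTimeMs is a made-up setter used by the FooTask sketch to record statistics.

```java
package tachyon.perf.benchmark.foo;

import java.io.BufferedWriter;
import java.io.FileWriter;
import java.io.IOException;

import tachyon.perf.basic.TaskContext;

public class FooTaskContext extends TaskContext {
  private long mRunTimeMs;

  public void setRunTimeMs(long runTimeMs) {
    mRunTimeMs = runTimeMs;
  }

  @Override
  public void writeToFile(String fileName) {
    // Output the recorded statistics to a local file on this node.
    try {
      BufferedWriter writer = new BufferedWriter(new FileWriter(fileName));
      try {
        writer.write("RunTimeMs: " + mRunTimeMs + "\n");
      } finally {
        writer.close();
      }
    } catch (IOException e) {
      e.printStackTrace();
    }
  }
}
```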
After adding the two classes, you need to modify the conf/task-type.xml file to add your own task type, e.g. add the following lines to it:
<type>
  <name>Foo</name>
  <taskClass>tachyon.perf.benchmark.foo.FooTask</taskClass>
  <taskContextClass>tachyon.perf.benchmark.foo.FooTaskContext</taskContextClass>
</type>
Also, you need to create your own benchmark configuration file, conf/testSuite/Foo.xml, and add the properties there. You can take a look at conf/testSuite/Read.xml as an example.
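For illustration only, a hypothetical Foo.xml might look like the snippet below; the property name is made up, and the exact format should follow whatever conf/testSuite/Read.xml uses.

```xml
<configuration>
  <property>
    <name>foo.op.times</name>
    <value>100</value>
  </property>
</configuration>
```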
If you want tachyon.perf.tools.TachyonPerfSupervision to monitor the nodes' status while your benchmark is running, you can implement the interface tachyon.perf.basic.Supervisible in your task class. All three methods, getTfsFailedPath(), getTfsReadyPath() and getTfsSuccessPath(), are designed to return a fixed path to a specified file in Tachyon, which serves as a signal file for TachyonPerfSupervision.
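A sketch of how FooTask could additionally implement Supervisible; the signal-file paths returned below are arbitrary placeholders, not paths required by the framework.

```java
package tachyon.perf.benchmark.foo;

import tachyon.perf.basic.PerfTask;
import tachyon.perf.basic.Supervisible;

public class FooTask extends PerfTask implements Supervisible {
  // ... setupTask / runTask / cleanupTask as in the earlier sketch ...

  @Override
  public String getTfsFailedPath() {
    // Fixed signal file in Tachyon indicating the task failed.
    return "/tachyon-perf-workspace/foo/FAILED";
  }

  @Override
  public String getTfsReadyPath() {
    // Fixed signal file in Tachyon indicating the task is ready to run.
    return "/tachyon-perf-workspace/foo/READY";
  }

  @Override
  public String getTfsSuccessPath() {
    // Fixed signal file in Tachyon indicating the task succeeded.
    return "/tachyon-perf-workspace/foo/SUCCESS";
  }
}
```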
If you want to use bin/tachyon-perf-collector <YourTaskType> to collect and generate a total report for your benchmark, you can create a new class that extends tachyon.perf.basic.PerfTotalReport, e.g. FooTotalReport. Then override the two abstract methods, initialFromTaskContexts(File[]) and writeToFile(String). The former loads the context output files from each node; the latter writes the total report to a local file.
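A minimal sketch of FooTotalReport under the signatures described above; the aggregation simply concatenates the per-node context files and is only meant to show where the loading and writing logic goes.

```java
package tachyon.perf.benchmark.foo;

import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileReader;
import java.io.FileWriter;
import java.io.IOException;

import tachyon.perf.basic.PerfTotalReport;

public class FooTotalReport extends PerfTotalReport {
  private final StringBuilder mSummary = new StringBuilder();

  @Override
  public void initialFromTaskContexts(File[] taskContextFiles) {
    // Load the context output file written by each node and aggregate it.
    for (File contextFile : taskContextFiles) {
      try {
        BufferedReader reader = new BufferedReader(new FileReader(contextFile));
        try {
          String line;
          while ((line = reader.readLine()) != null) {
            mSummary.append(contextFile.getName()).append(": ").append(line).append("\n");
          }
        } finally {
          reader.close();
        }
      } catch (IOException e) {
        e.printStackTrace();
      }
    }
  }

  @Override
  public void writeToFile(String fileName) {
    // Output the total report to a local file.
    try {
      BufferedWriter writer = new BufferedWriter(new FileWriter(fileName));
      try {
        writer.write(mSummary.toString());
      } finally {
        writer.close();
      }
    } catch (IOException e) {
      e.printStackTrace();
    }
  }
}
```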
In addition, you should modify your task type configuration to:
<type>
  <name>Foo</name>
  <taskClass>tachyon.perf.benchmark.foo.FooTask</taskClass>
  <taskContextClass>tachyon.perf.benchmark.foo.FooTaskContext</taskContextClass>
  <totalReportClass>tachyon.perf.benchmark.foo.FooTotalReport</totalReportClass>
</type>