CORE uses PHPUnit for testing. Installation instructions can be found in the PHPUnit documentation. Alternately, CORE includes PHPUnit as a Composer dependency. Running "composer install" will install phpunit in the vendor/bin directory.

Tests are located in the top-level tests/ directory. The general rules for running tests are defined in phpunit.xml. Simply running phpunit in the top directory will execute all tests. You probably don't want to do that.

The Test Environment

CORE does not mock database calls during testing. This means running the full test suite can and will modify information in the database. Do not run the full test suite on a production system. It also means the respective InstallTests must run before any additional tests. There are a few reasons for this setup:

  • Continuous integration systems make it really, really easy to spin up a new database and initialize it to a predictable state
  • The barrier to entry for test writers is lower when you don't have to mock anything
  • Tests can catch SQL errors if SQLManager is set to throw exceptions on failed queries.

Testing Services

CORE uses [Travis CI](https://travis-ci.org/CORE-POS/IS4C/) as a continuous integration service. Every time a commit is pushed to master on GitHub, Travis CI runs the full test suite and reports any failures. While you can link your own fork to Travis CI, anyone who has commit access to upstream should feel free to push commits for the sake of testing. Breaking master is OK. Alternately you can trigger a new test run by creating a pull request.

CORE uses [Code Climate](https://codeclimate.com/github/CORE-POS/IS4C) to analyze coverage data generated by testing. It uses a straightforward red/green format to highlight which code is tested and which is not. The static analysis on code quality is also interesting but not strictly related to testing.

Test Coverage

Test coverage indicates which lines of code actually run during testing. It's far from a perfect metric - the fact that code runs without errors/warnings/notices does not guarantee it produces the correct result - but it helps highlight areas that need attention. Even so, pushing coverage closer to 100% prevents a whole class of bugs from creeping back in by mistake.

Expanding coverage involves writing a new test to call method(s) that aren't being run at all or altering how the method is called so the path of execution changes. Consider a simple function:

function foo($param)
{
    if ($param === 'foo') {
        return true;
    } else {
        return false;
    }
}

and a simple test

function testFoo()
{
    $this->assertEquals(true, foo('foo'));
}

Coverage here will be incomplete because the test does not exercise every potential execution path. A test with full coverage would be:

function testFoo()
{
    $this->assertEquals(true, foo('foo'));
    $this->assertEquals(false, foo('bar'));
}

This is an extremely contrived example. It's very easy to write complete tests when a function does not depend on anything other than its arguments. Consider something like this:

function statefulFoo()
{
    if ($_GET['foo'] === 'foo') {
        return true;
    } elseif (CoreLocal::get('sessionFoo') === 'foo') {
        return true;
    } else {
        return false;
    }
}

That's more of a mess. The behavior of the function depends on the current state of the overall program. Unfortunately a lot of CORE looks more like the stateful example. Improving coverage in these situations involves either manipulating state prior to each test (where "state" could be configuration values, session values, database records, etc) or refactoring to reduce state dependencies (e.g., creating an object containing all form data early in the process and having subsequent code interact with that object rather than reference $_GET and $_POST).
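
In the meantime, a branch-complete test for statefulFoo() has to set up that state itself before each call. A rough sketch, assuming CoreLocal::set() is available for writing session values:

function testStatefulFoo()
{
    // branch 1: the form value matches
    $_GET['foo'] = 'foo';
    $this->assertEquals(true, statefulFoo());

    // branch 2: the form value misses but the session value matches
    $_GET['foo'] = 'bar';
    CoreLocal::set('sessionFoo', 'foo');
    $this->assertEquals(true, statefulFoo());

    // branch 3: neither matches
    CoreLocal::set('sessionFoo', 'bar');
    $this->assertEquals(false, statefulFoo());
}

Every branch requires its own setup, and none of that setup is visible in the function's signature - which is exactly the bookkeeping that refactoring toward injected dependencies eliminates.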

Notable Tests

Both Lane and Office include tests named InstallTest.php. These tests create database structures and load sample data into them. Running this test first is often desirable because once it succeeds, subsequent tests can issue queries against the sample data. Office's InstallTest.php always uses databases named "unit_test_op" and "unit_test_trans". Lane's InstallTest.php uses the currently configured database (patches welcome!). This means sample data will overwrite whatever's currently in those tables. Generally this is not a big deal since the lane database is just a copy of the authoritative back end database, but fair warning.

Both Lane and Office include tests named PluginsTest.php. Because of the way plugin detection works in both components, .php files in the respective plugin directories may be included at any time. These tests try to verify nothing catastrophic happens when any plugin file is included. Office's version is currently more comprehensive. These tests are often useful in debugging weird behavior or random crashes.
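
The real tests are more thorough, but the basic shape is roughly this sketch (the glob() path is hypothetical; the actual tests resolve the configured plugin directories):

function testPluginsInclude()
{
    // hypothetical path - the real tests know where the plugin directories live
    $files = glob(__DIR__ . '/../modules/plugins2.0/*/*.php');
    foreach ($files as $file) {
        include_once($file);
        // getting past the include without a fatal error or parse error is the
        // point; the assertion just records which file was loaded
        $this->assertTrue(true, 'Included ' . $file);
    }
}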

Office's PagesTest.php attempts to automatically cover reports as they're structurally all very similar. Some things to be aware of:

  • form_content must return a string
  • fetch_report_data must return an array
  • report_description_content must return an array
  • Unless you need a write operation (usually for precalculations or temporary tables), use the class' $connection rather than calling FannieDB. The connection object injected by the test is set to throw exceptions on failed queries and thus catch potential errors.

During testing the class' $form object will contain dummy values for all fields specified in the report's $required_fields. Fields whose names contain "date" will be given YYYY-MM-DD formatted dummy values. $_GET/$_POST/$_REQUEST won't actually exist, so if you need to call FormLib for additional values, be sure to specify a second parameter as the default value.
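
Putting those rules together, a report that cooperates with PagesTest.php looks roughly like the sketch below. The class name, query, and the optional "super" field are made up for illustration; the FormLib call is only there to show the default-value second parameter mentioned above.

class ExampleDeptReport extends FannieReportPage
{
    protected $required_fields = array('date1', 'date2');
    protected $report_headers = array('Dept#', 'Name');

    public function report_description_content()
    {
        // must return an array
        return array('Example report: departments');
    }

    public function fetch_report_data()
    {
        // read-only queries go through the injected connection; during
        // testing it is configured to throw exceptions on failed queries
        $dbc = $this->connection;

        // an optional field that isn't in $required_fields needs a default
        // because $_GET/$_POST/$_REQUEST don't exist during testing
        $super = FormLib::get('super', -1);

        $prep = $dbc->prepare('SELECT dept_no, dept_name FROM departments ORDER BY dept_no');
        $res = $dbc->execute($prep);
        $data = array();
        while ($row = $dbc->fetchRow($res)) {
            $data[] = array($row['dept_no'], $row['dept_name']);
        }

        // must return an array
        return $data;
    }

    public function form_content()
    {
        // must return a string
        return '<form method="get">date range inputs go here</form>';
    }
}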

Comprehensive Example

A bug cropped up here when some bozo (@gohanman) forgot to initialize a database connection object. This results in a fatal error that crashes the whole page. This is bad. ItemModule::saveFormData() should be tested to make sure it doesn't blow up - and making sure it actually saves too isn't a terrible idea. The catch is this method relies on global state: a database record and submitted form values. To make it testable, we need to control these external pieces. One potentially helpful pattern is dependency injection.

Starting with how saveFormData() is normally called:

$form = new \COREPOS\common\mvc\FormValueContainer();
foreach ($FANNIE_PRODUCT_MODULES as $class => $params) {
    $mod = new $class();
    /** start dependency injection **/
    $mod->setConnection($this->connection);
    $mod->setConfig($this->config);
    $mod->setForm($form);
    /** end dependency injection **/
    $mod->saveFormData($upc);
}

All the information saveFormData() needs is injected into the object. When it runs, saveFormData() only needs to reference its own object. Crucially, we can inject specific values before running saveFormData() and then examine the result afterwards.

Now calling the same saveFormData() method from a unit test:

public function testItemFlags()
{
    $config = FannieConfig::factory();
    $connection = FannieDB::get($config->OP_DB);

    /**
    Setup/verify preconditions for the test
    */
    $upc = BarcodeLib::padUPC('16');
    $product = new ProductsModel($connection);
    $product->upc($upc);
    $product->load();
    if ($product->numflag() != 0) {
        $product->numflag(0);
    }
    $product->save();

    /**
    Simulate form input
    */
    $form = new \COREPOS\common\mvc\ValueContainer();
    $form->flags = array(1, 3); // 0b101 == 5

    $module = new ItemFlagsModule();
    $module->setConnection($connection);
    $module->setConfig($config);
    $module->setForm($form);

    $saved = $module->saveFormData($upc);
    $this->assertEquals(true, $saved, 'Saving item flags failed');

    $product->reset();
    $product->upc($upc);
    $product->load();
    $this->assertEquals(5, $product->numflag(), 'Wrong numflag value ' . $product->numflag());

    $product->numflag(0);
    $product->save();
}

The test chooses a product record with a well-known numflag value, feeds simulated form data into the saveFormData() method, then checks the product record to verify it was updated. There's no need to specifically check for a fatal error; that will immediately & automatically cause the test to fail. Note there's still a real database connection. Some would probably contend that a completely decoupled unit test should use a mock database object that just simulates query results. I think CORE is so database-heavy that issuing actual queries against actual databases makes for more robust tests. The InstallTests (see above) are specifically designed to build a testing database and load it with predictable sample data.

ItemFlagsModule.php itself has to be refactored a bit to actually use the injected objects:

    public function saveFormData($upc)
    {
        $flags = $this->form->flags;
        if (!is_array($flags)) {
            return false;
        }
        $numflag = 0;
        // collapse the checked flag numbers into a single bitmask
        foreach ($flags as $f) {
            if ($f != (int)$f) {
                continue;
            }
            $numflag = $numflag | (1 << ($f-1));
        }
        $dbc = $this->connection;
        $model = new ProductsModel($dbc);
        $model->upc($upc);
        $model->numflag($numflag);
        $saved = $model->save();

        return $saved ? true : false;
    }

And finally ItemModule needs the appropriate injection members and methods.

    protected $config;
    protected $connection;
    protected $form;

    public function setConfig(\FannieConfig $c)
    {
        $this->config = $c; 
    }

    public function setForm(\COREPOS\common\mvc\ValueContainer $f)
    {
        $this->form = $f;
    }

    public function setConnection(\SQLManager $s)
    {
        $this->connection = $s;
    }

This concludes the example.
