
Practical Testing of PHP Web Applications

Testing in an ideal world?

Every client and developer knows how excruciating bugs can be, especially in production environments. Prevention is much better than the cure. Three hours spent preventing a costly bug from ever cropping up easily beats one hour of hasty and risky debugging on live servers.

For many projects, testing is done manually: developers or quality controllers navigate their way through the web application by hand. This is obviously a time-consuming, repetitive process with a lot of room for error. By contrast, programmatic testing is a solid, reliable approach: tests are easy to repeat, and automation makes it impossible to forget to run them before an update goes out. As a pretty sweet bonus, tests can be run endlessly without turning some poor human's brain to mush.

Now is as good a time as any to get serious about testing. Possibilities for testing in PHP have matured, and a versatile mix of testing approaches and solutions makes it possible to test every aspect of an application under development.

Back-end code may be unit tested, application features may be integration tested and front-end functionality can be tested in automated browser sessions. All of this can be done using background processes that can be made an inseparable part of releasing project updates.

Ideally, perhaps, every single bit of a project's code and features would be automatically tested. These tests would be written using a strict (and usually opinionated) standard to ensure optimal meshing of fine-grained unit tests with feature-oriented integration tests to catch errors at any operational level.

Pondering this ideal further, there would be controlled environments that guarantee that tests are run for every change. Tests would abort deployment of code to production environments and warn developers of problems to be addressed before a release will be allowed.

Keeping in mind the rising costs of defects during and after a project's development, such extensive testing would limit project cost inflation. 100% bug-free code may not be possible, but with enough tests to provide full coverage, you might come closer than ever.

This sounds pretty neat! With every bit of a project strictly and fully tested, a superb standard of quality would be achievable.

Or would that be taking a good thing too far? Is this a pipe dream?

Pragmatic testing

Sadly, in the real world of day-to-day development, ideals must often make way for a murky reality, full of deadlines, shifting goals and priorities.

Another challenge is the fact that extensive testing is a hard sell to most clients. A few may be technologically savvy and appreciate that the benefits easily outweigh the extra costs. Most clients desire results as soon as possible, and demand quick fixes whether they are cost-effective in the long term or not.

At Pixelindustries, we feel it is our duty to educate clients about the importance of proper testing. That said, the reality is that negotiations may often simply boil down to "the customer being always right."

For ongoing projects, releases may follow one another in rapid succession and code is subject to frequent, unpredictable change. The resources for keeping tests for every single routine up to date may simply not be available.

In short, most of our projects at Pixelindustries inevitably end up being only partially tested. So how do we make the most of this? How can we best spend the limited time we have available to get the best results?

We begin by prioritizing candidates for testing. Simple, run-of-the-mill operations are tested only if adequate time remains after critical and complex processes have been covered. As a rule, we always test order processes, data imports, and connections to ERP systems and other third parties, while content pages, newsletter signups and contact forms are less likely candidates for automated testing.
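As an illustration of the kind of critical process we prioritize, a sketch of an order-total test in PHPUnit might look as follows. The OrderTotals class, its line-item shape and its cent-based prices are hypothetical, invented for this example rather than taken from an actual project:

```php
<?php

use PHPUnit\Framework\TestCase;

// Hypothetical calculator for illustration; prices are in cents to
// avoid floating-point money arithmetic.
final class OrderTotals
{
    /** @param array<int, array{price: int, qty: int}> $lines */
    public static function calculate(array $lines, int $shippingCents): int
    {
        $subtotal = 0;
        foreach ($lines as $line) {
            $subtotal += $line['price'] * $line['qty'];
        }

        return $subtotal + $shippingCents;
    }
}

final class OrderTotalsTest extends TestCase
{
    public function testTotalIncludesAllLinesAndShipping(): void
    {
        $lines = [
            ['price' => 1999, 'qty' => 2], // 2 x EUR 19.99
            ['price' => 500,  'qty' => 1], // 1 x EUR 5.00
        ];

        // 3998 + 500 + 450 shipping = 4948 cents
        $this->assertSame(4948, OrderTotals::calculate($lines, 450));
    }
}
```

Even a single small test like this guards the most business-critical arithmetic of a shop against regressions.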

Another helpful way to improve long-term test coverage is to leverage code re-use. We develop in-house packages for functionality that is useful across many projects, and we make a point of testing these packages especially thoroughly. This re-use not only standardizes our projects, but also keeps us from having to test similar functionality more than once. It also highlights one of the great advantages of embracing the open source community, which offers many well-written, well-maintained and well-tested packages.

Whenever a bug is discovered that did slip through the cracks, we set up tests to specifically check for it and for any problems with similar causes. This streamlines our problem resolution process and prevents the same errors from recurring unnoticed. Once bugs are squashed, this helps to keep them squashed.
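A regression test of this kind might look like the following sketch. The vatInCents helper and the rounding bug it guards against are invented for illustration; the pattern is simply to pin the exact input that triggered the bug report:

```php
<?php

use PHPUnit\Framework\TestCase;

// Hypothetical helper. The (illustrative) original bug used
// floating-point math, so 21% VAT on 10 cents came out as
// 2.0999... instead of 2; integer math avoids that entirely.
function vatInCents(int $amountCents, int $ratePercent): int
{
    // +50 before the division implements round-half-up.
    return intdiv($amountCents * $ratePercent + 50, 100);
}

final class VatRegressionTest extends TestCase
{
    public function testVatOnTenCentsRoundsToTwoCents(): void
    {
        // Pins the exact input from the original bug report.
        $this->assertSame(2, vatInCents(10, 21));
    }

    public function testVatOnLargeAmountsStaysExact(): void
    {
        $this->assertSame(21000, vatInCents(100000, 21));
    }
}
```

Once such a test is in place, the bug cannot silently return in a later release.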

A final timesaver we employ is writing tests that double as documentation for our code. Descriptive names in tests help fellow developers more quickly grasp a project's structure: features are best described by integration tests, while unit tests of core components shed light on the inner workings of an application.

Needless to say, we aim to test as much as possible. Choices have to be made, however, and these guidelines help us make the right ones.

For most projects, this means we do not adopt full test-driven or behavior-driven development. We generally write code test-first only for complex functionality: price-deduction calculators, record filters, order propagation to payment providers, and so on. As developers, we have to be as flexible as the tools we use.

Leveraging tools

Pragmatic considerations play a role not only in what is being tested, but also in how we do our testing. We have worked with various testing solutions (such as the very opinionated PHPSpec, and Behat with its easy-to-read Gherkin syntax). We ended up settling on PHPUnit, the standard all-rounder for PHP testing. Not because we are particularly impressed by PHPUnit's bare-bones approach or its rather clunky syntax, but because it is a flexible industry standard that can be adapted to set up an effective mix of unit and integration tests. This suits the diversity of our projects, and a reality that often leaves us little time to follow the rules of more opinionated testing solutions.
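For a taste of that bare-bones but flexible style, here is a minimal PHPUnit test using a data provider, which keeps repetitive cases readable. The slugify helper is a hypothetical example, not part of any particular project:

```php
<?php

use PHPUnit\Framework\TestCase;

// Hypothetical helper: turns a page title into a URL-safe slug.
function slugify(string $title): string
{
    $slug = strtolower(trim($title));
    $slug = preg_replace('/[^a-z0-9]+/', '-', $slug);

    return trim($slug, '-');
}

final class SlugifyTest extends TestCase
{
    /** @dataProvider titleProvider */
    public function testTitlesBecomeUrlSafeSlugs(string $title, string $expected): void
    {
        $this->assertSame($expected, slugify($title));
    }

    public static function titleProvider(): array
    {
        // The array keys double as readable case descriptions in test output.
        return [
            'simple title' => ['Hello World', 'hello-world'],
            'punctuation'  => ['PHP: The Good Parts!', 'php-the-good-parts'],
            'whitespace'   => ['  spaced  out  ', 'spaced-out'],
        ];
    }
}
```

Descriptive method and provider-key names like these are also what lets tests serve as documentation, as described above.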

At Pixelindustries, we let our tools help us write tests efficiently. We use PhpStorm, which offers excellent Xdebug integration. Being able to run tests directly from the IDE is nice, but its real power lies in displaying test coverage directly in the code editor.

Besides giving a general idea of the extent to which a project's code is actually tested, code coverage serves as a great tool to detect untested paths of logic. By assessing which lines of code remain untested we can quickly determine whether further tests are necessary, and what they should address.

Looking further

As always, this is clearly not the whole story. There are many more important topics that should be considered.

It is important to note that automated testing does not remove the need for manual QA testing. Developers are prone to miss possible problematic paths or user actions, leaving coded tests for these unwritten and coverage wanting. It is often better to rely on specialized quality control as a fresh pair of eyes and as a last line of defense for catching obscure bugs.

As hinted at briefly above, using automated testing to block the release of code that fails the testing phase is just the start. Deployment pipelines can streamline the release process, enabling continuous integration and deployment: environments may be updated automatically whenever codebase changes that pass all tests are pushed.

This post only discusses quality and bug testing, which leaves out the important concept of performance testing. Using tools like Blackfire and New Relic, this too may be automated or at least made part of a standardized testing phase or deployment process.

Finally, there is a lot more to be said not just about writing tests, but also, and arguably more importantly, about writing the code under test itself. The testability of code is largely determined by architectural choices, such as adherence to the SOLID principles and the design patterns used.


These are all worthy topics that we aim to dedicate future blogs to.