QA is too often treated like the ugly stepchild of software engineering, particularly in fast-paced, startup-type environments. Look at many teams’ code and you’ll find a hodgepodge of scripted tests, possibly automated (possibly not), with varying levels of coverage and maintenance, driven by individual initiative more than by any universal mandate.

In this age of agile Continuous Integration (CI), automated tests have become a generally accepted best practice, yet in my experience they remain far from a generally practised one.

QA outsourcing has become increasingly popular. It’s easier and cheaper than ever to contract with a human (or several) to manually test your software, or even to use a ‘crowd-testing’ service to mimic automated software tests. Whatever its merits, I believe QA outsourcing dampens adoption of automated testing by making it economically feasible to ‘solve’ the QA problem by throwing people at it. This helps perpetuate software developers’ ‘We’re too busy to write tests’ argument.

Common Types of Software Testing

Before I go further, let’s take a moment to review some definitions:

  • Unit Testing: Tests that exercise atomic units of source code. Proper unit tests are self-contained, with their own test data and no dependency on external services (not even the local file system).
  • Integration Testing: Tests that exercise multiple units or components working together. By definition, integration tests rely on the availability of multiple components (and possibly external services, such as a database).
  • Functional Testing: Often defined as testing the functionality of the application according to product requirements. Functional testing is perhaps more easily understood as ‘business-level testing’: for a web-based application, it means testing at the level of the user, whether the tests are executed by a human or by an automated script (see the sketch below).
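To make these distinctions concrete, here is a minimal sketch using Python’s built-in unittest module. The apply_discount function, PriceRepository, and CheckoutService below are hypothetical stand-ins invented purely for illustration; in a real web application, the functional test would typically drive the HTTP layer or browser rather than calling the service object directly.

    import sqlite3
    import unittest


    def apply_discount(price: float, percent: float) -> float:
        """Pure business logic: the kind of atomic unit a unit test targets."""
        return round(price * (1 - percent / 100), 2)


    class PriceRepository:
        """A thin persistence component, backed here by SQLite."""

        def __init__(self, conn: sqlite3.Connection):
            self.conn = conn
            self.conn.execute("CREATE TABLE IF NOT EXISTS prices (sku TEXT, price REAL)")

        def add(self, sku: str, price: float) -> None:
            self.conn.execute("INSERT INTO prices VALUES (?, ?)", (sku, price))

        def get(self, sku: str) -> float:
            row = self.conn.execute(
                "SELECT price FROM prices WHERE sku = ?", (sku,)
            ).fetchone()
            return row[0]


    class CheckoutService:
        """Business-level entry point that a user-facing flow would call."""

        def __init__(self, repo: PriceRepository):
            self.repo = repo

        def total(self, sku: str, discount_percent: float) -> float:
            return apply_discount(self.repo.get(sku), discount_percent)


    class UnitTests(unittest.TestCase):
        # Unit test: exercises one atomic unit. No database, no file system.
        def test_apply_discount(self):
            self.assertEqual(apply_discount(100.0, 20), 80.0)


    class IntegrationTests(unittest.TestCase):
        # Integration test: multiple components working together, with an
        # (in-memory) database standing in for an external service.
        def test_service_reads_price_through_repository(self):
            repo = PriceRepository(sqlite3.connect(":memory:"))
            repo.add("WIDGET", 100.0)
            service = CheckoutService(repo)
            self.assertEqual(service.total("WIDGET", 20), 80.0)


    class FunctionalTests(unittest.TestCase):
        # Functional test: asserts a business-level outcome a user cares
        # about ('the advertised discount is applied at checkout'), not an
        # internal detail. For a web app this would run through the browser
        # or HTTP layer instead of calling the service in-process.
        def test_checkout_applies_advertised_discount(self):
            repo = PriceRepository(sqlite3.connect(":memory:"))
            repo.add("WIDGET", 100.0)
            self.assertEqual(CheckoutService(repo).total("WIDGET", 20), 80.0)


    if __name__ == "__main__":
        unittest.main()

The specifics don’t matter; the point is that each level asserts at a different boundary, and only the unit test runs with zero external dependencies.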

Now that we’ve gotten that out of the way — what’s your team’s adoption of automated testing?


A. We’ve automated our unit tests

B. We’ve automated our integration tests

C. We’ve automated our functional tests

D. Yes to A, B, and C

E. We have people who test — they’re called ‘QA’


(If you answered D, congratulations! You can stop reading now.)

Not too long ago I was managing a team whose answer was ‘E’ — we had people for that (in this case, an outsourced contractor). This was a capable, high-performing software development team, quick to adopt new technologies and generally moving fast (and breaking things along the way). The team was adding unit tests, but slowly. They didn’t see writing and automating tests as a particularly high priority, especially with a contractor doing a seemingly good job of finding a few new bugs nightly after every code drop. It wasn’t ‘broke’, so they weren’t going to fix it.

Perhaps this sounds familiar.

But it shouldn’t be. To make the case for investing in a robust portfolio of automated tests, I created a deck for the team. Its core argument boils down to this:

Automated tests are indeed a lot like robots on a factory floor. They don’t replace the people who conceive of the product (formulate test plans), create initial versions of the product (manually run test plans), design and build the robots (write automated tests), or do new product R&D (perform exploratory testing). But for the large share of QA that consists of repetitive testing, robots (automated tests) outperform their human counterparts and should be leveraged for unit, integration, and functional tests across all software environments.

Larry Cynkin is a consultant CTO, writing as a contributor for CTO Craft.

If you, or your CTO / technology lead, would benefit from any of the services offered by the CTO Craft community, visit www.ctocraft.com or contact us via email at info@ctocraft.com