Bringing Testability into Focus

Anand Kumar Keshavan (Founder/Partner, SwanSpeed Consulting)

Amongst the many "non-functional" attributes of software (scalability, availability, extensibility, and so on), testability remains the one that is most difficult to grasp. Scalability can be tested by using appropriate tools. Availability can be measured by plotting downtime statistics. But how does one "test" for testability?

Intuitively, the testability of a piece of code or a large software system describes how readily it can be tested, in some kind of repeatable way, by a group of humans or by other programs, within a reasonable and predictable period of time.

While testability may be vague and hard to define, a lack of testability throws up plenty of symptoms. Here are a few of them:


  • Release cycles are unpredictable: the time from when a set of features has been declared "done" by developers to when those features are deployed cannot be forecast. This "last mile" problem has existed for non-trivial software, irrespective of the tools used or processes followed, since the earliest systems came into being.
  • Fixing one set of defects (or modifying the behaviour of a feature) results in hitherto unknown defects showing up in other parts of the software.
  • Frustration among business stakeholders at the time and cost required to add what appear to be trivial features or modifications.
  • Developers and project managers blaming testers for not being able to find bugs that are subsequently found by users in production. (In general terms, it is the developers who are responsible for every error in the software; blaming testers for not finding bugs is just plain silly. As Dijkstra said, "Testing shows the presence, not the absence, of bugs". That is one of the foundational principles of computer science. But that is a topic for another day.)
  • It takes too long to add a new programmer to the team: steep learning curves, build failures, the usual stuff.
The reasons for poor testability can be many:


  • Not enough time spent defining the architecture. In recent years, even very complex systems have been built using the "build and refactor" model, without laying an adequate foundation. Such models usually end up accumulating a lot of technical debt, as feature addition takes precedence over everything else. "Technical debt", like any other debt, grows at a compounding rate!
  • Choice of programming languages and technology stacks. Dynamically typed languages can produce systems that quickly deteriorate into a practically untestable nightmare if best practices are not defined and followed maniacally by every member of the team.
  • Dissonance between the complexity of the software and the level of abstraction used to define and build it. For example, if custom behaviour is implemented using hard-coded conditions (instead of behavioural abstractions), then one is doomed! (My earlier article, Building a Software Platform, touches upon this.)
  • Concurrency (this is my personal favourite: poorly conceived concurrency almost always results in poor testability). With many microservices-based systems acquiring the characteristics of concurrent systems, this problem is becoming more mainstream.
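The abstraction point above is easiest to see in code. Here is a minimal sketch in Python (the names, such as `compute_discount`, are purely illustrative; the article does not prescribe any particular language or API): custom behaviour hard-coded as conditionals means every new case edits, and forces re-testing of, one monolithic function, whereas a behavioural abstraction lets each variant be registered and tested in isolation.

```python
from typing import Callable, Dict

# Hard-coded customer-specific behaviour: every new customer means
# editing (and re-testing) this one ever-growing function.
def compute_discount_hardcoded(customer: str, amount: float) -> float:
    if customer == "acme":
        return amount * 0.9
    elif customer == "globex":
        return amount * 0.85 if amount > 1000 else amount
    return amount

# Behavioural abstraction: each pricing rule is a small, independently
# testable unit, registered against the customer it applies to.
DiscountRule = Callable[[float], float]
_rules: Dict[str, DiscountRule] = {}

def register_rule(customer: str, rule: DiscountRule) -> None:
    _rules[customer] = rule

def compute_discount(customer: str, amount: float) -> float:
    # Unknown customers fall back to "no discount".
    return _rules.get(customer, lambda a: a)(amount)

register_rule("acme", lambda a: a * 0.9)
register_rule("globex", lambda a: a * 0.85 if a > 1000 else a)
```

With the second design, adding a customer is a new registration plus a new unit test; nothing that already works needs to be touched or re-verified.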
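On the concurrency point, one very common source of untestability is code that talks to the clock directly: tests become slow and timing-dependent. A minimal sketch of the usual remedy (again with hypothetical names of my own choosing, not anything prescribed by this article): inject the time dependency, so that a test can replace real waiting with a deterministic, recordable fake.

```python
import time
from typing import Callable

def fetch_with_retry(fetch: Callable[[], str],
                     retries: int = 3,
                     sleep: Callable[[float], None] = time.sleep) -> str:
    """Call `fetch`, retrying with exponential backoff on failure.

    `sleep` is an injected dependency: production code uses time.sleep,
    while tests pass a fake that records the requested delays.
    """
    last_error: Exception = RuntimeError("retries must be >= 1")
    for attempt in range(retries):
        try:
            return fetch()
        except Exception as e:
            last_error = e
            sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...
    raise last_error

# In a test, both the flaky dependency and the passage of time are fakes,
# so the test is instantaneous and fully deterministic.
calls = []
waits = []

def flaky() -> str:
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = fetch_with_retry(flaky, retries=5, sleep=waits.append)
```

The same idea, injecting the scheduler or clock rather than reaching for it globally, is what makes actor and streaming frameworks with virtual-time test kits so much easier to verify than hand-rolled threading.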
The bad news is that once you build a system (any non-trivial software with many moving parts, not your regular apps and websites) without built-in "testability", it is hard, if not impossible, to bring it in later. In theory, it is possible to do so by successive refactoring over a period of time. In practice, I have not seen a single instance where this has been done. As more features get added, more and more parts of the system become "untestable", and the refactoring effort always lags behind. (This is similar to adding test automation very late in the lifecycle of a project: the automation suite always lags behind the software feature set!)

Poor testability also has a cascading impact on other "abilities", notably "reliability" and "extensibility" (and maybe availability, if run-time errors result in crashes).

So what is the solution? Alas, there is no simple one. The solution, if any, lies in bringing a level of rigour into all aspects of your software development: choice of technologies, solution architectures, coding conventions (not stylistic ones, but conventions about which patterns to use and which to avoid), and so on.



(The author is a programmer, a founder/partner in SwanSpeed Consulting, and a Director at Codewalla.

SwanSpeed Consulting can help you in architecting robust software for a high degree of testability, scalability and maintainability.

Codewalla is a boutique software development company that specialises in building SaaS software products.)


