Testing in Action: Ensuring Software Works in the Real World

We are becoming progressively more reliant on technology in our day-to-day lives, so software testing plays an increasingly important role in the development of applications. If software is unreliable, users end up with more problems than solutions.

At Smartrak, we know that our software is incorporated into time-sensitive applications and relied upon as a tool to manage assets in high-risk environments. This responsibility to provide reliable, robustly tested software is taken seriously: people's lives depend on it.

Setting Standards

Smartrak uses the testing pyramid to design our test plans. The testing pyramid is widely accepted as the de facto standard for the ratio of different types of software testing that should be performed on an application or service.

As we move up the pyramid, the number of tests at each layer decreases, while the cost - in terms of time and effort to design, run, and maintain the tests - increases. This trade-off between test cost and benefit forces Smartrak's Engineering and Quality Assurance teams to be selective about what we test.

This article explores the testing pyramid in more detail, providing real-world scenarios that demonstrate the scope of tests at each level and explaining how Smartrak uses the pyramid to design software solutions and implement tests that ensure those solutions satisfy the initial brief.

Unit Tests

Unit tests are used to test individual blocks (units) of functionality. They provide peace of mind around the building blocks of a system, can be easily automated, and are relatively inexpensive to run and maintain.

Because of these qualities, unit tests serve as the foundation of the testing pyramid and typically number in the thousands or tens of thousands, depending on the size of the application.

A real-world scenario that could be covered by a unit test is checking that a car door can be unlocked with your car key, but not with your house key, your garage remote, or your neighbour’s key (they happen to own the same model of car). Another example is a discount applied at a supermarket checkout for bringing your own bag: the discount should be applied if you bring a bag sold by that chain, but not if it’s from a different supermarket.

Smartrak uses unit tests to check the correctness of our business logic, ensure edge cases and invalid inputs are handled appropriately, and prevent functionality regression when making changes to our software.
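The supermarket-bag scenario above can be sketched as a unit test. This is a minimal illustration, not Smartrak's actual code; the function name `bag_discount` and the store names are hypothetical:

```python
def bag_discount(bag_brand, store_brand, discount=0.10):
    """Return the checkout discount for a reusable bag.

    The discount applies only when the bag was sold by the same
    supermarket chain the customer is shopping at.
    """
    if not bag_brand:  # no bag brought: no discount
        return 0.0
    return discount if bag_brand == store_brand else 0.0


# One behaviour per test, including the edge cases
def test_matching_bag_gets_discount():
    assert bag_discount("FreshCo", "FreshCo") == 0.10

def test_rival_bag_gets_no_discount():
    assert bag_discount("RivalMart", "FreshCo") == 0.0

def test_no_bag_gets_no_discount():
    assert bag_discount("", "FreshCo") == 0.0
```

Each test exercises one block of logic in isolation, which is what makes unit tests cheap to run and easy to automate.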

Service Tests

Service-level tests (which we call integration tests at Smartrak) verify the interaction between different components. Because they are more expensive to write, run, and maintain than unit tests, Smartrak is selective about which parts of the system they cover, and their numbers are correspondingly lower - typically in the thousands.

A vehicle-based scenario could be: when throttle input is received, and the vehicle is in gear with the handbrake off, the vehicle moves. Another example, familiar from childhood, is a baking soda and vinegar volcano: the test would verify that combining baking soda with vinegar produces a foamy reaction.

At Smartrak, integration tests are used to ensure that:

  • Database access methods work
  • API and web service endpoints return the expected result
  • Integration points with external services function correctly
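The first bullet - checking that database access methods work - can be sketched as an integration test that exercises real SQL against a throwaway in-memory database. This is an illustrative sketch only; `VehicleStore` is a hypothetical data-access class, not part of Smartrak's codebase:

```python
import sqlite3


class VehicleStore:
    """Minimal data-access layer (hypothetical, for illustration)."""

    def __init__(self, conn):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS vehicles "
            "(id INTEGER PRIMARY KEY, rego TEXT)")

    def add(self, rego):
        cur = self.conn.execute(
            "INSERT INTO vehicles (rego) VALUES (?)", (rego,))
        self.conn.commit()
        return cur.lastrowid

    def find(self, vehicle_id):
        row = self.conn.execute(
            "SELECT rego FROM vehicles WHERE id = ?",
            (vehicle_id,)).fetchone()
        return row[0] if row else None


# Integration test: the code under test and the database
# must cooperate correctly for this round trip to succeed.
def test_round_trip():
    store = VehicleStore(sqlite3.connect(":memory:"))
    vid = store.add("ABC123")
    assert store.find(vid) == "ABC123"
    assert store.find(999) is None
```

Unlike a unit test, this test fails if either the query logic or the schema is wrong, which is exactly the kind of interaction integration tests exist to catch.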

When an issue is encountered in our service layer, a failing test is first created to reproduce the scenario. Once the fix is applied, the test’s expectation is updated to assert the correct behaviour, so the test passes and remains in the suite - and if the issue ever recurs, we are notified automatically.
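A regression test of this kind might look as follows. The scenario and the name `parse_odometer` are invented for illustration; the point is that the test reproduces the exact input from the bug report and stays in the suite permanently:

```python
def parse_odometer(raw):
    """Parse an odometer reading reported by a tracking unit.

    Hypothetical bug report: units sometimes send the value with a
    trailing newline, which previously raised ValueError. The fix
    strips surrounding whitespace before converting.
    """
    return int(raw.strip())


# Regression test: fails before the fix, passes after, and alerts
# us automatically if the problem ever comes back.
def test_trailing_newline_is_handled():
    assert parse_odometer("12345\n") == 12345
```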

User Interface Tests

User Interface (UI) tests verify that the correct interface is displayed and that the user can successfully interact with the system. UI tests focus on the visual outcome of an action, confirming that users receive appropriate feedback from the system. They are time-consuming and expensive to run, so UI testing is used selectively.

At Smartrak, all users need to be able to log in before they can realise the value of our system. Therefore, we have UI tests covering different login scenarios across our systems. We also create personas based on user levels to determine which functionality is critical for a particular role. For example:

  • A map user must see events appear promptly after they are created.
  • A client administrator must be able to manage users and permissions.

A real-world example is a driver pressing the hazard lights button: the test verifies that a) the hazard lights begin to flash, and b) the indicator lights on the instrument cluster illuminate.
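In practice, UI tests like this drive a real browser or device; as a toy model (hypothetical classes, no real UI framework), the shape of the assertion is always the same - perform an action, then check the visible feedback:

```python
class HazardLights:
    """Toy model of the hazard-light control and its feedback."""

    def __init__(self):
        self.flashing = False
        self.cluster_indicator_lit = False

    def press_button(self):
        # Toggle the lights; the instrument-cluster
        # indicator mirrors the lights' state.
        self.flashing = not self.flashing
        self.cluster_indicator_lit = self.flashing


# UI-style test: action first, then assert the user-visible feedback
def test_hazard_button_gives_visible_feedback():
    car = HazardLights()
    car.press_button()
    assert car.flashing and car.cluster_indicator_lit
    car.press_button()
    assert not car.flashing and not car.cluster_indicator_lit
```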

Manual Testing

Manual testing is performed during the development phase by the team writing the software. Once development is complete, the software is handed off to a separate Quality Assurance (QA) team, which identifies any remaining bugs or user experience issues before the software is shipped and ensures the brief is met.

Manual testing is typically performed in scenarios such as major refactoring (reworking code while maintaining behaviour), professional services work, or testing critical functionality (satellite communications, emergency alerts, etc.).

Although manual testing is time-consuming and expensive, it adds value by uncovering user experience or workflow issues that automated tests might miss. The QA team applies a critical perspective, whereas engineers may subconsciously assume their own code is perfect.

Bringing the Testing Methodology Together

Unit tests run automatically on engineers’ machines during development. Using intelligent tools, only tests affected by changes to the relevant module are executed. Additionally, whenever engineers submit new code - whether features or bug fixes - all unit and integration tests are run on our build servers. To avoid resource contention, integration tests are executed twice a day.

For complex changes or large features, the QA team performs manual testing inside a test environment running against a snapshot of live data before changes are deployed to production.

Summary

Smartrak’s layered approach to software testing - combining unit, integration, UI, and manual testing - ensures our applications are reliable, robust, and safe. This methodology protects our users and helps us deliver software that meets real-world demands.
