20 September, 2016

Production Update: Testing (part 2)

A few weeks ago, we published the first post of a series about testing. We talked about how we use functional tests to validate each product that comes off the assembly line. Today we return with the second part: software tests.

We have a strong focus on testing here at SmartHalo, and by using various types of tests, we can ensure that we’re delivering a high-quality product. The ultimate goal of our software tests is to ensure that our code is error-free and easily maintainable over the long term. We plan to continuously improve the app that we’ll deliver to you on launch day (in December); our suite of tests ensures that we can release these improvements without breaking existing functionality.

To achieve that goal, we use a gamut of testing methodologies at SmartHalo. Each methodology looks at the code from a different perspective: the view of a developer, the view of the user-experience (UX) designer, and finally the view of an end user. We’ll explain each perspective in turn.

Developer Tests

Tests that are run from a developer perspective allow our team to work on different parts of the system without introducing errors. Software systems are made up of complex, interacting components. It’s impossible to keep all of these components in your head at once, so we create test cases for each component in isolation, as well as for the interactions between components. When a developer finishes a feature, they run these automated tests to make sure no bugs have been introduced before sharing their code with the rest of the team. These kinds of tests are usually called unit tests or integration tests, and at SmartHalo we have a suite of them for each software component.
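To give a rough idea of what a unit test looks like, here’s a purely illustrative sketch in Python (the `distance_to_turn` helper is invented for this example, not part of SmartHalo’s actual codebase). The test checks one small component in isolation:

```python
import unittest

def distance_to_turn(current_m: float, turn_m: float) -> float:
    """Hypothetical helper: metres remaining before the next turn."""
    if turn_m < current_m:
        raise ValueError("turn is behind the rider")
    return turn_m - current_m

class DistanceToTurnTest(unittest.TestCase):
    def test_returns_remaining_distance(self):
        # 150 m turn, rider at 120 m: 30 m to go
        self.assertEqual(distance_to_turn(120.0, 150.0), 30.0)

    def test_rejects_turns_behind_the_rider(self):
        # A turn the rider has already passed is an error
        with self.assertRaises(ValueError):
            distance_to_turn(150.0, 120.0)

if __name__ == "__main__":
    unittest.main()
```

Because tests like this run in seconds, a developer can run the whole suite every time they finish a change.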

UX Designer Tests

Once a week, we release all of the features that have been completed, and that’s when we run tests from the UX designer’s perspective. The UX designer is the advocate of the end user on the SmartHalo team, and the tests we run at this point ensure that the app does what we expect it to do. These tests are called acceptance tests, and at SmartHalo we run acceptance tests on each of the mobile apps (iOS and Android) as well as the software that powers the hardware device (aka the firmware).
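Acceptance tests read more like a user story than a unit test. The sketch below is hypothetical (the `App` class and its methods are stand-ins invented for illustration, not SmartHalo’s real test suite), but it shows the given/when/then shape these tests tend to take:

```python
# A minimal acceptance-style test, sketched in Python for illustration.
class App:
    """Stand-in for the mobile app under test (hypothetical)."""

    def __init__(self):
        self.destination = None

    def enter_destination(self, address: str):
        if not address.strip():
            raise ValueError("destination cannot be empty")
        self.destination = address

    def navigation_started(self) -> bool:
        return self.destination is not None

def test_user_can_start_navigation():
    # Given a freshly launched app
    app = App()
    # When the user enters a destination
    app.enter_destination("1234 Rue Saint-Denis, Montreal")
    # Then navigation begins
    assert app.navigation_started()

test_user_can_start_navigation()
```

The point is that each test describes a behaviour the UX designer expects, in terms a non-developer can read and sign off on.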

User Tests

The last type of test we run is from the user’s perspective, and it is a different kind of test entirely. Whereas the previous two check for correctness, user tests focus on the quality of the finished product. We’re actually going to do a full post on user testing, so we won’t give too much away, but basically we set up the tests by asking questions like, “Can users successfully enter a destination?” or “Does SmartHalo give the user enough warning before a turn?”

Testing the app with users
Claire, who is in charge of the user tests, came up with a smart solution to record how users interact with our app.

By framing the tests as usability questions, the product owner gains valuable information on how users actually use the app, and can create tasks for the software team to make the app better. This feedback cycle is the basis of the agile software development approach.

Testing a product with as many moving pieces as SmartHalo may sound like an onerous task, but it gets a lot easier when you break down the tests. From the beginning, our focus has always been quality. The tests are a big part of our plan to make a lovable bike accessory (and companion app) for our backers.

In our next update, we’ll talk more in depth about the user tests. Stay tuned!

