Always One More, You’re Never Satisfied (With Your Automation)

1986! The year I graduated high school! That should tell you something right there.

That was also the year that Van Halen released their first post-David Lee Roth album, 5150. On the strength of its first single, Why Can’t This Be Love, the album went to #1 on the U.S. Billboard charts. The album produced a few more singles, but I was always drawn to the title track, 5150. I love the guitar work (of course), the interplay with the drum fills, and, really, the drums in general. I just love the song.

So, as usual, Paul, what does this have to do with testing and automation? In response, as usual, it’s about the lyrics and what they mean to me. This time, it’s about the chorus:

Always one more

You’re never satisfied

And we really aren’t, are we? We’re never satisfied; we’re always pushing for more automation. Automating more of the testing. Testing is the bottleneck. We need to shorten the testing cycle. More automation is better; indeed, it’s critical! We must automate all the testing!

Deciding that more automation can help us is certainly worth discussing, but we have to proceed responsibly. Let’s set aside the whole “we have to automate all the things” argument, where “things” means “testing”. There are many, many blogs, articles, and whitepapers that explain why you can, can’t, should, and shouldn’t expect to automate “all the things”. Let’s, instead, talk about the other “not 100%”.

Even in organizations that truly understand that “automating all the things” is not a feasible goal, there’s an expectation that some of the things can or should be completely automated. Organizations often reason that their test cases have steps, and that those steps can therefore be mechanized. In some cases, they are correct. In other cases, the technology to mechanize those steps doesn’t exist or isn’t sufficiently economical.

Let’s look at an example. A rudimentary workflow through an e-comm site could be something like this:

- Search for a product.
- Add a product from the search results list to your cart.
- Click “checkout”.
- Log into the site.
- Complete the purchase.
- View the order number that’s presented to you.
- Check that the “order submitted” email was received, and its info matches what was generated during the checkout process.

If we are trying to automate that workflow, the last bullet is a bit less straightforward than the others, right? We’ll have to access a different system/technology from our primary workflow. We need to compare information between the two systems. For some of us, accessing a test email account is trivial. For others of us, depending on our technology and security restrictions, accessing some rando email to check for an “order number” email will set off all the klaxons. There are products that can help with that, but not all of us have access to those either.

So, if we can’t automate that last step, do we just gnash our teeth, rend our cloth, and lament our rotten luck? I suggest we do not.

Sure, it would be great if we could automate all the steps in a specific flow through our applications, but sometimes we can’t or shouldn’t: we may not have access to the appropriate technology, that technology may not exist, or it may be too expensive for the value it brings us. This doesn’t mean that automation can’t provide value here; there may be tremendous value in automating the steps that we can automate and leaving the other step(s) for humans (e.g., testers) to perform. It may sound like I’m suggesting that we make the testers do the icky work, but I’m suggesting quite the opposite: get the machines to do only what they are good at and where they add value. This approach can free testers from much of the mechanizable work, i.e., the crank-turning, in favor of doing more “thought work”, things that machines are currently bad at. Of course, we can’t eliminate all of the crank-turning, but help is help.
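To make that concrete, here’s a rough sketch of what that partial automation might look like using Selenium with Python. Everything in it is invented for illustration: the storefront URL, the element locators, and the credentials are all hypothetical, and a real flow would need explicit waits and error handling. The point is where it deliberately stops: the script captures the order number and hands the email check off to a human.

```python
# Illustrative sketch only: a partially automated checkout flow against a
# hypothetical storefront. All URLs, locators, and credentials are made up.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    # Search for a product and add the first result to the cart.
    driver.get("https://shop.example.com")
    driver.find_element(By.ID, "search-box").send_keys("widget")
    driver.find_element(By.ID, "search-button").click()
    driver.find_element(By.CSS_SELECTOR, ".search-result .add-to-cart").click()

    # Check out and log in.
    driver.find_element(By.ID, "checkout").click()
    driver.find_element(By.ID, "username").send_keys("test-user")
    driver.find_element(By.ID, "password").send_keys("not-a-real-password")
    driver.find_element(By.ID, "login").click()

    # Complete the purchase and capture the order number that's presented.
    driver.find_element(By.ID, "place-order").click()
    order_number = driver.find_element(By.ID, "order-number").text

    # Stop here: a human tester checks the "order submitted" email against
    # this order number instead of the script fighting with the mail system.
    print(f"Human step: verify the confirmation email for order {order_number}")
finally:
    driver.quit()
```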

When I worked at an e-comm company, we needed to submit a lot of different kinds of orders through the commerce system and our associated code; the testers needed to observe whether different order types were being processed appropriately and whether their associated information was being stored as expected. At the time, our web app didn’t have an API that we could use to create orders; that meant the testers would need to drive orders into the system via the web pages, the slow way. Since I’d already used Selenium to create orders via the web, this seemed like a great candidate for automation, except we didn’t have a good way to check that the order information was stored in the commerce system correctly; that system didn’t have an API, only a GUI. We determined that automating that commerce system’s interface wasn’t valuable because we were going to replace that product in the mid-term.

We could have said, “Dang, we can’t fully automate this, so the testers have to run the different order types through the system and check the data themselves”. That would have been a huge amount of effort to expend on something that was partially automatable. What did we do? As I briefly mentioned in a previous posting, we created an “order pumper” that would drive the orders through the system via the web pages but, since we couldn’t check the internal data, we had it write a separate log file that included the order number for each order.

We ran the pumper overnight. When the testers came in the next morning, they had a concise log file of the orders that the tool made overnight. They had to access the commerce system’s GUI to check the order data based on the order number, but they didn’t have to perform the higher-effort activities of creating the orders. Clearly, the activity was not “fully automated” but it provided immense value; we decided it didn’t make sense for us to go for “one more” and, instead, decided we could be “satisfied” with partial automation or automation assist.
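For flavor, here’s a minimal sketch of the shape such an order pumper might take. It is not the original tool: the order types, the submit_order helper, and the log format are all invented. The idea is simply to drive each order through the web pages, capture the order number the site displays, and append it to a file the testers can work from in the morning.

```python
# Sketch of the "order pumper" idea, not the original tool. submit_order() is
# a hypothetical helper that wraps the same kind of Selenium page interactions
# shown earlier and returns the order number displayed at checkout.
import csv
from datetime import datetime

ORDER_TYPES = ["standard", "gift", "backorder", "international"]  # made-up examples

def submit_order(order_type: str) -> str:
    """Drive one order of the given type through the web pages and return
    the order number shown on the confirmation page (details omitted)."""
    ...  # site-specific Selenium steps would go here

def run_pumper(log_path: str = "orders_to_verify.csv") -> None:
    """Submit one order per type and log what the testers need to verify."""
    with open(log_path, "w", newline="") as log:
        writer = csv.writer(log)
        writer.writerow(["timestamp", "order_type", "order_number"])
        for order_type in ORDER_TYPES:
            order_number = submit_order(order_type)
            writer.writerow([datetime.now().isoformat(), order_type, order_number])

# Run overnight (e.g., from a scheduler); testers pick up orders_to_verify.csv
# in the morning and check each order in the commerce system's GUI.
```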

This order pumper is but one example. Computers are great at stuff like clicking and waiting; humans are not that great at it and are capable of far more complex thought tasks. We need to let the machines do what they are good at and not sweat the “last mile” of a task if that last chunk of effort won’t provide sufficient value.

When we think about automation, specifically automation for software testing, many of us keep thinking, “I need to make the computer test this for me”; we often think of a test being automated as a binary state.

Instead, if we change our thinking from automation to assistance, we can start asking, “How can I use technology to help me perform this task?” At that point, we can stop thinking that automation is an all-or-nothing activity and start thinking that something is better than nothing, as long as that something provides value.

Like this? Catch me at or book me for an upcoming event!

