Patrick's blog

Posted Tue 23 April 2019

Quotes and stuff from "How Google Tests Software" (Part 1)

Sometimes as a tester, you are embedded within a team as the only QA person. You're likely part of a larger QA group within your organization that you meet with occasionally, but by default you spend most of your time with devs and product. Every two weeks, you and the rest of the team are responsible for shipping features.

On the surface, since you are all working on the same product, you have that in common: its quirks and strengths and history. You can complain about it to others or share excitement about it. But your goals on the project are very different from everyone else's, and that difference can lead to a feeling of isolation, which I think is natural for a tester.

So what I like to do sometimes is read about how testing is done at other companies. Or how other testers organize their solutions or code or tools. What have you automated? What tools are you using? I just like hearing other testers discuss their work. I like hearing what their opinions are and what they hate and what they don't mind. What they can't wait to try or what they're so excited to get back to later.

Even though it's not your project, and the technologies they work with may be foreign or arcane, you can still hear them express opinions very similar to your own. It's good to know there is someone else out there who has felt the same way and come to the same conclusions you have.

It's kind of old now, but there's a book I have that's full of exactly this. It's called How Google Tests Software. Other books are pretty good at this too, namely Lessons Learned in Software Testing.

But I've reread the first chapter of "Google" and collected some quotes I think are very relevant to my professional situation right now.

page 6

If a product breaks in the field, the first point of escalation is the developer who created the problem, not the tester who didn't catch it.

This may seem like common sense, and it was probably one of the earlier lessons learned when software testing was first taking shape as a profession. But rhetoric on a team can easily slide into blaming the tester for not catching an obvious bug.

This is not a problem for me, thankfully.

page 9

Testers are essentially on loan to the product teams and are free to raise quality concerns and ask questions about functional areas that are missing tests or that exhibit unacceptable bug rates.

An important note on this one: the testers are "free to raise quality concerns and ask questions," but they are not the cops. The product team is the one in the trenches, working with its customers every day, so it understands the context of the project at the moment these concerns are raised. That makes the product team best positioned to decide which of the issues a tester raises to address, and how.

Because we don't report to the product teams, we can't simply be told to get with the program.

It's easy to get desensitized to a product's small issues: the non-functional ones, the ones with easy workarounds, the ones whose fixes would take far more work than the gain is worth. Especially so if the workarounds get automated away or harden into well-known tribal knowledge. An objective force like a roaming SET doesn't share that desensitization and will report the issue the same as any other.
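To make "automated away" concrete with a made-up illustration (the client, endpoint, and failure mode here are all hypothetical, not anything from the book): once a workaround like this retry wrapper lands in a shared library, the bug it papers over stops registering with the team at all.

```python
import time

def fetch_report(client, report_id, retries=3, delay=1.0):
    """Hypothetical workaround: retry a flaky endpoint instead of fixing it.

    Once this lands in the shared client library, the intermittent
    failure becomes invisible to the product team; a roaming SET with
    fresh eyes is far more likely to flag it as a real defect.
    """
    last_error = None
    for attempt in range(retries):
        try:
            return client.get(f"/reports/{report_id}")
        except ConnectionError as error:  # the "known" intermittent failure
            last_error = error
            time.sleep(delay * (attempt + 1))  # back off and retry
    raise last_error  # all retries exhausted; surface the original error
```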

Our priorities are our own and they never waver from reliability, security, and so on unless we decide something else takes precedence.

A position like this requires a lot of trust from management that you'll do the right thing at any given time.

Also, the bit about "unless we decide". To me this implies that the SETs/TEs are well-connected, or at least communicate openly. If they are heads down working an issue in one building, they might not be in a good position to recognize a more pressing need across the street.

Menial work around any specific feature is the job of the developer who owns the feature and it cannot be pawned off on some hapless tester.

This is the "throw it over the wall" concept. It's even worse for issues that are more technical, like tech debt, or that require a specific and complex set of reproduction steps: all the time the developer spent drilling down to the root cause now has to be repeated by the tester just to verify the fix.
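One mitigation I've seen (a sketch under my own assumptions, not something the book prescribes): whoever did the root-cause digging encodes the reproduction steps as an automated regression test, so verifying the fix is a test run instead of a repeat investigation. The function and bug here are hypothetical.

```python
import pytest

def average_item_price(line_items):
    """Hypothetical function under test. The original bug was a
    ZeroDivisionError when an invoice had no line items."""
    if not line_items:  # the fix: guard the empty-invoice case
        return 0.0
    return sum(line_items) / len(line_items)

def test_empty_invoice_regression():
    # Encodes the exact reproduction case from the bug report, so the
    # tester verifies the fix by running this instead of re-deriving
    # the developer's repro steps by hand.
    assert average_item_price([]) == 0.0

def test_normal_invoice_still_averages():
    assert average_item_price([10.0, 20.0]) == pytest.approx(15.0)
```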

Testers are assigned by Engineering Productivity leads who act strategically based on the priority, complexity, and needs of the product team in comparison to other product teams.

If all the product teams are fighting for a share of a small pool of testing resources within the organization, there seems to be little incentive for them to cooperate or share. Each team is acting in the best interest of its own product, which means getting as many high-quality resources allocated to it as possible.

I think I'll stop here and do another part next week.

Category: testing
Tags: testing