"I am software tester. My job is to break software." , said one student in my Exploratory Testing workshop. I asked him to elaborate and explain me his techniques to break the software. He was silent for moment and then said he did not know how exactly to answer that.
I then asked about the last defect he had found and how he found it. He could explain that to my satisfaction. Then I asked whether the defect was already there or whether something he did had introduced it. The student realised where I was coming from and admitted that he did not break the software; he only helped uncover software that was already broken. When I asked him once again to explain his techniques for uncovering those already-broken points, he explained them to my satisfaction, this time without freezing up.
One of the important lessons I have learned from James Bach, and something I make sure to propagate in my discussions with testers, is that we (testers) need to be careful about the vocabulary we use to describe our work, because it makes a big difference. With this little change, I have experienced the difference in how I perceived things before, using the established vocabulary, and after I started choosing my vocabulary carefully. That's not all: I have witnessed that when you help people realise the same thing in a constructive way, they in turn help others realise it. This "chain reaction" of propagating that gesture is, in my opinion, an important part of contributing to the craft.
I came across this thought-provoking post written by Maaret Pyhäjärvi (an influential colleague in the testing community whom I respect and admire), and some of the arguments she makes made me ponder my own attempts and experiences.
In her blog Maaret says:
Instead of changing the vocabulary, I prefer changing people's perceptions. And the people who matter are not random people on twitter, but the ones I work with, create with, every office day.
I fully support the idea of helping to change people's perceptions. I have made those efforts and have seen them take effect. The approach is very much in line with Weinberg's idea of influence, and it has always been my first approach towards changing something. However, in this particular case the results, in my experience, have been short-lived. I found people coming back to their initial understanding, which was primarily shaped by the established vocabulary, and every once in a while I had to discuss the same thing with them again. Can I say my efforts were paying off? I guess not really.
What was and is the problem?
I realised that the people I had helped change their perception (mostly programmers and non-testers) went back to the old vocabulary because the other testers and stakeholders they worked with were totally unaware of what they were talking about and why. Those people found it very frustrating to explain their rationale for using different vocabulary, and eventually they gave up. I remember one programmer friend coming to me and saying that he felt silly and stupid because the tester he was talking to was totally clueless about what he meant. He finally said that if testers themselves don't care about what their vocabulary should mean, why should he? And he was right!
The problem is that testers who understand the problem with the established vocabulary are very few compared to the entire lot of project stakeholders who use it. And testers who make an effort to help others change their perception are even fewer.
And this is why I personally see a problem with "living with" established norms that need revision. I think it's high time we strongly disapprove of what we do not believe in, because living with it badly affects all the efforts made by people who care. When we know about the problems with things and still decide to live with them, our awareness of those problems becomes pointless, or at least less effective.
It is not just about people around us
The other day I watched a humorous video by AIB on mass technical recruitment, in which, towards the end, the recruiters pick two gardeners to fill their quota and say, "Let's put these two on manual testing. Who requires talent for that anyway?"
That was very difficult for me to digest, but at the same time I could not blame the producers of that video, because our established vocabulary is not their problem. They just presented the widely held (and mistaken) perception our established vocabulary creates.
If we as testers don't care enough to change something wrong just because it is established, we are letting others shape a wrong perception of our profession, and that is a silent killer. One leading testing tool company recently tried to showcase "manual testing" as outdated, bad testing, and proposed their tool, which supports Exploratory Testing, as a solution. A major part of our industry still considers testing = manual testing = bottleneck, and hence thinks of eliminating testing altogether. But in reality, what they want to get rid of is bad testing, not manual testing or testing as such. If not us, who else is supposed to care about these problems and make efforts to solve them? And I don't know how best we can stop it other than by getting rid of the labels and classifications that add so much to the confusion.
In my opinion, we are responsible for how we let others shape their understanding of us. And no matter how hard we try with the people around us, there will always be people beyond our scope who will undo what we do. If we keep collecting the karma of living with the established vocabulary and do not make deliberate efforts to change it, it is most likely going to haunt us and the generations after us (if at all we survive).
It is now up to us whether to collect that karma or to cleanse it. Cleansing sounds reasonable to me, but I am still looking for more options. What if we do both?
Lately, I happened to have an interesting discussion with my colleague Dirk Meißner on whether programmers should have a reasonable understanding of testing. A lot has been said and written about how testers need great technical skills so that they can contribute effectively and remain valuable. Sure, that's helpful, and I too insist it's high time testers got over their traditional ways of working (and thinking). What surprises me, however, is that there is not enough awareness, or enough discussion, around programmers learning to understand testing to amplify their own effectiveness.
Does it matter? Why?
It absolutely does. At least now, if it did not before. "Whole Team Testing" is the new cool (again), especially in DevOps contexts. And it has its own reasons for being that way.
Let me explain. Agile teams typically have one tester dedicated to testing and related activities in the team. This tester is usually busy testing (and often automating) stories for each sprint, with a primary focus on acceptance criteria. If the tester is a "cool kid", they go overboard and test things beyond the acceptance criteria too. Cool! Let's park this thought here for a moment. Okay?
James Bach, in his interesting article "Seven Kinds of Testers", beautifully explains the key patterns in testing styles and how testers typically fit one pattern or another, or sometimes a combination of several (or check out this thought-provoking tweet series by Michael Bolton). In over eight years of hands-on testing experience, I have found myself to be of one kind (or at most two) at a time, and by the time I wish to change my hat (or style), it is usually almost time to deploy the feature to production. Pity!
The point is, there is a limit to how versatile a tester can be in the limited period of time available for each story they test. Sure, it's not impossible, but I would say it's not very easy either, given the time constraints. Now, imagine that we add "programmers" from the team as other kinds of testers (based on their skills, expertise and experience) working on the same story. Do you not think it will most likely add more coverage for that feature, without really spending additional time on it? Do you not think the testing wisdom of a programmer would help the tester and the team ship a quality product? I'm sure, now you do!
When I say programmers should contribute to testing, it does not have to mean they need to test the software the way testers do. Even if they develop the required mindset, that's already a good start. Of course it would be great if they could test it, but I feel that if they could at least understand modern testing, it would greatly benefit project teams.
How exactly programmers can learn to test and get started with it, or how testers can help them onboard with testing, is another topic. It requires a dedicated post (more on that later).
This post is about identifying programmers with a testing mindset, or with skills that can help them test better, while you interview them. I recommend watching out for these skills and traits in an interview:
1. Quest for Context
The scholar John von Neumann once said, "There's no sense being exact about something if you don't even know what you're talking about." In a world that is growing increasingly dependent on highly complex, computer-based systems, the importance of defining what you want to make before making it -- that is, knowing what you're talking about -- cannot be stressed enough. (Exploring Requirements: Quality before Design by Weinberg & Gause).
Developing software is not just about writing a program that does the stuff; it is more about building a product your customer would like to use. In the past, I have come across programmers who were excellent coders but failed to care enough about the purpose of the program they were writing. I rarely see programmers questioning the user story beyond the acceptance criteria and technical implementation, if any (unfortunately, it's not very different for the majority of testers either).
If I were to hire a programmer, I would expect them to ask context-revealing questions. By that, I don't mean just questioning the business value of the user story. There are things beyond that which matter. What if we find out that another team has worked on a similar solution before? What if we could re-use some components developed by other teams? What happens when a particular feature requires specific technology expertise and the team does not have it? What happens when the stakeholders' understanding of the technical details differs from the engineering team's? What if implementing some solution requires tools the team does not have, or access levels it lacks, for that matter?
Sure, one can eventually find these things out after starting work on the ticket, but what's the point in finding things out by accident, when it's already late?
Programming interviews typically include a coding challenge that is assessed for the candidate's technical skills, familiarity with known technology issues, understanding of good programming practices, problem-solving skills, and so on, all of which are indeed important. However, I am yet to see a programmer being assessed on the kind of questions they ask before jumping into the coding challenge itself. See if they question the very purpose of the challenge, see if they question the business value, and check if they ask about other elements of the Project Environment and Product Elements for that matter. And please check if they ask questions about testing and Quality Criteria, if nothing else. Most of the programmers I have interviewed assumed that there would be a tester on the team to QA their code; they just have to write the code and throw it into the tester's bucket. You had better watch for that kind if your team does not have a tester.
Just like testing, good software development should be treated as an intellectual activity. The better one understands it, the more ways one can contribute to product quality. And it all begins with asking questions.
2. Interactional Expertise
If you are unfamiliar with the idea of Interactional Expertise, I suggest you start by understanding it first. Even better if you can read Tacit and Explicit Knowledge by Harry Collins. I personally found it very useful learning when I was introduced to it by my friend Iain McCowatt.
The reason for mentioning Interactional Expertise as a skill here is that I find it very important when it comes to having technical discussions with non-technical people, or even when it is about having meaningful technical discussions in a short period of time.
Bringing up a technical topic for discussion in planning meetings or grooming/estimation meetings is usually like opening Pandora's box. Over the years, I have been part of deep discussions in meetings whose only conclusion was to carry them over to the next meeting or to schedule a separate meeting for them. Then, yet more special meetings were required to explain those technical things to non-technical people. Does that not sound familiar to you?
I feel that spending so much time on deep discussions so often is unnecessary, and it can be significantly controlled if all of us (not just programmers or testers) learn the skill of explaining things briefly (and to the point) when needed, without losing the substance or compromising the impact an elaborate version would make. The same goes for explaining technical things to non-technical people. As techies, we can't expect the whole world to understand the language we speak (it would be nice if that happened, though), but we can make things simple by learning the art of explaining them to others in a language and context they understand better.
An added advantage comes when you onboard new members to the team. Regardless of the role they are hired for, a person's IE skills will help them onboard much better, and the skill will definitely help with collaboration and communication. In fact, when testers and programmers both have great interactional expertise, sessions like pair testing or pair programming become super productive. Imagine what value it can add to Mob Programming and Mob Testing sessions. I have worked with some programmers who were masters at explaining technical things to non-technical members of the team, as if they were putting a child to sleep by telling a story. Short, sweet and yet satisfying. That's what I mean by Interactional Expertise.
Next time you interview a programmer, look for these. It will help you. When I interview testers for this, I usually ask them to explain some technical concept in 50 words, for example, and then the same concept in 100 words. It helps me analyse how good (or bad) their Interactional Expertise is. Asking programmers to write a technical bug report or a user story can also be a helpful trick for evaluating their IE.
3. Understanding of Testability
Maybe I am wrong about this, but I honestly feel our industry still lacks the required seriousness (and awareness) for building testable products. This is not just about programmers being unaware of it; even testers are.
The only times I hear the word "test" in programmer interviews are when they talk about their unit tests, or TDD, or automated tests at the most. And it's a pity!
Building testable products is an important part of software development, and it is important that programmers understand how to bake testability in right from the beginning. Sure, skilled testers can certainly be advocates for testability, but it won't hurt if programmers too understand what it means for them and how they can contribute to it.
While interviewing a programmer, I suggest you pay special attention to whether their solution demonstrates at least a few aspects of Intrinsic Testability, as explained in the Heuristics of Software Testability by James Bach. If not, at least make an attempt to discuss the other aspects of software testability (listed in the heuristics) with the candidate in general and gauge their fitness for your requirement.
For sure, the skills mentioned above are equally important for testers too, but since "Whole Team Testing" is picking up, I wanted to make them explicit for traditional non-testers. Next time you interview a programmer, please try this and see if it helps.
If we need technical testers, we also need programmers who understand testing. And that is a reasonable thing to ask for, isn't it?
Oh, and by the way, I will be touching on some related topics in my talk for the Online Testing Conference. Feel free to join if the topic interests you.
During my ET/SBTM workshops, I am often asked whether it is possible to perform Session Based Testing in typical Agile/DevOps environments.
I think if a tester knows how to perform SBT (especially with different session types), there should be no trouble doing it regardless of the development methodology their team follows. (If you are unaware of the typical session types in SBTM, I recommend you check out this informative series by my friend Simon ‘Peter’ Schrijver.) However, I understand that the fast-paced way of working through sprints, shipping small chunks of software regularly, probably keeps everyone mainly focused on achieving the sprint goal. For testers, this often means focusing only on ‘acceptance criteria’ and moving on to the next ticket waiting to be tested and deployed. And that might force some testers to just forget every awesome technique they know and run after getting “the stuff done”.
If you are a tester (sorry, Agile Tester) stuck in such a situation and feel bad about it, then this post is for you. For a while, I too was stuck in a similar situation. Not that I was not getting to perform ET in the SBTM way, but I constantly felt something was missing and that there was still more I could do. A couple of important things kept getting skipped in my typical sessions, and all I could do was procrastinate on them. And that was bad, very bad! (yeah)
When I was done feeling guilty about it, here is what I did (and I strongly recommend you try it out too). I created some more session types for myself, some of which I perform twice a week and some daily. They have been helping me greatly so far. Well, what are they?
1. Critical Thinking Session
I know, a good tester should think critically at every possible occasion. But there is no harm in dedicating a special slot to it. How often do agile testers spend time reading through the backlog and preparing their notes in advance, so that they know what to ask, know what extra information they would need for a certain ticket, or prepare a risk list associated with some ticket to make everyone aware of it before it’s too late?
A dedicated ‘Critical Thinking’ session aims to solve this problem. Schedule at least two such sessions, one before the beginning of the new sprint and one before your feedback/refinement sessions, so that you spend enough time ‘critically thinking’ about the stuff that will soon be on your plate. The sooner you prepare yourself, the better you will be able to shape your further sessions around the actual testing of those items.
A medium session of roughly 60 minutes works great for me. Sometimes a short session of 45 minutes is enough. The more you practice Critical Thinking sessions and the more time you spend with the applications you test, the less time, I guess, you will eventually need for CT sessions.
In case you are wondering what I really mean by “critically thinking” about backlog items, please check out the “Mary Had a Little Lamb” heuristic by Jerry Weinberg or “Huh? Really? So?” by James Bach.
2. Monitoring Session
Monitoring production logs, especially after deployments, has benefits of its own. And if a tester does it regularly, there is a lot they can discover through those logs.
If, as a tester, you are not yet doing production deployments, I suggest you start doing them and make sure to check the production logs (or logs on other environments, for that matter). If for some reason you do not own deployments, try to spend at least one short session every day monitoring your application’s production logs. That’s what I mean by a “Monitoring session”.
Thanks to such dedicated monitoring sessions, I have come across some elusive bugs that were hard to catch on the testing environment. Production logs are also a very interesting means of learning about the different ways in which you can test your application against disfavoured use or extreme use (refer to HTSM by James Bach: SFDIPOT, Operations). Not only that, they can also help you identify some elusive integration-level bugs which might not be caught easily otherwise.
Monitoring sessions can also help you identify technical bugs, errors or warnings which might not directly affect the end user but still warrant attention and fixing. It’s hard to identify such issues in regular functional testing, which usually has a big scope of its own.
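To make a monitoring session concrete, here is a minimal sketch of the kind of helper you might keep handy for scanning a plain-text log during the session. It is only an illustration: the severity keywords, the log format and the fabricated log lines are my assumptions, not from any specific tool, so adapt them to whatever your application actually emits.

```python
import re
from collections import Counter

# Severity levels worth flagging during a monitoring session (assumed keywords).
FLAGGED = ("ERROR", "WARN", "CRITICAL")

def scan_log(lines):
    """Count flagged log lines per severity and keep one sample line of each."""
    counts = Counter()
    samples = {}
    for line in lines:
        for level in FLAGGED:
            if re.search(rf"\b{level}\b", line):
                counts[level] += 1
                samples.setdefault(level, line.strip())
                break
    return counts, samples

# Fabricated log excerpt, purely for illustration:
log = [
    "2024-05-01 10:02:11 INFO  request handled in 120ms",
    "2024-05-01 10:02:15 WARN  retrying payment gateway call",
    "2024-05-01 10:02:16 ERROR payment gateway timeout",
    "2024-05-01 10:02:20 ERROR payment gateway timeout",
]
counts, samples = scan_log(log)
print(counts["ERROR"])  # 2
```

A repeated error like the timeout above is exactly the kind of lead worth turning into a test idea or a bug report in your session notes.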
3. Bug-visit Session
This session is about visiting the bugs in the backlog and going through them carefully. Over a period of time, some bugs become irrelevant, while others become important to fix on priority. Revisiting those bugs helps you take appropriate action in time.
If there are bugs logged by other teams, or by the customer care team for example, they can also serve as great ‘test ideas’ that can be extended to other features of the application. Information gathered through such bug-visit sessions can help you create your own risk list or project-specific heuristic (I have explained that in detail with HEEENA).
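The sorting work of a bug-visit session can be sketched as a simple triage over the backlog. Everything here is hypothetical: the bug records, field names and the 180-day staleness threshold are illustrative assumptions, not from any real tracker, so treat this as a sketch of the idea rather than a tool.

```python
from datetime import date, timedelta

# Hypothetical bug records; a real tracker export would have many more fields.
bugs = [
    {"id": 101, "severity": "low", "opened": date(2023, 1, 10), "area": "reports"},
    {"id": 102, "severity": "high", "opened": date(2024, 4, 2), "area": "checkout"},
    {"id": 103, "severity": "low", "opened": date(2024, 4, 20), "area": "checkout"},
]

def revisit_candidates(bugs, today, stale_after_days=180):
    """Split the backlog into bugs worth escalating and bugs that may be obsolete."""
    stale_cutoff = today - timedelta(days=stale_after_days)
    escalate = [b for b in bugs if b["severity"] == "high"]
    maybe_obsolete = [b for b in bugs
                      if b["opened"] < stale_cutoff and b["severity"] == "low"]
    return escalate, maybe_obsolete

escalate, maybe_obsolete = revisit_candidates(bugs, today=date(2024, 5, 1))
print([b["id"] for b in escalate])        # [102]
print([b["id"] for b in maybe_obsolete])  # [101]
```

The two lists are only a starting point for the session: each candidate still deserves a careful read before you escalate or close it.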
4. Test for Testability Session
I can't stress enough the importance of this session, especially in this changing era of software development. One of the key roles a skilled tester can play in modern software development is that of an "advocate of testability". Check out the Heuristics of Software Testability by James Bach. Once you understand the dynamics of testability, you will realise that the sooner you care for it (and advise where needed), the more testable the product your team will create. To give you an example, check "stats for nerds" on the YouTube videos you watch. Would such information not help you if your product started storing it, so that you as a tester could access it easily and shape your tests based on it?
I suggest you create a checklist for Intrinsic Testability in particular and test your design against it right from the beginning. Please dedicate a compulsory session (a short one should be good to begin with) to testability, even before you get the build for testing against acceptance criteria. In fact, you could also pair with the programmer or PO while they work on user stories that will later come to you for independent testing. This in turn will help you, as a tester, to test the product better for the sessions/charters you care about (and you won't get lost figuring out things that would help you test better).
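Such a checklist can be as lightweight as a set of questions you answer per design. The sketch below uses a partial, paraphrased selection of aspect names; consult Bach's heuristic itself for the authoritative list and wording.

```python
# A minimal checklist sketch; the aspect names and questions are a partial,
# paraphrased selection -- consult Bach's heuristic for the authoritative list.
checklist = {
    "observability": "Can we see what the product is doing (logs, state, output)?",
    "controllability": "Can we put the product into the states we need to test?",
    "decomposability": "Can parts of the product be tested in isolation?",
    "smallness": "Is each change small enough to test thoroughly?",
}

def review(answers):
    """Return the aspects answered 'no', i.e. the testability gaps to raise."""
    return [aspect for aspect, ok in answers.items() if not ok]

# Hypothetical yes/no answers from one testability session:
answers = {"observability": True, "controllability": False,
           "decomposability": True, "smallness": False}
gaps = review(answers)
print(gaps)  # ['controllability', 'smallness']
```

The output of a session like this is simply the list of gaps to discuss with the programmer or PO before the build reaches you.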
So, that’s about four additional session types I have created for myself to do even more effective SBTM in agile environment. I am finding them very useful and hope you too will benefit from them.
Feel free to get in touch if you have any questions or would like to discuss it further.
Header image credit - blog.xebia.fr
A passionate & thinking tester. Trainer & student of the craft of testing. Chief Editor and Co-founder of Tea-time with Testers magazine.