TALES OF TESTING

Mind, Matter, Testing and The Cargo Cult

22/1/2021


 
Photo by Lesly Juarez on Unsplash

I recently stumbled upon an interesting article by Alan Page in which he discusses some ideas around mindset, the tester's role, and how different software development teams approach testing. If you have not read his article yet, I encourage you to read it first.

First things first

In principle, I agree with most of the points that Alan has made. I believe that regardless of one's role, anyone can test effectively, provided that they are willing to invest in learning and practicing testing, and in thinking like a skilled tester. While making this happen is not impossible, I also believe there are reasons why it could be difficult, or not as effective as it should be.

What's the matter?

If I understood Alan correctly, he considers the idea of a testing mindset to be a myth and proposes thinking in terms of skills instead. In my opinion, skills can be acquired and developed more easily than the mindset needed to do that job better. And for me (and testers who think like me), the mindset is an important matter when skilled testing is the concern.

I have been testing for over 12 years now, and I cannot imagine meaningful testing being done without the right state of mind. I do not know whether the "state of mind" and the "mindset" are the same thing, but I believe the distinction would not change Alan's opinion about the testing mindset.

What is testing? 

Testing means different things to different people. For me, it is about finding true, correct knowledge about the things we test. For me, testing is an empirical investigation of the product, the people, and the project, along with the relationships between them. And we do this investigation to obtain that true knowledge.

Testing as an investigation in pursuit of true knowledge is a far different thing from testing as asserting things we believe to be true. And this is where I believe the mindset makes a big difference.

Knowledge work and the knowledge workers

I love what Alan has written about software development and testing being knowledge work. I would love to quote it here.

Software development and software testing are knowledge work. Both require multiple skillsets, constant learning, and problem-solving. While developing and testing software may require different problem-solving approaches or skill sets, they are both part of the same (growth) mindset, and the best people I know in software can switch between their development/builder and tester/investigator skill sets rapidly and fluidly. While I agree that it’s not easy – with deliberate and frequent practice shifting between these skill sets, most knowledge workers I’ve worked with can – and have become quite successful in developing this flow.
I can't agree more with this whole argument. However, the catch lies in how one perceives knowledge, how one obtains it, and how one processes it. Though we are all knowledge workers, not all knowledge workers are made equal, nor can they draw similar inferences even when working from the same source of knowledge.

My pursuit to better understand the idea of knowledge led me to the Nyāya-sūtras (which further led me to the work of Matthew Dasti and Stephen Phillips, as well as Satishchandra Chatterjee), which are believed to be a foundation of the modern theory of logic and epistemology.

As knowledge workers try to obtain knowledge, the means by which they obtain it makes a big difference. And the state of mind with which these efforts are made can lead one to obtain correct knowledge or false knowledge.

According to the Nyāya-sūtras, ordinary knowledge is considered true if our five senses can directly and clearly apprehend a reality. However, the knowledge work required for skilled testing is, for the most part, intellectual in nature, which is where the state of mind plays a critical role. The mind is considered an internal sense, and it can lead one to either correct or incorrect knowledge depending on how one includes, excludes, or integrates information.

The means of attaining valid knowledge according to the Nyāya-sūtras are:
  • Perception
  • Inference
  • Comparison and analogy
  • Testimony and reliable sources
Skilled testing requires one to be skilled in the above-mentioned means, and for that, the state of one's mind plays a very critical role.

Why? Because a pre-judgmental or prejudicial state of mind can be a source of doubt or false knowledge. Depending on the nature of the knowledge work one primarily does, these pre-judgements and prejudices are difficult to avoid. It is hard for me to imagine skilled testing without the involvement of a mind that is trained, experienced, and capable of separating false knowledge from true. Eliminating the mindset and its role in performing skilled testing is therefore like extracting the essential elements from whole milk but still calling it milk because it looks similar.

Consciousness is the key

Considering the arguments I made above, does it mean programmers cannot or should not test at all? I do not think so. I am a big-time supporter of Whole Team Quality, and I think programmers can test if they want to. But rather than expecting them to fight against their cognitive dissonance in order to test (if they are expected to do more than writing asserts in the name of testing, that is), I would expect them to write the program and work with the team to deliver software with a quality-conscious mindset instead. And learning about testing can help facilitate that.

I do not mean that programmers are not quality-conscious when they write their code. But the notion of quality they have in mind as they work is not the same as the notion of quality testers work with. By being quality-conscious, with testing education as a tool, they could keep more aspects of the product in mind, as mindful testers usually do, e.g. broader product coverage, testability aspects, and so on.

I have written more about Quality-conscious Software Delivery and how it has helped my team (and some others that I know of), but that is beyond the scope of this blog post.

The Cargo Cult

I do not know how credible this source is, but when I tried to understand Microsoft's approach to quality, this caught my attention, and I believe it partly still holds true:
MicrosoftCorporation aims for a different type of quality, GoodEnough software. Unfortunately, GoodEnough is defined as "the point where the market will marginally accept it", because BugFreeDoesntSell/BugFreeCostsMore. Being the market leader allows MicrosoftCorporation to lower the quality the market will accept. Although MicrosoftCorporation may not have improved quality of software doing this, they may have improved the quality of their business practice.
The point is, what works best for Microsoft does not have to work well for others. My concern is with the cargo cult we end up in. I have seen teams ending up nowhere by blindly trying to follow what they do at Microsoft or Facebook or Google without critically assessing their own context. And most of the time, unfortunately, it happens at the cost of testing and quality.

What is best for your context is best for your context, period.

I have met and interacted with Alan in person, and I have great admiration for his work (even though I do not agree with all of it). I only fear that his article mentioned above may encourage the cargo cult that I deeply resent.

As a professional tester who firmly believes in the power of a mind for skilled testing, I felt compelled to pen down a considerate response. 

Thank you for stopping by.  

Quality-conscious Software Delivery

13/9/2020


 
Considering my experience and what I have been observing in the industry, there seems to be increasing interest in the idea of Whole Team Quality. The idea itself is not new as far as I know, but lately there certainly seems to be more awareness of it and eagerness towards its implementation.

Why is it needed and how does it help? 

Well, if you are delivering a product as a team, it is natural that everyone who helps build the product is responsible for its quality, or is supposed to be. For more on this, I would urge you to read another article of mine on this topic.

Where is the problem?

When I tried to figure out how different organisations and teams go about Whole Team Quality, I realised that what they consider Whole Team Quality amounts to asking everyone in the team to test (or asking programmers to test) and automating as much as possible.

I see several problems with that approach:
  1. The key problem, in my opinion, is confusing testing with quality.
  2. Focusing most of the effort to achieve/improve/assess quality on the product alone.
  3. Caring about quality far too late in the process.
  4. A reactive strategy, i.e. addressing quality concerns and risks only after they are found in the product (which is usually too late and costly to fix).
  5. Over-emphasis on, and reliance mostly on, automated checks that do not discover hidden risks or find new information. These checks only assert known information.
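Point 5 is worth a tiny illustration. The sketch below is hypothetical (the `apply_discount` function and its behaviour are invented purely for this example): it shows how an automated check only re-asserts information we already have.

```python
# A typical automated check: it asserts information we already believe.
# `apply_discount` is a hypothetical function used only for illustration.

def apply_discount(price: float, percent: float) -> float:
    """Return the price after applying a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def test_known_discount():
    # This check can only confirm the expectation we encoded into it.
    # It will never reveal, say, that a 110% discount yields a negative
    # price, unless someone *thinks* to ask that question.
    assert apply_discount(100.0, 10) == 90.0

test_known_discount()

# An exploratory question a tester might ask, which no existing
# assert would have surfaced on its own:
print(apply_discount(100.0, 110))  # -10.0 ... is that acceptable?
```

The check above passes forever, yet the question about a 110% discount never gets asked unless a curious mind asks it. That is the new information automated checks alone will not discover.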

Sure, I do support the idea of Whole Team Testing to help achieve Whole Team Quality, but how you go about it makes a big difference.

Over the last four years, I tried different ideas and ran experiments in teams to succeed with Whole Team Quality. I failed, but I learned. I continued to try, and eventually, I would say, I succeeded. Succeeded in achieving Whole Team Quality in a meaningful way: a way in which risks are found earlier (even before they manifest as a bug in the product) and quality is assessed/analysed/addressed/achieved on every level and by every individual in the team.

The solution that is working for us

Based on my experiments and learning, I would like to present the model and framework I have developed and am still experimenting with. It has given me and my team useful results so far and I would encourage you to try it too. 

 The model: QualiTri for three notions of Quality 

Deep philosophical discussions with Michael Bolton, when he peer-reviewed my paper on Whole Team Quality, helped me formulate and conceptualise QualiTri. And this model further guided me to create the framework for its implementation.

Like I said before, focusing on the Product notion of quality alone is not enough. To succeed with Whole Team Quality, it is equally important to understand the Project and the People notions of quality. They are related, and they do affect each other. That said, to deliver a quality product, we have to be equally conscious about the Project and the People notions of quality.
[Figure: The QualiTri model - the Product, Project, and People notions of quality]
The framework: Quality-conscious Software Delivery 

The challenge was how to actually implement the QualiTri model in a given context. Thinking about it helped me formulate my goal: to achieve the delivery of quality products by quality-conscious people using quality-empowering processes.
[Figure: The goal of the Quality-conscious Software Delivery (QCSD) framework]
How to implement the 4E structure of QCSD can vary from context to context, but below is how we implemented it in our team, and it has worked great for us so far.
[Figure: Our team's implementation of the 4E structure of QCSD]
Going further into the details of implementing the 4E structure of the QCSD framework would require a series of blog posts. It starts with creating awareness, convincing your team of the need for it, considering their inputs, evaluating the project context, creating the workflows and action items together with your team, and then committing to the effort needed. It is a process that takes time. Plus, it is highly subjective from team to team and context to context. Hence, I would rather stop here for now.

How do we know it worked?

The Lead Time graph for our team, before and during an experimentation phase of QCSD (based on the improvements we made in our processes, the consciousness with which everyone worked, and keeping the quality of the product in mind), reflected the positive impact.
[Figure: Lead Time graph for our team before and during the QCSD experimentation phase]
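For readers who want to quantify this themselves: lead time here can be measured as the elapsed time from a ticket's creation to its completion. A minimal sketch (the ticket timestamps below are invented purely for illustration):

```python
from datetime import datetime
from statistics import mean

# Hypothetical ticket records: (created, done) timestamps.
tickets = [
    (datetime(2020, 9, 1), datetime(2020, 9, 4)),
    (datetime(2020, 9, 2), datetime(2020, 9, 3)),
    (datetime(2020, 9, 2), datetime(2020, 9, 7)),
]

def lead_time_days(created: datetime, done: datetime) -> float:
    """Lead time: elapsed days from ticket creation to completion."""
    return (done - created).total_seconds() / 86400

lead_times = [lead_time_days(c, d) for c, d in tickets]
print(f"average lead time: {mean(lead_times):.1f} days")  # 3.0 days
```

Tracking this average sprint over sprint is one simple way to see whether process changes are actually shortening the path from "started" to "done".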
I believe it was the first sprint in a long time where we as a team finished all the tickets and pulled in more, the so-called testing bottleneck was minimal, and the bugs that would make it into the backlog or warrant critical rework post-production were negligible.

Sure, this graph did not remain ideal all the time. Teams change, and business contexts change too, which affects the overall delivery and the quality of the product we end up shipping. But if you know how to go about delivering a quality product by quality-conscious people using quality-empowering processes, I am almost certain you will do far more good than harm. And that is a win, in my opinion.

See if you find it worth a shot. If you would like my help in consulting on or implementing this idea for your team or organisation, it would be my pleasure. Just let me know.

Quality Experience (QX): Co-creating a Quality Experience for everyone associated with the product

3/7/2020


 
I have been talking about Whole Team Quality via Whole Team Testing for a couple of years now. During my workshops, I am often asked whether testing can be extended only to programmers in a team. It is a pretty interesting question, and my answer is obviously "no". Though I usually explain in my workshops how to extend testing to roles beyond programmers in a team, i.e. to UX or PO roles, I realized that I had not given deep thought to how exactly testing could be extended to other disciplines in a meaningful way.

I read books, discussed with my colleagues, did my research, and the outcome is what I would like to call QX, i.e. Quality Experience. If QA (read that as Quality Advocates) and UX professionals collaborate in a meaningful way, I firmly believe they can co-create a Quality Experience for everyone associated with the product.

So, what is QX after all?

QX stands for Quality Experience. For the sake of understanding, you can call it a marriage between QA (read it as Quality Advocacy, please) and UX. After some need-based discussions and interactions with my UX colleagues, I realized that we can achieve a lot more if we work closely together regularly. The key idea of QX is to facilitate collaboration between QA and UX so that they can contribute to what I would call the "Quality Experience" of the product, both for the end-user and for those who build the product itself.

I believe that with some process optimizations, mindset enablers for testers as well as UX designers, and some heuristics I have created, it is possible to kickstart the QX journey, if that idea interests you.

But what's the need? Is QA alone not enough to cater to product quality (or UX alone not enough for a better user experience)?

Well, I am afraid it is not. At least not when you believe that quality is value to someone (who matters), and when multiple stakeholders matter at the same time.
Let me explain. Imagine that a design change required in a product is a revenue booster for the company, but it is also likely to impact the user experience. Testers often end up with the oracle problem in such situations and cannot decide what their quality criteria should look like. Of course, the Product Owner can be consulted for a final decision, but that is not the point. We testers are in the information business, after all (yes, even if you follow Modern Testing). I find it important for testers to be able to gather comprehensive information and present it to decision-makers so that they can make an informed decision.

Now, if testers lack the tools and mindset to figure out how to solve such problems and gather the information that would matter, their job will be poorly done. And if you are still not convinced, I highly recommend reading Weinberg's latest book, System Design Heuristics. The book could not have come at a better time for me: I started reading it while working on the QX concept, and it has given me some interesting insights to sharpen my thinking around this topic.

On the other hand, let's assume that a UX/interaction designer has been given a problem to solve or has been asked to create a new solution for some product. How do they ensure they have gathered enough information to do that job right? What about historical incidents, hidden technical challenges, or simply edge-case scenarios, cross-functional dependencies, and so on? I believe that having this information at hand can greatly influence how UX designers approach the problem and solve it in a better way.

Therefore, I think that an engagement between UX and QA can help both of them perform their jobs even better. And QX seems to be a good way to go about it.

Well, how does it work?

Here are some ideas that I have found to work quite well. See if they work for you too.

1. Cross-discipline training for QA and UX 

For successful collaboration, it is important that QA and UX understand each other well and that they speak and understand each other's language. Even more important is to understand the mindset with which they both operate.

I have sometimes experienced difficulties in understanding UX's point of view, and my UX colleagues have had the same experience. But a conscious effort towards understanding each other's language helped us solve those issues and thereby facilitated collaboration.

That said, we recommend that testers and UX designers start by understanding each other's roles and mindsets. Attending cross-discipline training should help, but if that is not possible, try doing "pairing sessions" at least.

2. Process changes or optimizations  

A great deal depends on what kind of team setup you have. Some teams may have dedicated UX designers, some organizations have UX teams as a "lateral" service provider for different teams, and some teams simply don't have UX people; their designs are usually outsourced or made by the engineers themselves. Some recommendations I would like to make here:
1. Involve testers as early as possible and make them part of the design process and discussions. UX will thank them for lots of useful information which, if missed, might result in a faulty design.

2. Early and frequent communication between UX and QA helps. Try "brainstorming" sessions in the early design stages. Ask testers for hidden scenarios or technical hacks used to work around things. Ask them about user complaints or known production issues surrounding the design under discussion.

3. Testers may perform focused UX testing and consult UX from time to time about their design-related findings. A "pair testing" session with a UX expert can give testers many more test ideas surrounding usability and human-computer interaction in different contexts.

4. Instead of creating a misinformed bug report about usability and UX, testers can always consult UX colleagues first for feedback on their findings and use them as oracles. Most of the time, UX people have information and insights (from their interaction with real users, the test sessions they perform, the qualitative and quantitative data analysis they do, etc.) that explain why a design is made a certain way.
5. Not every design change goes via the standard UX-led design process. Engineers sometimes have to make decisions that may result in a change to the product design or impact certain product behaviors. Such changes can always be sent to UX designers for their feedback. The tester can play a big role in making this happen on behalf of the engineering team.
 
3. UX testing heuristics for testers 

A mere exchange with UX colleagues, without proper knowledge of how to test for a better user experience, can be futile. That is why I recommend testers get good at UX testing too. By that, I don't mean typical usability testing or accessibility testing alone. After giving deep thought to all the possibilities involved, I have come up with the following heuristics for testers.

Keep in mind that they can also be used as a means to facilitate collaboration and have better discussions with UX colleagues. Not everything might be applicable in all contexts; choose what fits best in yours. Here you go:

Problem - To come up with relevant test ideas, testers and UX must be on the same page in their understanding of the problem they want to solve with the proposed design. UX often gets first-hand information from the PO about the problems, which testers sometimes lack. Trying to understand what problem UX wants to solve with their solution can open up lots of possibilities for testers and help them think in the right direction. It would also help them come up with the right questions and perform a better impact analysis of the change.
  • What product problem are you testing the UX solution against?
  • Is the problem simple to understand, or does it have complexities within itself? Can you break it down further? Ask for more information if that helps you understand the problem better.
  • Rule of Three (thank you, Jerry Weinberg) - If you can't think of at least three ways the design could fail, you haven't tested the design enough. What to do about those identified potential failures is another matter, but it is worth identifying them and at least discussing those possibilities with UX or the PO. (Pro tip - don't overdo it.)

User Needs - This is highly subjective and might vary from project to project. The key idea is to understand which user needs are being addressed through the design and whether you as a tester can foresee any challenges or impact. This can also very well be made part of the 'understanding the problem' step. It does not matter how you do it; what matters most is that you know which aspects of user needs you are dealing with.
  • What user needs have been used to design the solution?
  • Do you understand those user-needs in the context of a given problem and your product feature?
  • Based on your product knowledge, do you find the selected user needs suitable for the designed solution?
  • Is there anything (information about business/technical/cross-team constraints, special scenarios, etc.) that invalidates or challenges the user needs taken into account?

User vs Business Needs 
  • What is the goal of the provided solution? Is it ease of business for the product, or improving the user experience?
  • Is the change in your product feature adversely impacting other parts of the application (other teams) in terms of user experience or KPIs? - Inform the stakeholders.
  • Can you borrow supporting data/statistics from UX/PO to understand their rationale better? E.g. why this screen size and not that one? Because only so-and-so % of users use it, and thus the impact is minimal.
  • Does an attempt at "ease of business" compromise the "user experience"?
  • Does improving the "user experience" impact business/product KPIs?

Finding Balance - Solving the Oracle Problem 

When you as a tester are unable to decide whether the proposed solution is good for the product but bad for users, or vice versa, use some of the methods below to make your decision-making more concrete and practical. Depending on the team's configuration, there may be no dedicated UX expert to cater to these needs. It is therefore advisable for testers to wear the UX hat and find the balance by asking the right questions before it is too late.

Plenty of effort can be saved if testers are involved early in design/UX decisions where a dedicated UX expert is absent or designs have been made by third-party services, etc.

What Must Not Change? 

Whatever we ultimately do, what are the things you don’t want to be changed? (from System Design Heuristics by Jerry Weinberg)

Design changes introduced into an existing product, targeted at a specific goal by UX designers, can harm things that were not meant to be affected. I highly doubt that deliberate efforts are made to analyze this regression impact at the design level itself. This is why, if a tester asks this question early in a design change, it is likely to save lots of rework later.

“If we don’t start that way, it’s all too easy to lose track of the unchangeable.” - Jerry Weinberg. 

Impact Analysis

What are the visible and invisible parts of the product that are impacted by this change? And what is the impact?
  • GUI process flow: for the end-user and for internal users
  • Impact on the user's feelings - happy? Overwhelmed? Confused? Irritated? Unable to explain?
  • Impact on cross-functional teams (talk to them early and discuss possible solutions)
  • When the design is independent of user needs/user data:
      • Basic rules
      • Usability ergonomics
      • Heuristics from Jakob Nielsen
      • FCC CUTS VIDS
      • UX Checklist (Chrome extension)

Creativity

Running out of test ideas to decide if the solution is a good one? Pour in some creativity.
  • Consistency with comparable products/competition - Is this solution consistent or competitive with comparable products/solutions in the market?
  • Inspiration from various fields of study and domains for ideas (not correctness) - 
    • Philosophy 
    • Social Science 
    • Gender, Age and Geographical studies 
    • Medical domain
    • E-commerce domain
    • Fashion domain
    • Others?

Exactness, Intuitive and Counter-intuitive Design

Most of the time, testers are so focused on the technical and functional aspects of the product that they unknowingly tend to overlook obvious problems. A deliberate attempt needs to be made to look at the product like an inexperienced user, to see whether they will understand what we expect them to. To do this:
  1. Check the text/labels of your menu items, action buttons, information tables, and even terms and conditions - are they obvious and intuitive? Are they confusing users? Is there more than one possible meaning?
  2. Is the 'interaction' between components intuitive and obvious? What are the abnormalities? How do they differ from common usage conventions?
  3. Are the terms you use to explain things/menus/actions confusing? Can a typical user understand them easily? Are there any cultural constraints that might backfire? How do we avoid misunderstandings?

Well, I wish I could develop this idea even further, but in the interest of time, I will stop here. Maybe I will come back to it when I have more ideas. In the meantime, feel free to comment below and share more ideas if you like. And do not forget to tell me what you think of the QX idea so far.

And I recommend you read System Design Heuristics by Jerry Weinberg. The book gave me lots of ideas to ponder.

Until then...
QX for Testers by Lalit Bhamare

    Author

    A passionate & thinking tester. Trainer & student of the craft of testing. Father, foodie and dog lover. Chief Editor and Co-founder of Tea-time with Testers magazine.
    Setting up this blog to share my experiences whilst doing what I do... 


© Copyright | Lalitkumar Bhamare | All rights Reserved.