I am often asked why I do what I do beyond my full-time job. I could simply say it's my passion for software testing, but I personally don't find that answer satisfying enough. It's hard to explain what exactly we get from doing certain things when we do them out of pure joy and the satisfaction of simply doing them. But if I look closely at my journey with the Association for Software Testing, I believe I have some answers.
Back in 2011, when we launched Tea-time with Testers, I started getting in touch with passionate testers and experts from across the globe, mainly to collaborate with them on articles or to practice testing together. Interestingly enough, there was one thing common to most of these passionate testers (who have made significant contributions to the field of software testing): their association with AST. Knowing this was convincing enough for me to check out what AST was all about and what it had been doing.
The fact that a non-profit organisation for software testing like AST exists is itself a big relief. The purpose with which we started Tea-time with Testers and the mission and purpose of AST have so much in common. Knowing that back then was a great validation for me as a novice tester with a passion for the craft of testing. The people connected with AST had already made a big impression on me, and there was no reason why I would not want to be part of it. Not only that, but there are many benefits to being a member of AST, which is why I did not have to think twice before signing up.
BBST student and instructor:
And from there my journey with AST started. Being a member of this great organisation brought me much closer to the craft of testing: its advancements, principles, practices, and ideologies. Everything has been too fascinating and convincing to ignore, even today. I took the BBST course and passed. Needless to say, the learning experience was worth it, and it has never hurt to revisit the course content once in a while. I have learned something new from BBST each time I have revisited it. Apart from the course lessons, the most important part was learning from the brilliant testers from across the globe who took the class with me. The connections I made back then still serve me in becoming better at testing and in my efforts to contribute to the craft.
Later on, I took the BBST Instructor course and passed that too. Since then, I have been teaching BBST classes with AST once in a while. It makes me immensely proud when I see bright minds, star testers in the industry today, who have shared their BBST journey with me. Some of them are my colleagues, some were fellow students, some have been co-instructors, and some were students in the classes I taught. What more can a passionate tester ask for than the joy and happiness of giving back to the community in some way?
AST is a gold mine of projects aimed at taking the craft of testing forward and advancing the understanding of the science and practice of software testing. If you are passionate about testing, you will never run out of programs and ways in which you can contribute via AST. The SIGs (Special Interest Groups) that existed back then in AST caught my attention, and I participated in some of them with whatever contribution I could make.
Then came the time to attend CAST, the conference by AST. I had only heard and seen from afar how great this conference was, and it had been my dream to be at CAST one day. Volunteering for the CAST conference and helping the organisers with their efforts got me there in person. Yes, I got to attend CAST 2014 in New York, and that experience will stay with me forever. The atmosphere, the energy, the discussions, the talks, the format of the conference itself... all of it was worth the effort.
The CAST conference has been the place where I got to meet many great testers in person whom I had only known or interacted with via the internet before. I have so many fond memories of meeting these people and getting a chance to confer with them, learn from them, and be inspired by them.
My association with AST only grew stronger over the years, since there was no reason why I would stay away from such a happening group of great minds. And nobody can deny the role AST has played in advancing the software testing profession in so many ways.
Vice President of Education:
To extend my contribution to AST and benefit the organisation with whatever little I felt capable of doing, I decided to run in the elections and got the opportunity to look after AST's Education program as VP. I am currently serving my term.
This has been challenging yet rewarding work, and I am committed to giving my best.
One may wonder why I wrote all this. Well, mostly to give a meaningful answer when people ask me why I like AST and what it has given me. From member to mentor and volunteer to VP, AST has stood by me on this journey and has blessed me with more than I could have asked for. All it asks of you is a passion for testing and a sincere desire to give something back to the community.
If you have it in you, and if my journey is anything to go by, think no further and be part of this great organisation.
Let your journey begin!
I recently stumbled upon an interesting article by Alan Page in which he discusses some interesting ideas around mindset, the tester's role, and how different software development teams approach testing. If you have not read his article yet, I encourage you to read it first.
First things first
In principle, I agree with most of the points that Alan has made. I believe that regardless of one's role, one can test effectively, provided one is willing to invest in learning and practicing testing, as well as in thinking like a skilled tester. While making this happen is not impossible, I also believe there are reasons why it can be difficult, or not as effective as it should be.
What's the matter?
If I understood Alan correctly, he considers the idea of a testing mindset to be a myth and proposes thinking in terms of skills instead. In my opinion, skills can be acquired and developed more easily than the mindset needed to do the job better. And for me (and testers who think like me), the mindset is an important matter where skilled testing is concerned.
I have been testing for over 12 years now, and I cannot imagine meaningful testing being done without the right state of mind. I do not know whether the "state of mind" and the "mindset" are the same thing, but I believe that distinction would not change Alan's opinion about the testing mindset.
What is testing?
Testing means different things to different people. For me, it is about finding true knowledge, correct knowledge, about the things we test. Testing is an empirical investigation of the product, the people, and the project, along with the relationships between them. And we do this investigation to obtain that true knowledge.
Testing as an investigation for true knowledge is far different from testing as asserting things we believe we know to be true. And this is where I believe the mindset makes a big difference.
Knowledge work and the knowledge workers
I love what Alan has written about software development and testing being knowledge work, and I would love to quote it here:
Software development and software testing are knowledge work. Both require multiple skillsets, constant learning, and problem-solving. While developing and testing software may require different problem-solving approaches or skill sets, they are both part of the same (growth) mindset, and the best people I know in software can switch between their development/builder and tester/investigator skill sets rapidly and fluidly. While I agree that it’s not easy – with deliberate and frequent practice shifting between these skill sets, most knowledge workers I’ve worked with can – and have become quite successful in developing this flow.
I can't agree more with this whole argument. However, the catch lies in how one perceives knowledge, how one obtains it, and how one processes it. Though we are all knowledge workers, not all knowledge workers are made equal, nor can they draw similar inferences even when working from the same source of knowledge.
My pursuit to better understand the idea of knowledge led me to the Nyāya-sūtras (which further led me to the work of Matthew Dasti and Stephen Phillips, as well as Satishchandra Chatterjee), which are believed to be a foundation of the modern theory of logic and epistemology.
As knowledge workers try to obtain knowledge, the means with which they obtain it makes a big difference. And the state of mind with which these efforts are made can lead one to obtain either correct knowledge or false knowledge.
According to the Nyāya-sūtras, ordinary knowledge is considered true if our five senses can directly and clearly apprehend a reality. However, the knowledge work required for skilled testing is intellectual for the most part, which is where the state of mind plays a critical role. The mind is considered an internal sense, and it can lead one to either correct or incorrect knowledge depending on how one includes, excludes, or integrates information.
The means of attaining valid knowledge according to the Nyāya-sūtras are perception (pratyakṣa), inference (anumāna), comparison (upamāna), and verbal testimony (śabda).
Why does the state of mind matter here? Because a pre-judgemental or prejudiced state of mind can be a source of doubt or false knowledge. Depending on the nature of the knowledge work one primarily does, these pre-judgements and prejudices are difficult to avoid. It is hard for me to imagine skilled testing without the involvement of a mind that is trained, experienced, and capable of separating false knowledge from true knowledge. Therefore, eliminating the mindset and its role in performing skilled testing is like extracting the essential elements from whole milk but still calling it milk because it looks similar.
Consciousness is the key
Considering the arguments I made above, does it mean programmers cannot or should not test at all? I do not think so. I am a big-time supporter of Whole Team Quality, and I think programmers can test if they want to. But rather than expecting them to fight their cognitive dissonance in order to test (if they are expected to do more than write asserts in the name of testing, that is), I would expect them to write the program and work with the team to deliver software with a quality-conscious mindset instead. And learning about testing can help facilitate that.
I do not mean that programmers are not quality-conscious when they write their code. But the notion of quality in mind with which they work is not the same notion of quality that testers work with. By suggesting quality-consciousness with testing education as a tool, I mean that they could keep more aspects of the product in mind, as mindful testers usually do, e.g. broader product coverage, testability aspects, and so on.
I have written more about Quality-conscious Software Delivery and how it has helped my team (and some others that I know of), but that is beyond the scope of this blog. (However, I am running a three-day workshop on it, in case you are interested. More information here.)
The Cargo Cult
I do not know how credible this source is, but when I tried to understand Microsoft's approach to quality, this caught my attention, and I believe it partly still holds true:
MicrosoftCorporation aims for a different type of quality, GoodEnough software. Unfortunately, GoodEnough is defined as "the point where the market will marginally accept it", because BugFreeDoesntSell/BugFreeCostsMore. Being the market leader allows MicrosoftCorporation to lower the quality the market will accept. Although MicrosoftCorporation may not have improved quality of software doing this, they may have improved the quality of their business practice.
The point is, what works best for Microsoft will not necessarily work for others. My concern is with the cargo cult we end up in. I have seen teams end up nowhere by blindly trying to follow what they do at Microsoft or Facebook or Google without critically assessing their own context. And most of the time, this happens at the cost of testing and quality, unfortunately.
What is best for your context is best for your context, period.
I have met and interacted with Alan in person, and I have great admiration for his work (even though I do not agree with all of it). I only fear that his article mentioned above may encourage the cargo cult that I deeply resent.
As a professional tester who firmly believes in the power of the mind for skilled testing, I felt compelled to pen down a considered response.
Thank you for stopping by.
Considering my experience and what I have been observing in the industry, there seems to be increasing interest in the idea of Whole Team Quality. The idea itself is not new, as far as I know, but certainly there seems to be more awareness of it and eagerness towards its implementation lately.
Why is it needed and how does it help?
Well, if you are delivering a product as a team, it is natural that everyone who helps build the product is responsible for its quality, or is supposed to be. For more on this, I would urge you to read another article of mine on the topic.
Where is the problem?
When I tried to figure out how different organisations and teams go about Whole Team Quality, I realised that asking everyone on the team to test (or asking programmers to test) and automating as much as possible is what they consider Whole Team Quality to be.
I see several problems with that approach:
Sure, I do support the idea of Whole Team Testing to help achieve Whole Team Quality, but how you go about it makes a big difference.
Over the last four years, I tried different ideas and ran experiments within teams to succeed with Whole Team Quality. I failed, but I learned. I continued to try, and eventually, I would say, I succeeded: succeeded in achieving Whole Team Quality in a meaningful way, a way in which risks are found earlier (even before they manifest as bugs in the product) and quality is assessed, analysed, addressed, and achieved at every level and by every individual on the team.
The solution that is working for us
Based on my experiments and learning, I would like to present the model and framework I have developed and am still experimenting with. It has given me and my team useful results so far, and I would encourage you to try it too.
The model: QualiTri for three notions of Quality
Deep philosophical discussions with Michael Bolton, when he peer-reviewed my paper on Whole Team Quality, helped me formulate and conceptualise QualiTri. This model further guided me to create the framework for its implementation.
As I said before, focusing on the Product notion of quality alone is not enough. To succeed with Whole Team Quality, it is equally important to understand the Project and People notions of quality. They are related, and they do affect each other. That said, to deliver a quality product, we have to be equally conscious of the project and the people notions of quality.
The framework: Quality-conscious Software Delivery
The challenge was how to actually implement the QualiTri model in a given context. Thinking about it helped me formulate my goal: to achieve the delivery of quality products by quality-conscious people using quality-empowering processes.
How you implement the 4E structure of QCSD can vary from context to context, but below is how we implemented it in our team, and it has worked great for us so far.
Going further into the details of implementing the 4E structure of the QCSD framework would require a series of blog posts. It starts with creating awareness, convincing your team of the need for it, considering their input, evaluating the project context, creating the workflows and action items together with your team, and then committing to the effort needed. It's a process that takes time. Plus, it is highly subjective from team to team and context to context. Hence, I would rather stop here for now.
How do we know it worked?
The Lead Time graph for our team, before and during an experimentation phase of QCSD (based on the improvements we made to our processes and the consciousness with which everyone worked while keeping the quality of the product in mind), reflected the positive impact.
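For context, "lead time" here is simply the elapsed time from a ticket's creation to its delivery, averaged over a sprint. A minimal sketch of how the data points behind such a graph could be computed (the ticket dates below are hypothetical; a real team would pull timestamps from its issue tracker):

```python
from datetime import datetime

def lead_time_days(created: str, done: str) -> int:
    """Lead time: elapsed days between ticket creation and delivery."""
    fmt = "%Y-%m-%d"
    return (datetime.strptime(done, fmt) - datetime.strptime(created, fmt)).days

# Hypothetical (created, done) dates for tickets finished in one sprint
tickets = [("2019-03-01", "2019-03-05"), ("2019-03-02", "2019-03-04")]

times = [lead_time_days(created, done) for created, done in tickets]
average_lead_time = sum(times) / len(times)
print(average_lead_time)  # -> 3.0
```

Plotting this average per sprint over time gives a graph like the one described: a downward trend suggests the team is delivering finished work faster.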
I believe it was the first sprint in a long time where we as a team finished all the tickets and pulled in more, the so-called testing bottleneck was minimal, and the bugs that would make it into the backlog or warrant critical rework post-production were negligible.
Sure, this graph did not remain ideal all the time. Teams change, and business contexts change too, which affects the overall delivery and quality of the product we end up shipping. But if you know how to go about delivering a quality product by quality-conscious people using quality-empowering processes, I am almost certain you will do far more good than harm. And that's a win, in my opinion.
See if you find it worth a shot. If you would like my help consulting on or implementing this idea for your team or organisation, it would be my pleasure. Just let me know.
I have been talking about Whole Team Quality via Whole Team Testing for a couple of years now. During my workshops, I am often asked whether testing can be extended only to programmers on a team. It is a pretty interesting question, and my answer is obviously "no". Though I usually explain in my workshops how to extend testing to roles beyond programmers, i.e. UX or PO roles, I realized that I had not given deep thought to how exactly testing could be extended to other disciplines in a meaningful way.
I read books, discussed with my colleagues, and did my research, and the outcome is what I would like to call QX, i.e. Quality Experience. If QA (read that as Quality Advocates) and UX professionals collaborate in a meaningful way, I firmly believe they can co-create a Quality Experience for everyone associated with the product.
So, what is QX after all?
QX stands for Quality Experience. For the sake of understanding, you can call it a marriage between QA (read it as Quality Advocacy, please) and UX. After some need-based discussions and interactions with my UX colleagues, I realized that we can achieve a lot more if we work closely together on a regular basis. The key idea of QX is to facilitate collaboration between QA and UX so that they can contribute to what I would call a "Quality Experience" of the product, both for the end user and for those who build the product itself.
I believe that with some process optimizations, mindset enablers for testers as well as UX designers, and following some heuristics I have created, it is possible to kickstart the QX journey if that idea interests you.
But what's the need? Is QA alone not enough to cater to product quality (or UX alone not enough for better user experience)?
Well, I am afraid it is not. At least not when you believe that quality is value to someone (who matters), and when multiple stakeholders matter at the same time.
Let me explain. Imagine that a new design change required in a product is a revenue booster for the company, but it is also likely to impact the user experience. Testers often end up with the oracle problem in such situations and cannot decide what their quality criteria should look like. Of course, the Product Owner can be consulted for a final decision, but that's not the point. We testers are in the information business, after all (yes, even if you follow Modern Testing). I find it important for testers to be able to gather comprehensive information and present it to decision-makers so that they can make an informed decision based on it.
Now, if testers lack the tools and mindset to figure out how to go about solving such problems and gathering the information that matters, their job will be poorly done. And if you are still not convinced, I highly recommend you read Weinberg's latest book, System Design Heuristics. This book could not have come at a better time for me. I started reading it while working on the QX concept, and it has given me some interesting insights that have sharpened my thinking around this topic.
On the other hand, let's assume that a UX/interaction designer has been given a problem to solve or has been asked to create a new solution for some product. How do they ensure they have gathered enough information to do that job right? How about historical incidents, hidden technical challenges, or, simply put, edge-case scenarios, cross-functional dependencies, and so on? I believe that having this information at hand can greatly influence how UX designers approach the problem and help them solve it in a better way.
Therefore, I think that an engagement between UX and QA can help both of them perform their jobs even better. And QX seems to be a good way to go about it.
Well, how does it work?
Here are some ideas that I have found to work quite well. See if they work for you too.
1. Cross-discipline training for QA and UX
For successful collaboration, it is important that QA and UX understand each other well and that they speak and understand each other's language. More important is to understand the mindset with which they both operate.
I have sometimes experienced difficulties in understanding UX's point of view, and my UX colleagues have had the same experience. But a conscious effort towards understanding each other's language helped us solve those issues and thereby facilitated the collaboration.
That said, we recommend that testers and UX designers start by understanding each other's roles and mindsets. Attending cross-discipline training should help, but if that is not possible, try doing "pairing sessions" at least.
2. Process changes or optimizations
A great deal of this depends on what kind of team setup you have. Some teams have dedicated UX designers, some organizations have UX teams as a "lateral" service provider for different teams, and some teams simply don't have UX people; their designs are usually outsourced or made by the engineers themselves. Some recommendations I would like to make here are:
1. Involve testers as early as possible and make them part of the design process and discussions. UX will thank them for lots of useful information which, if missed, might result in a faulty design.
2. Early and frequent communication between UX and QA helps. Try "brainstorming" sessions in the early design stages. Ask testers about hidden scenarios or technical hacks to work around things. Ask them about user complaints or known production issues surrounding the design under discussion.
3. Testers may perform focused UX testing and consult UX from time to time about their design-related findings. A "pair testing" session with a UX expert can give testers many more test ideas around usability and human-computer interaction in different contexts.
4. Instead of creating a misinformed bug report about usability and UX, testers can always consult their UX colleagues first for feedback on their findings and use them as oracles. Most of the time, UX people have information and insights (from their interactions with real users, test sessions they perform, qualitative and quantitative data analysis, etc.) that explain why a design was made a certain way.
5. Not every design change goes through the standard UX-led design process. Engineers sometimes have to make decisions that result in a change to the product design or impact certain product behaviors. Such changes can always be sent to UX designers for feedback. The tester can play a big role in making this happen on behalf of the engineering team.
3. UX testing heuristics for testers
A mere exchange with UX colleagues, without proper knowledge of how to test for a better user experience, can be futile. That is why I recommend testers get good at UX testing too. By that, I don't mean typical usability testing or accessibility testing alone. After giving deep thought to all the possibilities involved, I have come up with the following heuristics for testers.
Keep in mind that they can also be used as a means to facilitate collaboration and have better discussions with UX colleagues. Not everything may be applicable in all contexts. Choose those that fit best in yours. Here you go:
Problem - To come up with relevant test ideas, testers and UX must be on the same page in terms of understanding the problem they want to solve with the proposed design. UX often gets first-hand information from the PO about the problems, which testers sometimes lack. Trying to understand what problem UX wants to solve with their solution can open up lots of possibilities for testers and guide them to think in the right direction. It also helps them come up with the right questions and perform a better impact analysis of the change.
User Needs -
This is highly subjective and may vary from project to project. The key idea is to understand which user needs are being addressed through the design and whether you as a tester can foresee any challenges or impact there. This can also very well be made part of the 'understanding the problem' step. It does not matter how you do it; what matters most is that you know which aspects of user needs you are dealing with.
User vs Business Needs
Finding Balance - Solving the Oracle Problem
When you as a tester are unable to decide whether the proposed solution is good for the product but bad for users, or vice versa, use some of the methods below to make your decision-making more concrete and practical. Depending on how a team is configured, there may be no dedicated UX expert to cater to these needs. It is therefore advisable for testers to wear the UX hat and find the balance by asking the right questions before it is too late.
Plenty of effort can be saved if testers are involved in design/UX decisions early, especially where a dedicated UX expert is absent or designs have been made by third-party services.
What Must Not Change?
Whatever we ultimately do, what are the things you don’t want to be changed? (from System Design Heuristics by Jerry Weinberg)
Introducing design changes into an existing product that are targeted at a specific goal can harm things that were never meant to be affected. I highly doubt that deliberate efforts are made to analyze this regression impact at the design level itself. This is why, if a tester asks this question right in the early phase of a design change, it is likely to save lots of rework later.
“If we don’t start that way, it’s all too easy to lose track of the unchangeable.” - Jerry Weinberg.
What are the visible and invisible parts of the product that are impacted by this change? And what is the impact?
Running out of test ideas to decide whether the solution is right? Pour some creativity in.
Exactness, Intuitive and Counter-intuitive Design
Testers are often so focused on the technical and functional aspects of the product that they unknowingly tend to overlook obvious problems. A deliberate attempt needs to be made to look at the product like an inexperienced user, to see whether they will understand what we expect them to. To do this:
Well, I wish I could develop this idea even further, but in the interest of time, I will stop here. Maybe I will come back to it when I have more ideas. In the meantime, feel free to comment below and share more ideas if you like. And do not forget to tell me how you find the QX idea so far.
And I recommend you read System Design Heuristics by Jerry Weinberg. The book gave me lots of ideas to ponder.
The more I think about the events and people around me, the more I have started to believe in a higher calling and celestial hints. I have been wanting to write about Jerry, and as I finally started to pen my heart down, I learned that it's Jerry's birthday today. Could that be a coincidence?
Happy birthday, Jerry! This time, accept my wishes with tears in my eyes and lots of gratitude.
2018 has been a year of losses and realisations for me. I lost people whom I once held close to my heart and spirit, and I lost opportunities that could have helped me contribute better to the community. But the realisation that has come after these losses has been an eye-opening experience.
One such person, very close to my heart for years, whom I lost this year, was Jerry Weinberg. Yes, The Jerry Weinberg.
It’s been almost four months now, and I am still finding it hard to believe that Jerry is no longer with us in this world, in his physical form. After Jerry’s demise, many of my friends and colleagues have written about his impact on their lives and how he helped transform them. Reading it all has made me feel even more emotional. The realisation of having been close for years to someone who transformed so many lives in so many ways makes me feel very special and very sad at the same time.
I really don’t know if anyone else has received as much from Jerry as I feel I have. But I consider myself truly fortunate that he came into my life and transformed me, bit by bit, month by month and year by year, into a better person and software professional.
I still remember the day I wrote to Jerry asking for his permission to publish one of his articles in an early edition of Tea-time with Testers. It was February 2011 when I wrote to him for the first time. I shared our first issue with Jerry so that he could make a decision about his contribution.
To my surprise, Jerry admired the work we had done with our first edition and happily agreed to contribute. ‘Testing without Testing’ was the first article of his that we published.
Jerry liked our project for multiple reasons. He admired our will to make a meaningful contribution to the software testing community, the effort we were putting into collaborating with many testing experts around the world, and the value he saw us creating with this project. Moreover, Jerry liked that we referred to our ‘team’ as our ‘family’, and when he said he would love to be part of it, my joy knew no bounds.
And since then, Jerry Weinberg was part of the Tea-time with Testers family. We felt like we had found an angel to guide us in the times to come. And Jerry proved every bit of that feeling true.
I have lost count of how many times we communicated with each other over those seven years. But each time I communicated with Jerry, my respect, admiration, and love for him grew more and more. With each issue we published, Jerry gave us feedback and helped us become better and better. His feedback on my editorial that cited Kipling’s ‘If—’ is still fresh in my mind. He told me that that poem had hung on his wall for quite some years of his life. I don’t know why, but I felt more connected with Jerry after that.
Our collaboration brought me very close to Jerry, and I never had to think twice about asking for his opinion and support in the different initiatives we took over the years. The State of Testing survey has been one such project where we worked closely with Jerry. The webinar we did with him and Fiona Charles after that is still fresh in my memory. To check that everything was fine before the webinar, I gave Jerry a phone call, and that was the first time I got to hear his voice. Firm, but with a warmth of its own kind. Darn, it just felt like I heard him again. And I have goosebumps on my hands as I write this.
An interesting part of my exchanges with Jerry was that, once in a while, alongside the professional work we were doing together, he shared things about himself: his choices, his likes, anecdotes, his take on things happening around us, and of course his advice on the variety of topics we discussed. It never felt like I had not met Jerry in person. There was a different pleasure in getting to know him on a personal level, bit by bit... as if I were reading a story he was writing about himself, with so many interesting chapters waiting to be shared. I kept wanting more, and Jerry never disappointed me there.
We used to talk about dogs sometimes, German Shepherds especially, the breed I suppose Jerry liked the most, and I do too. He once gave me a pleasant surprise by adding a picture of me and Victor to his Pinterest collection of GSDs. It’s a wonderful collection. Don’t forget to check it out if you are a GSD fan like both of us.
I think I could keep writing about the Jerry I got to know for days to come. Through our personal collaboration and the amazing books he has written, he stays with me every now and then. Sometimes, when I get stuck on some concept from his books, I feel like writing him an email, and there he will be, explaining things to me in a way I will never forget. I wish I could do that forever. This realisation is hard. It feels like I am waking up from some dream, for it never happened that I emailed Jerry and he did not reply.
Meeting Jerry in person was on my list of ‘things to do at least once in a lifetime’. We were planning to meet in person this year. Every time in the past when Jerry wrote to me that he was keeping unwell or had a doctor’s visit planned, my heart used to beat faster. And then he used to make me very happy again by telling me he was doing well and there was nothing to worry about.
It was a bit different this year. I had my flights booked to meet him in June, and Jerry suggested that I not make any reservations, for he was unsure whether he would be around by then. I did not know what to say, or even how to respond to that. I cancelled my flight tickets, for I never disobeyed Jerry. I am not sure why, but with that particular exchange, Jerry left an everlasting impression on me. In Hinduism, we have a belief that great people can know of their time to leave the earth well in advance. I think Jerry was one such great person.
His outlook on, and acceptance of, all phases of life made me feel awakened and enlightened. That exchange with Jerry had a very positive spiritual impact on me.
I admit, I am yet to meet someone like Jerry, someone so brave and open about all possibilities of life, including the time to say goodbye. I am yet to meet someone so legendary yet so down to earth, someone who could explain the mysteries of the galaxy as simply and effortlessly as if they were a nursery rhyme meant for kids, someone so generous and kind, and someone who transformed lives without even meeting people in person.
I wish I could have met you at least once in my life, Jerry. But not getting to meet you physically makes me firmly believe that you were an angel sent for many of us from the heavens. You touched me and transformed me in many ways and made me realise my true potential. Moreover, your belief in my abilities and the guidance you offered me over the years made me feel so proud and special. You taught me the power of compassion, you taught me how to stay unaffected by situations beyond our control, you taught me the importance of paying it forward. All through your life you led us by your example. I wish I could ask you what you saw in me that made you invest in me. But for whatever reasons you chose me, I will be grateful to you forever. And I will try my best to ‘pay it forward’.
Goodbye, and happy birthday once again, legend!
Disclaimer: The purpose of this blog is not to undermine the importance of automation skills for testers but rather to highlight our industry's lack of awareness of testers' usefulness in teams beyond typical testing and automation tasks.
Recently, I was invited to conduct a workshop on "Whole Team Testing" at the Test Leadership Congress in New York. Participating in this conference was a great experience, especially because I truly believe in the need for conferences dedicated to test leadership and the career path for testers in a changing era.
The discussions I had there with fellow testers, managers and directors of engineering/testing were interesting. Well, not only interesting but insightful, full of ideas, concerns and observations to ponder. Of the things we discussed, what held my attention the most was the growing concern around the evaluation of testers in Agile teams, whether for hiring or for their overall performance in the team.
What is the problem exactly?
If you analyse the trend from the State of Testing report, it is evident that with increasing Agile adoption, centralised testing units are mostly getting dissolved and testers now report to Dev/Team leads. With this change of structure, hiring new testers and evaluating those already in teams naturally becomes the responsibility of the Dev/Team leads. And this is where things start to get interesting. How? Let's take a look:
But this situation is dragging us into a bigger problem. We are giving too much importance to things that are just one part of the big scheme of things that matter and contribute to software quality and a team's overall ability to ship quality software, faster and more frequently.
Of course automation skills and other technical skills are helpful, important even, I would say. But they have become the norm now, and what we really need at this point are tools to see beyond this norm. If we want excellent testers in teams who can really add value to software quality, it's high time hiring managers came out of their obsession with hiring testers based on the norms they can most comfortably evaluate. Please, it is not about what we are most comfortable with but about what an Agile team really needs from a tester to ship quality software.
Heavy emphasis on technical skills for testers is not my big concern; the lack of awareness of the other things testers must be evaluated against concerns me the most. The purpose of this article is to highlight some of those areas (beyond typical technical skills) that I think managers/leads may want to consider while hiring a new tester or evaluating those already in their teams.
Why do we need testers in the first place?
One may argue that in an era of AI and advanced automation, where every check can be easily automated, why do we need human testers at all? That's an interesting argument, and before I explain why we are talking about it, I want you to have a look at the diagram below:
The diagram represents a system that we as a team operate in. I created it based on my experience as a tester so far and by interviewing some programmers I have closely worked with.
Looking at the bunch of things mentioned there, you can figure out all that can happen (rather, typically happens) when a tester is not available in the team. If you read the diagram carefully, you will notice that the impact on finding and reporting bugs is just one segment that gets highlighted when a tester is missing from the team. The impact on writing new tests or automating them is another segment. But is that all you need testers for in a team? Certainly not!
The purpose of having testers in a team goes way beyond finding bugs, writing tests or automated scripts for that matter. If utilised to their full potential, testers can very well serve as a mode to protect your system from running into collapse or explosion mode (please read more about that in "How Software is Built" by Jerry Weinberg). How? It's simple: by providing system-related feedback to the controller, which is usually the team lead or test manager in a typical team set-up. The better you utilise testers for their abilities, the better the feedback you get from them, which in turn can help you command better control over your system and protect it from collapse/explosion. You need a tester in the team to provide you with information that you as a stakeholder can use to make better decisions.
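To make that feedback loop a bit more tangible, here is a toy sketch in Python. It is my own simplification for illustration (the numbers and the 80% correction factor are made up, and this is not a model from Weinberg's book): the more complete the tester's feedback, the more of the real risk the controller can actually see and correct.

```python
def remaining_risk(defect_rate: float, feedback_quality: float) -> float:
    """Toy model of a controlled system.

    feedback_quality (0.0-1.0) is how much of the real defect rate the
    tester's feedback reveals to the controller (lead/manager).
    In this sketch the controller corrects 80% of whatever it can see.
    """
    observed = defect_rate * feedback_quality   # what the controller sees
    corrected = observed * 0.8                  # correction applied to the visible part
    return defect_rate - corrected              # risk left in the system

# Sparse feedback leaves most of the risk untouched; rich feedback tames it.
weak = remaining_risk(1.0, 0.2)    # most of the risk remains unmanaged
strong = remaining_risk(1.0, 0.9)  # most of the risk is brought under control
```

The point of the sketch is only the shape of the relationship: the controller cannot act on risk it never hears about, which is exactly why the quality of a tester's feedback matters.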
And when I say feedback, it can be anything: information about bugs, highlighting risks, asking context-revealing questions, questioning user stories or decisions made, finding historical data, highlighting third-party dependencies, sharing news about decisions made by cross-functional teams, collaborating with other disciplines to create valuable assets, observations about team dynamics, predictions about possible failures and so on. And if you are lucky, they can also tell you some great things about the quality of your product.
The point is, the tester in your team can add far more value than finding bugs, if you allow them to contribute beyond their typical role. If not, you can always empower them to do so. Here are a couple of things that, in my experience, testers can contribute to, and against which you could evaluate them (for hiring, performance reviews etc.):
1. Primary analyser of production logs and alerts
Production deployments are typically done by programmers, who then closely follow the logs to see if there are any obvious issues created by the latest build. Consider handing over this responsibility to testers. It is likely to serve multiple purposes.
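As one concrete illustration, here is a minimal sketch of the kind of triage script a tester could run against fresh deployment logs to spot obvious issues early. The log format ("LEVEL component: message") and the error pattern are assumptions made up for this example, not a real tool or a real format:

```python
import re
from collections import Counter

def summarize_log(lines, error_pattern=r"\b(ERROR|FATAL)\b"):
    """Count error-level entries per component in deployment logs.

    Assumes a hypothetical log format: "LEVEL component: message".
    """
    counts = Counter()
    for line in lines:
        if re.search(error_pattern, line):
            # the token after the level is treated as the component name
            parts = line.split()
            component = parts[1].rstrip(":") if len(parts) > 1 else "unknown"
            counts[component] += 1
    return counts

# Example: logs captured right after a deployment
sample = [
    "INFO auth: user login ok",
    "ERROR payments: timeout calling gateway",
    "ERROR payments: timeout calling gateway",
    "FATAL search: index unavailable",
]
print(summarize_log(sample))  # payments: 2, search: 1
```

A tester watching this kind of summary after each deployment not only catches regressions early but also builds a picture of which components are fragile in production, which feeds right back into their test design.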
2. Enhancers of your product's coverage for quality
There is very little value if the tester in your team just sticks to the usual acceptance criteria for the ticket and automates the stuff on top of it. The real value a skilled tester adds is when they question the very product coverage you have in place and help your team see the elements of the product that matter from a quality perspective.
A lot of elements matter when it comes to your product's quality. Functional acceptance criteria are just a tiny part of it. A skilled tester knows about these elements and educates the team about them.
Check out SFDIPOT from HTSM by James Bach, or simply have a look at the mindmap below (thanks to Albert Gareev).
3. Advocates for Testability
Testability is one of the key areas where I would expect a skilled tester in my team to help programmers and designers get it right. A skilled tester knows what makes a product testable, how to evaluate it for testability, and also how to advise the team on where and how they can improve it. Nothing beats the pleasure of testing a more testable product.
Here is a heuristic for testability if that makes you curious. Of the various types of testability mentioned there, I would expect the tester to at least know Intrinsic Testability and how to help their team improve it.
4. Better allies for UX peeps
Ever wondered what would happen if Systems Thinking met Design Thinking? I firmly believe that the challenges faced by testers and UX professionals are more or less the same when it comes to ensuring better quality and a better user experience.
A regular and close exchange between these two disciplines has tremendous potential to create a better user experience with enhanced product quality, which I would call "Quality Experience". If a UX designer comes up with one best solution for some product problem, a skilled tester, with their insights, product knowledge and awareness of cross-functional dependencies, can point out a variety of ways in which it may fail. This does not mean a tester has to criticise UX solutions as such, but early collaboration and exchange between the two can help avoid a lot of unnecessary research and rework.
On the other hand, testers can borrow realistic information from UX research, which can help them design tests that matter and prevent themselves from straying into unwanted scope. Testers can borrow statistical data or interaction/persona-based information from UX that can help them shape a better scope for their tests. Well, this is indeed an interesting topic and deserves a deep dive. I will stop here on this one for now.
A skilled tester would make this collaboration happen and bring out the best of both worlds.
5. Friend in need for marketing and user care
This is another area of collaboration where skilled testers can add great value and solve the team's problems beyond finding bugs. I have listed some of the possibilities where testers can help user care and benefit in return, and I believe the same applies to their collaboration with marketing teams.
6. Alert mechanism when the system is on the verge of collapse
Well, this is a bit tricky, but I will mention it anyway. By the nature of their work, skilled testers can use their sharp observation skills to observe and understand the people, situations and events around them and draw inferences that help identify potential risks before it is too late. A retrospective meeting is a great place for a skilled tester to raise flags about potential issues surrounding team dynamics or people problems they have observed, in a constructive way of course, which is where their communication skills come in handy.
Testers are usually in touch with fellow testers in the organisation, from whom they get information about what other teams are working on, their future plans, blocking issues etc., and this information can very well be used to analyse the impact on their own team's road-map, work in progress or work items that share dependencies with other teams.
A tester who is skilled enough to foster a network and relationships across teams can certainly help bypass blockages when the team needs it most.
So, those are some aspects where, in my opinion and experience, a skilled tester can contribute value to a project team beyond their traditional tasks.
If you are to hire a new tester or would like to fairly evaluate or mentor a tester in your team, I suggest you give these ideas some consideration. It's high time we looked at testers as more than bug finders. They can do wonders for your team and can help accelerate the shipping of a quality product, if allowed to use their full potential.
I have shared what I think can be helpful. Feel free to chip in with your ideas.
Evenings... I have started to fall in love with them.
After a tiring day at work, as I walk back towards my little nest ... I meet them on my way.
“Them who?”, you may ask. They are these little birds that do their blissful chirping... and some of them blow that sweet whistle. It feels like they now know me very well and are happy to see me around. It makes me feel so special, I tell you. And then there are these beautiful trees, some small and some huge. They are all lush, fresh and blooming with colourful blossoms. A couple of them even greet me with the maddening fragrance of their flowers... And again... there is this tiny but sweetly flowing canal. The mild yet lively sound of its water as it splashes against the little rocks in its way... it hypnotises me, I must say. That very old wooden bridge right above this naughty canal... it asks me to talk with it for a while, every day. Sometimes it tells me sweet old stories of the old couple living in a big farmhouse nearby, and also about those naughty deer that come there to drink water and run away as fast as they come. The bridge says he gets angry at them for not even saying “hello”, but he never forgets to mention how much he loves them.
Then I meet these big farms, proudly showing their bright yellow rapeseed. It looks as if the whole land is wearing a yellow blanket with lush green skirting. And what should I say about this saffron Sun? I have never seen anything as magnificent as him. He is the brightest of them all in the sky, but still down to earth. I guess I would hardly find anything as humble as a setting Sun. Until the last minute of his goodbye for the day, his grace and elegance do not fade by a tinge.
Today I stopped by the bridge and asked them all, “Hey, can I become your friend? I feel so lonely.”
They all burst into laughter, as if they knew I would ask it some day. And then one of them said, “Why not? But there is one condition!”
“What condition?”, I asked hurriedly.
“You need to be your own friend first, young man”, said the old bridge, and everyone else nodded.
Confused, I looked at the giant oak tree there. “Let me explain”, said the kind fellow.
“All of us here are in harmony with each other. But before that, we are in harmony with ourselves. If you want to be one of us, you need to become one with your inner-self. The day you do that, you will find yourself among us.”
I believe I got it. What I have been seeking outside must be deep down inside me too. After quietly thinking about it for a while, I smiled; it felt like I had got the answer I had been looking for. I thanked them and we parted ways... with a promise to meet again tomorrow.
I can’t wait to become one of them someday! And be in harmony with myself first!
"I am a software tester. My job is to break software," said one student in my Exploratory Testing workshop. I asked him to elaborate and explain his techniques for breaking software to me. He was silent for a moment and then said he did not know exactly how to answer that.
I further asked about the last defect he found and how he found it. He could explain that to my satisfaction. Then I asked whether the defect was already there or whether something he did had introduced it. The student realised where I was coming from and admitted that he did not break the software; he only helped uncover software that was already broken. Then I asked him once again to explain his techniques for uncovering those already-broken points, and he explained that to my satisfaction again, without freezing up in between.
One of the important lessons I have learned from James Bach, and something I make sure to propagate in my discussions with testers, is that we (testers) need to be careful about the vocabulary we use to describe our work, because it indeed makes a big difference. With this little change, I have experienced the change in myself, in terms of how I perceived things with the established vocabulary and how I perceive them now that I choose my vocabulary carefully. That's not all: I have witnessed that when you help people realise the same in a constructive way, they also make sure they help others realise it. And this "chain reaction" of propagating that gesture is, in my opinion, an important part of contributing to the craft.
I came across this thought-provoking post written by Maaret Pyhäjärvi (an influential colleague in the testing community whom I respect and admire), and some of the arguments she makes made me ponder my own attempts and experiences.
In her blog Maaret says:
Instead of changing the vocabulary, I prefer changing people's perceptions. And the people who matter are not random people on twitter, but the ones I work with, create with, every office day.
I totally support the idea of helping to change people's perceptions. I have made those efforts and have seen them take effect. The approach is very much in line with Weinberg's idea of influence, and that has always been my first approach towards changing something. However, the results in this particular case have, in my experience, been short-lived. I found people coming back to their initial understanding, which was primarily shaped by the established vocabulary, and every once in a while I had to discuss the same thing with them again. Can I say my efforts were paying off? I guess not really.
What was and is the problem?
I realised that the people I had helped change their perception (mostly programmers and non-testers) went back to the old vocabulary because the other testers and stakeholders they worked with were totally unaware of what they were talking about and why. Those people found it very frustrating to explain their rationale for using a different vocabulary to others, and eventually they gave up. I remember one programmer friend coming to me and saying that he felt silly and stupid because the tester he was talking to was totally clueless about what he meant. And he finally asked: if testers themselves don't care about what their vocabulary should mean, why should he? And he was right!
The problem is, the testers who understand the problem with the established vocabulary are very few compared to the entire lot of project stakeholders who use it. And the testers who make an effort to help others change their perception are even fewer.
And this is why I personally see a problem with "living with" established norms that need revision. I think it's high time we strongly disapproved of what we do not believe in, because it badly affects all the efforts made by people who care. When we know about the problems with things and still decide to live with them, our awareness of those problems becomes pointless, or at best less effective.
It is not just about people around us
The other day I watched a humorous video by AIB on mass technical recruitment, where they pick two gardeners towards the end of the recruitment to fill their quota and say, "Let's put these two on manual testing. Who requires talent for that anyway?"
That was very difficult for me to digest, but at the same time I could not blame the producers of that video, because our established vocabulary is not their problem. They just presented the widely established (and mistaken) perception of it.
If we as testers don't care enough about changing something wrong just because it is established, we are letting others shape a wrong perception of our profession, and that is a silent killer. One leading testing tool company recently tried to showcase "manual testing" as outdated, bad testing and proposed their tool, which supports Exploratory Testing, as a solution. A major part of our industry still considers testing = manual testing = bottleneck and hence thinks of eliminating testing altogether. But in reality, what they want to get rid of is bad testing only, not manual testing or testing as such. If it is not us, then who else is supposed to care about these problems and make efforts to solve them? And I don't know how best we can stop this other than by getting rid of the labels and classifications that add so much to the confusion.
In my opinion, we are responsible for how we let others shape their understanding of us. And no matter how hard we try with the people around us, there will always be people beyond our scope who will undo what we do. If we keep collecting the karma of living with the established vocabulary and do not make deliberate efforts to change it, it is most likely going to haunt us and our generations (if at all we survive).
It is now up to us whether to collect that karma or to cleanse it. Cleansing sounds reasonable to me, but I am still looking for more options. What if we do both?
Lately, I happened to have an interesting discussion with my colleague Dirk Meißner on whether programmers should have a reasonable understanding of testing. A lot has been said and written about how testers need to be great with their technical skills so that they can contribute effectively and remain valuable. Sure, that's helpful, and I too insist that it's high time testers got over their traditional way of working (and thinking). However, what surprises me is that there is not enough awareness, or enough discussion happening, around programmers learning to understand testing to amplify their own effectiveness.
Does it matter? Why?
It absolutely does. At least now, if it did not before. "Whole Team Testing" is the new cool (again), especially in DevOps contexts. And it has its own reasons to be that way.
Let me explain. Agile teams typically have one tester dedicated to looking after testing and related activities in the team. This tester is usually busy testing (and often automating) stories for each sprint, with a primary focus on acceptance criteria. If the tester is a "cool kid", then they go above and beyond and test things outside the acceptance criteria too. Cool! Let's park this thought here for a moment. Okay?
James Bach, in his interesting article "Seven Kinds of Testers", beautifully explains the key patterns surrounding testing styles and how testers typically fit one pattern or another, or sometimes a combination of more (or check out this thought-provoking tweet series by Michael Bolton). In over eight years of hands-on testing experience, I have found myself to be of one kind (or at most two) at a time, and by the time I wish to change my hat (or style), it is usually almost time to deploy the feature to production. Pity!
The point is, there is a limit to how versatile a tester can be in the limited period of time for each story they test. Sure, it's not impossible, but I would say it's not very easy either, given the time constraints. Now, imagine that we add "programmers" from the team as other kinds of testers (based on their skills, expertise and experience) working on the same story. Do you not think it will most likely add more coverage to the quality of that feature, without having to spend much additional time on it? Do you not think that the testing wisdom of a programmer would help the tester and the team ship a quality product? I'm sure, now you do!
When I say programmers should contribute to testing, it does not have to be only in the sense that they test the software like testers do. Even if they just develop the required mindset, it's a good start. Indeed, it will be great if they can test it, but I feel that if they could at least understand modern testing, it would greatly benefit project teams.
How exactly programmers can learn to test and get started with it, or how testers can help them onboard with testing, is another topic. It requires a dedicated post (more on that later).
This post is about identifying programmers with a testing mindset, or with skills that can help them test better, while you interview them. I recommend watching out for these skills/traits in an interview:
1. Quest for Context
The scholar John von Neumann once said, "There's no sense being exact about something if you don't even know what you're talking about." In a world that is growing increasingly dependent on highly complex, computer-based systems, the importance of defining what you want to make before making it -- that is, knowing what you're talking about -- cannot be stressed enough. (Exploring Requirements: Quality before Design by Weinberg & Gause).
Developing software is not just about writing a program that will do the stuff; it is more about building a product that your customer would like to use. In the past, I have come across programmers who were excellent coders but failed to care enough about the purpose of the program they were writing. I rarely see programmers questioning the user story beyond acceptance criteria and technical implementations, if any (unfortunately, it's not very different for the majority of testers either).
If I were to hire a programmer, I would expect them to ask context-revealing questions. By that, I don't mean just questioning the business value of the user story. There are things beyond that which matter. What if we find out that another team has worked on a similar solution before? What if we could reuse some components developed by other teams? What happens when a particular feature development requires specific technology expertise and the team does not have it? What happens when stakeholders' understanding of the technical details differs from the engineering team's? What if the implementation of some solution requires tools the team does not have, or the required access levels for that matter?
Sure, one can eventually find these things out when they start working on the ticket, but what's the point in finding things out by accident, when it's already too late?
Programming interviews typically include a coding challenge that gets assessed for the candidate's technical skills, familiarity with known technological issues, understanding of best programming practices, problem-solving skills etc., which are indeed important. However, I am yet to see a programmer being assessed for the kind of questions they ask before jumping onto the coding challenge itself. See if they question the very purpose of the challenge, see if they question the business value, check if they ask about other elements of the Project Environment and Product Elements for that matter. And please check if they ask questions about testing and Quality Criteria, if nothing else. Most of the programmers I have interviewed assumed that there would be a tester in the team who would QA their code; they just had to write the code and throw it into the tester's bucket. You'd better watch out for that kind if your team does not have a tester.
Just like testing, good software development should be treated as an intellectual activity. The better one understands it, the more ways one can contribute to product quality. And it all begins with asking questions.
2. Interactional Expertise
If you are unfamiliar with the idea of Interactional Expertise, I suggest you start by understanding it first. Even better if you can read Tacit and Explicit Knowledge by Harry Collins. I personally found it very useful learning when I was introduced to it by my friend Iain McCowatt.
The purpose of mentioning Interactional Expertise as a skill here is that I find it very important when it comes to having technical discussions with non-technical people, or even when it is about having meaningful technical discussions in a short period of time.
Bringing up a technical topic for discussion in planning meetings or grooming/estimation meetings is usually like opening Pandora's box. Over the years, I have been part of deep discussions in meetings whose only conclusion was to carry them over to the next meeting, or to schedule a separate meeting for them. Then again, special meetings were required for explaining those technical things to non-technical people. Does that not sound familiar to you?
I feel that spending so much time on deep discussions so often is unnecessary, and it can be significantly reduced if all of us (not just programmers or testers) learn the skill of explaining things briefly (and to the point) when needed, without losing the substance or compromising the impact an elaborate version would make. The same goes for explaining technical things to non-technical people. As techies, we can't expect the whole world to understand the language we speak (it would be nice if that happened, though), but we can make things simple by learning the art of explaining them to others in a language and context they understand better.
An added advantage comes when you onboard new members to the team. Regardless of the role they are hired for, a person's IE skills will help them onboard much better, and the skill will definitely help with collaboration and communication. In fact, when testers and programmers both have great interactional expertise, sessions like pair testing or pair programming become super productive. Imagine the value it can add to Mob Programming and Mob Testing sessions. I have worked with programmers who were masters at explaining technical things to non-technical members of the team, as if they were putting a child to sleep by telling a story. Short, sweet and yet satisfying. That's what I mean by Interactional Expertise.
Next time you interview a programmer, look for these skills; it will help you. When I interview testers for it, I usually ask them to explain some technical concept in 50 words, for example, and then the same concept in 100 words. It helps me analyse how good (or bad) their Interactional Expertise is. Asking programmers to write a technical bug report or a user story can also be a helpful trick for evaluating their IE.
3. Understanding of Testability
Maybe I am wrong about it, but I honestly feel our industry still lacks the required seriousness (and awareness) for building testable products. This is not just about programmers being unaware of it, but testers too.
The only times I hear the word "test" in programmer interviews are when they talk about their unit tests, or TDD, or automated tests at most. And it's a pity!
Building testable products is an important part of software development, and it is important that programmers understand how to bake testability in right from the beginning. Sure, skilled testers can certainly be advocates for testability, but it won't hurt if programmers too understand what it means to them and how they can contribute to it.
While interviewing a programmer, I suggest you pay special attention to whether their solution demonstrates at least a few aspects of Intrinsic Testability, as explained in Heuristics of Software Testability by James Bach. If not, at least make an attempt to discuss the other aspects of software testability (listed in the heuristics) with the candidate in general, and gauge their fitness for your requirements.
For sure, the skills mentioned above are equally important for testers too but since "Whole Team Testing" thing is picking up, I wanted to make it explicit for traditional non-testers. Next time when you interview a programmer, please try and see if it helps.
If we need technical testers, we also need programmers who understand testing. And that is reasonable to ask for, isn't it?
Oh, and by the way, I will be touching on some related topics in my talk at the Online Testing Conference. Feel free to join if the topic interests you.
During my ET/SBTM workshops, I have often been asked whether it is possible to perform Session Based Testing in typical Agile/DevOps environments.
I think if a tester knows how to perform SBT (especially with different session types), there should be no trouble doing it regardless of the development methodology their team follows. (If you are unaware of the typical session types in SBTM, then I recommend you check out this informative series by my friend Simon ‘Peter’ Schrijver.) However, I understand that the fast-paced way of working through sprints, shipping small chunks of software regularly, probably keeps everyone mainly focused on achieving the sprint goal. For testers, this often means focusing only on the ‘acceptance criteria’ and moving on to the next ticket waiting to be tested and deployed. And that might force some testers to just forget every awesome technique they know and chase getting “the stuff done”.
If you are a tester (sorry, Agile Tester) stuck in such a situation and feel bad about it, then this post is for you. For a while, I too was stuck in a similar situation. Not that I wasn't getting to perform ET the SBTM way, but I constantly felt something was missing and that there was still more I could do. A couple of important things kept getting skipped in my typical sessions, and all I could do was procrastinate on them. And that was bad, very bad! (yeah)
When I was done feeling guilty about it, here is what I did (and I strongly recommend you try it out too). I created some more session types for myself, some of which I perform twice a week and some daily. And they have been helping me greatly so far. Well, what are they?
1. Critical Thinking Session
I know, a good tester should think critically at every possible occasion. But there is no harm in dedicating a special slot to it. How often do agile testers spend time reading through the backlog and preparing their notes in advance, so that they know what to ask, what extra information they will need for a certain ticket, or can prepare a risk list for a ticket to make everyone aware of it before it's too late?
A dedicated ‘Critical Thinking’ session aims to solve this problem. Schedule at least two such sessions, one before the beginning of a new sprint and one before your feedback/refinement sessions, so that you spend enough time ‘critically thinking’ about the stuff that will soon be on your plate. The sooner you prepare, the better you will be able to shape your subsequent sessions around the actual testing of those items.
Medium sessions of roughly 60 minutes work great for me. Sometimes a short session of 45 minutes is enough. The more you practice Critical Thinking sessions, and the more time you spend with the applications you test, the less time I suspect you will eventually need for CT sessions.
In case you are wondering what I really mean by “critically thinking” about backlog items, please check out the “Mary Had a Little Lamb” heuristic by Jerry Weinberg or “Huh? Really? So?” by James Bach.
2. Monitoring Session
Monitoring production logs, especially after deployments, has benefits of its own. And if a tester does it regularly, there is a lot they can discover through those logs.
If, as a tester, you are not yet doing production deployments, then I suggest you start doing them and make sure to check the production logs (or logs on other environments, for that matter). If for some reason you do not own deployments, then try to spend at least one short session every day monitoring your application's production logs. That's what I mean by a “Monitoring session”.
Thanks to such dedicated monitoring sessions, I have come across some elusive bugs that were hard to catch in the testing environment. Production logs are also a very interesting means of learning about the different ways you can test your application against disfavored use or extreme use (refer to HTSM by James Bach – SFDIPOT – Operations). Not only that, they can also help you identify elusive integration-level bugs which might not be caught easily otherwise.
Monitoring sessions can also help you identify technical bugs, errors or warnings which might not directly affect the end user but still warrant attention and fixing. It's hard to identify such issues in regular functional testing, which usually has a big scope of its own.
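To keep these sessions focused, I find it helps to start from a tiny script that summarises new errors and warnings per component. Here is a minimal sketch; the log format, file contents and component names are purely hypothetical, so adapt the regular expression to whatever your stack actually writes:

```python
import re
from collections import Counter

# Hypothetical log format: "2024-05-01 12:00:02 ERROR payment: gateway timeout"
LEVEL_RE = re.compile(r"\b(ERROR|WARN(?:ING)?)\b\s+(\S+)")

def summarise(lines):
    """Count errors and warnings per component, so a short monitoring
    session can focus on what changed since the last deployment."""
    counts = Counter()
    for line in lines:
        match = LEVEL_RE.search(line)
        if match:
            level, component = match.group(1), match.group(2).rstrip(":")
            counts[(level, component)] += 1
    return counts

sample = [
    "2024-05-01 12:00:01 INFO api: request ok",
    "2024-05-01 12:00:02 ERROR payment: gateway timeout",
    "2024-05-01 12:00:03 WARN cache: stale entry",
    "2024-05-01 12:00:04 ERROR payment: gateway timeout",
]
print(summarise(sample))
```

Comparing today's summary with yesterday's is usually enough to spot the kind of quiet, user-invisible issues I mention above.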
3. Bug-visit Session
This session is about visiting the bugs in the backlog and going through them carefully. Sometimes, over a period of time, some bugs become irrelevant, while others become important enough to be fixed with priority. Revisiting those bugs helps you take appropriate action in time.
If there are bugs logged by other teams, or by the customer care team for example, they can also serve as great ‘test ideas’ that can be extended to other features of the application. Information gathered through such bug-visit sessions can help you create your own risk list or a project-specific heuristic (I have explained that in detail with HEEENA).
4. Test for Testability Session
I can't stress enough the importance of this session, especially in this changing era of software development. One of the key roles a skilled tester can play in modern software development is that of an "advocate of testability". Check out the heuristic for Software Testability by James Bach. Once you understand the dynamics of testability, you will realise that the sooner you care for it (and advise where needed), the more testable the product your team will create. To give you an example, check "stats for nerds" on the YouTube videos you watch. Would it not help if your product started storing such information, which you as a tester could access easily and use to shape your tests?
I suggest you create a checklist for Intrinsic Testability in particular and test your design against it right from the beginning. Please dedicate a compulsory session (short should be good to begin with) to testability, even before you get the build for testing against acceptance criteria. In fact, you could also pair with a programmer or PO when they work on user stories that will later come to you for independent testing. This in turn will help you as a tester to better test the product in the sessions/charters you care about (and you won't get lost figuring things out along the way).
So, those are the four additional session types I have created for myself to do even more effective SBTM in an agile environment. I am finding them very useful and hope you will benefit from them too.
Feel free to get in touch if you have any questions or would like to discuss it further.
Header image credit - blog.xebia.fr
Before some of you start freaking out, let me make this clear - no, it’s not the pot some of you might be thinking of :) It’s an abbreviation of “Problem on Table”, an activity we have recently started doing for testers at XING AG.
As a facilitator, I have experienced the benefits of the POT method of problem sharing (and solving). I was first introduced to the idea at the ITB tester meet-ups I participated in, in Pune/Mumbai, and in order to facilitate a similar exchange for testers at XING, we decided to give it a try with a twist. And guess what? It worked really well!
How does it work?
The key idea of a POT session is that testers come together and share their testing problems. This is done to get support from fellow testers who might have faced similar problems before, or to find more testers facing similar problems and then work together to solve them.
The format of the POT sessions I ran in the past tended to turn chaotic quickly, because of the lack of moderator control over discussions, too many topics being discussed at a time, and so on. That sometimes confused the problem presenter instead of getting them solutions to the problems they shared.
Presenting "POT – the Lean Coffee style"
To make the exchange less chaotic and inclusive for everyone, we decided to do it the Lean Coffee way. And boy, it did wonders!
If you are aware of Lean Coffee, and now of POT, then it should be rather easy to guess the format.
Set-up notes for facilitators:
Here are some simple rules we followed that worked great for us:
And that’s it. You are ready to roll!
Our first POT-Lean Coffee style session went on for two hours with a five-minute break in between, and I am super glad that we had so much cool stuff to discuss and that everyone had some idea or other to solve their respective problems. That’s the power of teamwork, isn’t it?
The key takeaways from our first session:
In two hours of highly engaging discussions, we covered the problems below and the solutions we felt could help solve them. I strongly feel that many testers face these challenges regardless of the organization they work for, and hence I am sharing the solutions we discussed, hoping that others with similar problems might find them helpful.
Problem 1: What happens when Tester goes on vacation?
A classic problem many Agile teams face, in my opinion. When the tester (who is often the only person doing testing in the team) goes on vacation or falls sick, the team usually suffers and tends to rely on automated tests or performs minimal testing, which often results in big problems. What can a tester do to solve this? Or rather, what can teams do to solve this problem? Here are some ideas that have worked for some of us:
Problem 2: How to manage integration testing with different teams/apps?
If the app your team builds is just one of several applications working together as one business product, it often becomes challenging to plan and maintain hassle-free deliveries throughout. And end-to-end testing becomes even more challenging under some circumstances.
Different teams are likely to have different notions of quality, different development methodologies, different business plans and thus, naturally, different release plans. Things become chaotic when testers are left alone to solve such problems, or when such problems are identified very late in the development of a feature with a rigid deadline.
And who can understand such problems better than a fellow tester from another team? Do you see a solution there? We did, and I believe it helps. Here are some ways to facilitate that:
Some of the other issues we discussed were around pushing for bug fixing, fixing the high-bug-tolerance problem of agile teams, common pitfalls of agile testing, what real agile testing should look like, etc. One of the suggestions from our experienced tester Dirk Meißner was to hire programmers who take testing seriously and have a quality mindset. I feel that's a very good approach. Prevention is always better than cure, isn't it?
So that was some of the key things we discussed in our first POT-Lean Coffee style meet-up. And we are already excited about the next one. How about creating one in your organization and sharing your learnings with us, just like I did? Let there be some coffee, some delicious cake and, of course, some Problems on Table. Enjoy!
Header image credit - wall.alphacoders.com
Nothing beats the pleasure of talking testing with old friends on a cozy weekend afternoon. Even better if it is a long weekend and the weather is awesome (I must tell you, I have fallen in love with spring in Germany).
I met Pratik last weekend after quite some time. Apart from working together on Tea-time with Testers, Pratik and I have also worked on setting up and running a Testing CoP under the TCS-Cisco relationship. Pratik is now setting up a Testing CoP for his new team and wanted to brainstorm about the things we did in the past. Instead of keeping our notes to ourselves, I thought I would share them on this blog so that others can benefit too.
Before I jump to the outcome of our brainstorming, I would like to talk a bit about my personal experience with Testing CoPs.
After TCS, I took up an opportunity with Barclays Investment Bank (the GTC led by Keith Klain), where I was privileged to work with awesome testing leaders like Leah Stockley, Kshitij Sathe, Shrini Kulkarni and Sudhanshu Bodoni. During my tenure at Barclays, I worked as Test Strategy CoP Lead and then Testing L&D CoP Lead under the Innovo8 program. Apart from that, I was teaching RST full-day workshops and running Lean Coffee style testing meetups with an awesome bunch of testers in the organisation.
All the above activities helped me understand testing even better and, most importantly, the experience taught me the skills and lessons for successfully leading a CoP. I value those lessons a lot, since a CoP set-up is generally different from a typical project team set-up, and each requires somewhat different skills to lead.
That said, below are some lessons learned that we feel could help anyone working with Testing CoPs. However, I must say that they are based on personal experience, and others' experiences may differ.
1. Start with having a clear goal in mind
Working with CoPs can be tricky if you do not have a clear goal defined. A big part of it also depends on why your organisation or unit wants to start CoPs in the first place. Is the purpose to stimulate communication and collaboration? Is it being done to foster innovation? Is the end goal to enhance productivity and build skill sets? Is it about creating reusable components? Or is it as broad as establishing an intellectual work/testing culture? Also keep in mind that organisation-wide CoP goals may not clearly tell you what they mean for the individual CoP you are looking after. If you are to lead a CoP, you must understand the bigger goals and be able to define goals for your own CoP that are aligned with them. At the same time, keep in mind the dynamic nature of CoPs and be open to a shift in focus as the community evolves. The challenging part is not letting the group drift too far from the original goal while keeping them motivated at the same time. There is a way to address this. Let's talk about it a bit later.
Indeed, it is possible to start with no clear agenda and then let the CoP evolve. But I personally see the danger of running into chaos, and into situations that are difficult to manage, with that approach. Of course, you can work with other members of the CoP and come to a common conclusion on the goals for your CoP. Again, this is not as easy as it may sound. There might be conflicting situations, differences of opinion, disagreements on action items, lack of participation, motivation issues and what not. More on that later...
2. Know your success criteria
Another important part is being able to measure the success or effectiveness of your CoP. One may argue that CoPs are meant to be informal, ever-evolving and so on, but it is hard, rather pointless in my opinion, to run a CoP without any measures of success. Wenger-Trayner, in their book on CoPs, mention that, "It may be difficult to attribute with 100% certainty the activities of a community of practice to a particular outcome. You can, however, build a good case using quantitative and qualitative data to measure different types of value created by the community and trace how members are changing their practice and improving performance as a result." And I totally agree.
As a Testing CoP lead or member, you may want to understand what practices testers are currently following, what the current challenges are, and what changes they have made since the inception of your CoP. That will tell you what impact your CoP is having and the value it is creating. A classic example from my experience with Barclays' CoP was that we saw testers adopting Context Driven Testing and RST practices in their work. Switching from the traditional Excel-based format of test case writing to mind-maps and ET charters was noteworthy. With the CoP in TCS-Cisco, we noticed more and more testers taking an active part in online testing forums, writing testing blogs, participating in competitions, creating reusable components and coming up with solutions that had never been discussed before. As a matter of fact, we created the "Tea-time with Testers" magazine for the global testing community by taking inspiration from the work we (Pratik and I) did there as CoP leads.
3. Be the change you want to see
From personal experience, I can say that motivating others and inspiring action is a key factor in the success of CoPs. Throwing out ideas is one thing; proposing ideas backed by practical experience is quite another. The latter is more important for a CoP, since a "Community of Practice" is essentially about people with practical experience of what they come together for, as opposed to a "Community of Interest".
To lead a CoP, or to be an effective member of one, you need hands-on experience of the solutions you propose and the ideas you pitch. The more experience you have, the better you can help others. You cannot just expect someone to present at a conference, write a testing article or implement some new testing practice without having done it yourself first.
Most importantly, people in general don't like to be told what to do (thanks, Leah, for this important tip), as they have intelligence, experience and opinions of their own. And that's quite natural. If you want people to buy into your ideas, you had better demonstrate how you did things, what benefits you gained and how you solved problems along the way. That way, most open-minded people will at least try your suggestions out before turning a deaf ear to your proposals. Remember, it's also about people and community, and not only about your own personal choices.
4. Be a People Person (and not a person liked by just a few heavy-weight people)
Even though it is a community of Practice, in the end it is a Community, and it's about people. As mentioned earlier, the lead of a CoP may often need to deal with conflicting situations that can prevent members from contributing their best. Some of the reasons for these barriers, as per Wasko & Faraj (2000), are egos and personal attacks, large overwhelming CoPs, and time constraints. I would add petty politics and favouritism to this list.
To run a successful and healthy CoP, it is important to connect with people very well, understand their preferences, and listen to and value their opinions and rejections alike. The better connected you are with people, the more likely you are to positively affect their participation in the CoP. Mapping knowledge and identifying gaps is one key problem CoPs can solve, and a 'people person' is likely to be well equipped to make it happen, for they will know "who knows what", "who can solve xyz problem" or "who can be the right person to make something happen". The best thing you can do as a lead is to give the right opportunity to the right person, and not get tempted to give it to someone just because you like them more. Be transparent and care for everybody's contribution, and your people will love you.
Sometimes leads also fall into the trap of keeping only the heavy-weight people happy, and under that pressure they expect other members to support them. This does more harm than good and is a great recipe for stopping evolution from happening. There is a very good reason why Kipling says, "walk with Kings—nor lose the common touch". Listen to the wise men :)
5. Care to connect with the Community outside
Just knowing about the people and practices within your organisation is not going to help much if you want your CoP to truly make a difference.
Whether you are leading a CoP or aiming to become an influential member of one, your connection with the outside community (the global community, if possible) is going to empower you like nothing else. Connect with people and communities outside, see how they are solving similar problems, learn from their mistakes, see how they make things happen, know their criteria for success and the resources they use. Can you borrow something from them? Can you make use of their expertise to serve your CoP's interests? Perhaps you can invite someone from the outside world to conduct a workshop or deliver a guest lecture? There are endless possibilities here... all you need to do is care about your own learning and community connections.
If you know outside people in person, even better, but knowing about the work done by some expert can also help you make a stronger case for your proposals. Read articles, participate in community forums and chat groups, attend conferences, join webinars... there are endless possibilities here too. To give some examples: my experience with Weekend Testing sessions gave me great insights for running test sessions and introducing new testing concepts. Because of our personal connection, James Bach made some guest appearances over Skype for our teams and local meet-ups, and that inspired testers to a great extent. My volunteer experience teaching BBST Foundations through AST connected me with several passionate testers worldwide, and we still help each other with ideas when needed.
The global testing community is the best community of practitioners I have ever seen, and you will never run out of help should you ever need it, that is assured.
6. Make it Bottom-up but also a Top-down thing
If your intentions are good, your efforts are sincere and you have powerful friends in your organisation, then there is nothing that can stop you from doing good. Understand what the big people sponsoring CoPs want to achieve, especially with the CoP you work with. Ask them how you can help solve their problems together with the awesome bunch of people in your CoP. This will not only help you align and prioritise the goals for your CoP, but it will also help people understand that their contribution (no matter how big or small) matters to the organisation. And this motivation is far more powerful, in my experience.
On the other hand, try to bring the big people into your CoP meet-ups or important events sometimes. A small pat on the back of your group will make miracles happen, I tell you. I still remember some occasions when Keith Klain (who was GTC's Global Head then) found time in his busy schedule and spent it with our Lean Coffee group. The impact of that interaction lasted a long time, and it helped us gain better clarity on what was expected from us. Needless to say, such interactions help you crack some hard nuts without having to behave tough yourself. Try not to burn bridges with people, because you are going to have to work with them in the end. Leave such problems for the big people to handle. Their attention to your work is enough to solve some tricky conflicts.
These things appear small in nature, but I recommend you try them, and you'll thank me later for this tip.
7. Don't make promises you can't fulfil in your capacity
It's hard to motivate people, but not entirely impossible. What matters is your ability as a lead to understand what motivates each individual.
In order to keep people engaged and motivated, leads sometimes make promises of tangible returns such as a promotion, pay raise or bonus. Don't make such promises unless you can really make such decisions or influence them in any way. Instead, find out what motivates people by spending more time with them. Extend your engagement with them beyond formal meetings. Build a personal connection. Seek out people who are motivated by intangible returns like reputation, self-satisfaction and self-esteem, or networking opportunities aimed at interaction, learning and sharing. And try to make them visible so that others also take inspiration from them.
8. Identify your own motivation and be true to yourself
On top of all this, make sure to identify what motivates 'you' first. The prospect of rewards sometimes makes people want to do things, and their interest and passion usually fade once they get what they wanted (a pay raise, promotion etc.) or when they realise it is not going to happen for them. There is nothing wrong with being motivated by tangible returns, but please also make sure that your passion does not die regardless of the returns you get.
In my opinion, it's passion that makes a big difference. Keep doing your work sincerely and good things will happen to you sooner or later.
Well, that's all I could reconstruct from the notes of our discussion.
From my experience of leading CoPs for over five years, I can say that one needs to be a jack of all trades and a master of many to be successful here. Some of those skills are relatively easy to acquire, but none of them can be acquired overnight. Certain things can be learned only through experience and over a period of time.
You have got to be passionate and patient at the same time, and that has been my biggest takeaway while working with CoPs. I hope you find these tips useful in your adventures with CoPs. Don't hesitate to reach out to me if I can be of any help or if you would like to discuss further. Good luck!
In my experience as an Agile tester so far, one thing I see organisations still struggling with, or trying to get better at, is estimation.
While thinking deeply about what makes our estimations go wrong, I realised that there is still a lot we do in projects that we do not record, measure or consider as factors that matter. And that is probably why we are still trying to get better at estimates. And that is also probably why ideas like #NoEstimates make sense and are getting popular.
A few thoughts on #NoEstimates first
At the Agile Testing Days 2016 conference, Vasco Duarte gave an interesting keynote on #NoEstimates. I admit I do feel a bit brainwashed by his ideas, but I believe that NoEstimates is not about "not estimating at all". It's about estimating sensibly, without blindly following the tools that are widely practiced.
And, in my opinion, there is still some way to go until the industry understands and starts practicing the key idea behind NoEstimates. Does that mean we should stop estimating right now and wait for the entire industry to get on board with it? Absolutely not!
Until this idea develops further, I think we should continue making efforts towards better estimates. After all, the fact is that the business needs some date to baseline its plans on, and I find that totally reasonable.
What's wrong with how we currently estimate?
I think the problem is not just with how we estimate (story points or T-shirt sizes, for example) but also with what we measure, how we measure it, and what we take into account from all those measurements.
I strongly feel there is a cause-and-effect relationship between the things we measure and the estimations we base on them. If we do not measure everything that matters, our estimations are likely to be flawed. And honestly, it's high time our industry stopped measuring things that are easy (and cheap) to measure rather than those that matter but are difficult to measure.
What are those things that (also) contribute to poor estimates?
I can only talk about things I have seen making an impact in my own experience, but I feel they may very well be present in your project environments too. Generally, these are short-lived impediments and side-tasks that we forget to record, measure and consider, such as:
It's not that we totally ignore the effort spent on all of the above, but typically those cards remain on the dashboard only for the duration of the sprint and are thrown in the dustbin when the sprint is over. What if we started recording, measuring and considering them for future reference? Would it not help? And if yes, where is the problem?
Where is the problem?
The problem, as I see it, is that there aren't enough techniques to help people identify what matters to measure and how to measure it. In order to measure things, we first need a mechanism to observe them and to record those observations. I feel that for a project team as a whole, we currently don't have any effective mechanism to address this. The same problem had been haunting the testing community badly (and caused great damage too), but I'm glad that, in the form of Session Based Testing (and Management), the community found a sensible way to do it right.
Hey, but that's just about testing and test management. What about measuring things (like the above) equally effectively for the work that programmers do? After all, estimations are for and from the whole team (not just programmers or testers alone), unless the context demands otherwise.
Thinking about the solution
I am wondering: what if we extend the key idea of Session Based Testing to programming as well? Yes, something like Session Based Programming?
If you are not from a testing background and don't know about SBT(M), then I encourage you to read about it first. If you are a tester and still don't know it, then please do yourself a favour and read about it NOW!
Well, what do I really mean by Session Based Programming? Here is my proposal:
1. Development is done in focused, uninterrupted and time-boxed 'programming sessions', typically 30 or 45 minutes long (short sessions) up to 80 or 90 minutes (long sessions).
2. Just as testers define a mission and create charters for their test sessions (in the SBT space), programmers may pick stories or tasks and work on them in a time-boxed way.
3. Typically, testers perform different types of sessions in SBT, such as Analysis sessions, Survey sessions, PCO sessions, Deep testing sessions etc. Similarly, programmers may also classify the type of session they will be working on. Right off the top of my head, I would propose session types like the below:
These types can vary from project to project. (Maybe you will think of new ones; do let me know!)
4. Keep a record of the actual time spent on each type of session, along with any challenges faced, and store it in some central location.
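To make point 4 concrete, here is a minimal sketch of what such a session record could look like. The field names, session-type labels and JSON-lines storage are my own assumptions for illustration, not part of any SBTM standard:

```python
from dataclasses import dataclass, asdict
import json

@dataclass
class ProgrammingSession:
    """One time-boxed programming session, recorded for later reference."""
    story: str            # story or task identifier
    session_type: str     # e.g. "coding", "pairing", "bug-fixing"
    length: str           # "short" (~30-45 min) or "long" (~80-90 min)
    actual_minutes: int   # time actually spent
    challenges: str = ""  # anything that slowed the session down

def log_session(session, path="sessions.jsonl"):
    """Append a finished session to a central JSON-lines log."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(session)) + "\n")
```

A flat append-only log like this is deliberately simple: anything the team can query later (a spreadsheet or a ticket field would do just as well) is enough to build the historic picture the next section relies on.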
And how do we estimate with these?
Once programmers have spent enough time working this way, they are most likely to develop, over time, a realistic understanding of how many sessions of each type are needed to complete a given story or task. This is because it will be based on actual experience, with the actual time spent on each type of session in mind.
Let me give a small example. Assume that some story XYZ, for a particularly complex feature in your application, roughly requires some back-end programming effort, some front-end changes and testing. A programmer (and tester) who has spent enough time working on the respective area in a Session Based way may come up with estimates like:
BE programmer: 3 short back-end coding sessions
FE programmer: 1 long FE+BE pair programming session
Tester: 1 short Analysis and 1 long deep testing sessions
Assuming that a short session in your team corresponds to 30 minutes and a long session to 80 minutes, then by combining the above inputs we can estimate story XYZ at roughly 280 (90+80+30+80) minutes of work.
But wait: assume that the team's historic record of time spent on unplanned-activity sessions, bug-fixing sessions and deployment hiccups per sprint (of 10 stories on average) amounts to around 200 minutes, that is, 20 minutes per story. I would add this as a buffer to the initial estimate of 280 minutes and count the final one as 280+20=300 minutes.
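The arithmetic above is trivial, but as a sketch of how the session log could feed estimates (the session lengths and the 20-minute buffer are the assumptions from this example, not fixed numbers):

```python
# Session lengths assumed in the example above (minutes).
SESSION_MINUTES = {"short": 30, "long": 80}

def estimate_story(planned_sessions, overhead_per_story=20):
    """Sum the planned (count, length) sessions and add a per-story
    buffer derived from the team's historic overhead record."""
    base = sum(count * SESSION_MINUTES[length]
               for count, length in planned_sessions)
    return base + overhead_per_story

# Story XYZ: 3 short BE coding, 1 long FE+BE pairing,
# 1 short analysis and 1 long deep-testing session.
planned = [(3, "short"), (1, "long"), (1, "short"), (1, "long")]
print(estimate_story(planned))  # 90 + 80 + 30 + 80 = 280, plus 20 buffer = 300
```

The point is not the formula itself but that both the session lengths and the overhead buffer come from recorded experience rather than gut feeling.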
What is the benefit?
I think there is more than one, such as:
I feel that the testing community has mostly been at the receiving end whenever some new, shiny, cool thing happens (e.g. DevOps), and testers are usually left to figure out how to "fit in". The effort testers spend on these "fitting in" attempts is usually so high that they hardly get to contribute to advancements beyond their own craft.
If testers are to evaluate and contribute to software quality, then they should also evaluate and contribute to the quality of the processes that affect them (and everyone else). Session Based Programming is my humble attempt to accomplish that.
I look forward to knowing what you think of it. Feedback and criticism welcome :)
[On a side note, I presented this idea in my Lightning Talk at the Agile Testing Days 2016 conference and got some good feedback, especially from Janet Gregory. We discussed working on it together to take it further, and I would like to thank Janet for encouraging me to do so.]
Hmm... I was sort of in a cave after day 5, and now it's action time for me again.
Not that I was totally out of the challenge, but I could not manage to blog about it. So... here is the list of tasks on my plate and what I did about them (updates for the pending tasks are in brown).
Day 6. PAIR WITH SOMEONE OUTSIDE YOUR BUSINESS UNIT OR OUTSIDE QA
I paired with a freelance programmer who was helping us with our ongoing project. Time was critical, and he needed someone with enough knowledge about the stuff he was working on. We paired up to develop and test it on the fly.
I used the MIPing technique (Mention In Passing) to report my bugs/findings and re-tested things as he kept fixing them. It was indeed fun pairing with a programmer and contributing right from the coding phase of a new feature being built. We benefited from each other's expertise. My programmer counterpart got the onboarding he needed and also learned about the dependencies we had to resolve. I benefited by closely watching how he quickly developed the components, wrote his unit tests and resolved conflicts.
Day 7. GIVE SOMEONE POSITIVE FEEDBACK OUTSIDE OF QA
The last few weeks have been crazy for us at work, since we have been working on some serious stuff, and the fun part is the complexity of the solution we have. At times we can't escape dealing with the complexities of the software/project, which makes testing and bug investigation even more challenging. The reasons are obvious: the interfaces, dependencies, various states of the systems involved, prerequisites for use cases and schedules, newly added features and their subsequent impact and regression all add to the existing complexity.
Among other things we found and fixed, we were struggling to resolve one bug that was hidden deep under the interfaces and lost in their complexities. Finally, Lars Schirrmeister (our Senior Back-end Engineer) jumped in and decided to get to the root of it.
After spending quite some hours on fixes and trials (try, and see if it works), he finally found the problem. And fixed it, of course. But what was so special about it? It indeed was special. The problem Lars identified was found thanks to his own understanding of the technology, algorithms and syntax. The issue he found was never raised as an error or exception by the system itself, and finding a hidden problem in code laden with complexity, one that does not show up in compilation or debugging at all, is indeed an intelligent feat. Good job, Lars!
It's an honour for me to be working with such a highly competent team and managers. There has always been something or other I could mention in their admiration, and this particular case is one example of it. I'm glad that I could talk about my team with the world because of this challenge. :)
Day 8. HAVE LUNCH TOGETHER AND POST A PICTURE
Who could be a better date for this day of the challenge than one of my fellow participants? :) It was great meeting Daniel Knott over lunch to discuss testing and this 20-days challenge, among other cool things. Feel free to check out Daniel's updates on this challenge.
There goes our picture together:
Day 9. AUTOMATE ONE WORKFLOW FROM ANOTHER BUSINESS UNIT (Pairing allowed, but be the driver)
Could not manage to complete this task in time. It's WIP though.
Day 10. PERSONAL CHOICE (Testing related, surprise us!)
I'm conducting a workshop on Impact Mapping for my colleagues on 13th Oct. The idea is to implement the concept at team level (rather than keeping it limited to Senior Managers, Product Managers, SCRUM Masters etc.).
My workshop will mostly talk about implementing IM at story level and writing stories with enough information to build testable features with wider coverage. I won't say it is only related to testing, but it will definitely benefit testers working in Agile teams.
Day 11. LISTEN TO A TESTING PODCAST
I listened to the DevOps and Technical Testers podcasts featuring Noah Sussman, Michael Larsen, Perze Ababa and Justin Rohrman (yeah, all cool people :) )
I feel this discussion has come at the time when testers need it most. If you are wondering what DevOps is and what roles testers can play in it, then please consider listening to it. Highly recommended!
Day 12. PERFORM A CRAZY TEST
Hmmm... this is my kind of topic. Well, honestly, I don't remember the last time I did not perform a crazy test :) The degree of craziness of a test might vary from tester to tester, based on what they perceive it to be.
For me, a test performed with minimal knowledge of the application, acting like a first-time user, is an equally crazy test. By minimal knowledge I mean not getting blinded by the specifications and testing things with open expectations. Paul Holland has written an interesting piece around it; please do read it.
On the contrary, while testing the recent build of our product, I came across many ways to perform crazy tests with plenty of knowledge about the software, its dependencies and the implemented solution. And I learned a very important thing: a crazy test does not always have to be about the 'disfavoured use' or 'extreme use' of software (please check Product Elements in HTSM by James Bach). Playing around with configurations and the states of test data, teasing the prerequisite states of a system meant to produce some desired result, or challenging the implemented solution itself via a variety of tests can also help uncover a lot of interesting things. Defects won't always be the outcome here, but knowing in advance how the system behaves under all such adversities can help build lots of useful tests that find hidden and elusive bugs. And it may also help you be prepared with fall-back solutions.
It's hard to explain without enough details, but, for example, one can perform a crazy test by trying to break the protocol/steps meant to be followed to reach a certain state of the system. What happens if state 1 is left buggy and state 2 is still turned ON? What happens if one tries to enter state 2 without fulfilling the prerequisites from state 1? And so on...
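To make the idea concrete, here is a toy sketch (entirely hypothetical, not from any real product) of a system that guards state 2 behind state 1's prerequisites, and a "crazy test" that simply skips the protocol. The class and method names are my own illustration:

```python
# A hypothetical two-state system: state 2 should be unreachable
# unless state 1's prerequisites have been fulfilled first.
class ToySystem:
    def __init__(self):
        self.state = 0
        self.state1_ok = False

    def complete_state1(self):
        # The official protocol: fulfil state 1 before moving on.
        self.state = 1
        self.state1_ok = True

    def enter_state2(self):
        # The guard under test: refuse entry without prerequisites.
        if not self.state1_ok:
            raise RuntimeError("prerequisites from state 1 not fulfilled")
        self.state = 2

# Crazy test: jump straight to state 2 without touching state 1.
sys_under_test = ToySystem()
try:
    sys_under_test.enter_state2()
    print("bug: state 2 entered without prerequisites")
except RuntimeError as err:
    print("handled gracefully:", err)
```

Whether the system raises a clean error, silently enters a broken state, or crashes is exactly the kind of interesting information such a test surfaces, even when it files no defect.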
More on that with publishable examples later...
Day 13. DOWNLOAD A MOBILE APP, FIND 5 BUGS AND SEND THE FEEDBACK TO THE CREATOR
It's the last day today, and I doubt I'll be able to make it. However, I had a try at testing the recent release of iOS 10 and noticed that the spell checker wasn't working as expected. That is, when you select an incorrectly spelled word for correction, the options you get are only for formatting, copy-paste etc., and there are no suggestions for the possible correct word.
I do have plans to test some interesting apps, though... but time demands that I do something else, more important, at this time :)
Day 14. FIND AND SHARE A QUOTE THAT INSPIRES YOU
There isn't just one that inspires me; I have a bunch of them in my collection. Sharing some of my favourites, though:
1. Endow your will with such power that at every turn of fate it so be, That God himself asks of his slave, "What is it that pleases thee?" - Allama Iqbal
2. Your ideal form of influence is to help people see their world more clearly and then to let them decide what to do next - Jerry Weinberg
3. There is no test for ALWAYS - James Bach
Day 15. CONNECT WITH A TESTER WHO YOU HAVEN’T PREVIOUSLY CONNECTED WITH
Connected with Cassandra H. Leung (@Tweet_Cassandra) and Abby Bangser (@a_bangser) on Twitter. It's always a pleasure to connect with like-minded testers.
Day 16. SUMMARISE AN ISSUE IN 140 CHARACTERS OR LESS
Take 1: An "issue" can be anything that concerns the stakeholders. (Char 59 + Whitespace 9)
Take 2: The real issue is that I can't give more details about because it is an internal thing. (Char 88 + Whitespace 17)
Day 17. FIND A USER EXPERIENCE PROBLEM
I came across some websites offered in multiple languages (e.g. English and German) where the image illustrations were not updated for the chosen language. For example, if adverts, campaigns and some "how-to"s are shown as pictures made in only one language, it makes for a bad user experience for the audience that doesn't understand that language.
Day 18. SHARE YOUR FAVOURITE TESTING TOOL
I like a bunch of them, meant for different purposes. By the way, a testing tool, for me, is any tool that assists me in testing better.
Day 19. SAY SOMETHING NICE ABOUT THE THING YOU JUST TESTED
We successfully rolled out a very complicated and critical release to production. Because of the complexity of the solutions and the new things we added on the fly, I was a bit skeptical about completing the testing, but I'm glad we managed it well and that the code was robust enough to handle all my crazy (and not-so-crazy) tests gracefully.
Day 20. TEST YOUR PRODUCT FOR A QUALITY CRITERIA, WHICH NORMALLY IS NOT A FOCUS IN YOUR BUSINESS UNIT
I do this all the time, to be honest. My checklist of quality criteria is pretty comprehensive, and we keep adding new aspects to it when we see enough problems of a kind to count. However, I recently added some items for "Compliance" testing, which we were not covering with special focus before.
Day 5 is about "Coming out of your comfort zone".
Well, it took me a while to figure out my own comfort zone, since I had never thought so deeply about it. Whatever has been thrown at me as a 'testing task' so far, I have tried to get through like a go-getter. But hey, we all have special likings and comforts for something or other.
Thinking about it made me realise that I feel sort of satisfied with the testing philosophy and knowledge I have received from James Bach, Dr. Cem Kaner, Jerry Weinberg and Michael Bolton (to name a few). My expertise is mainly in helping teams/organisations implement this knowledge/philosophy (especially around RST) in a way that fits their contexts better. Indeed, it's not a 'one idea fits all' thing; rather, it requires me to constantly read, re-learn and understand the ideas so that I can find context-appropriate solutions for different problems. But I too want to have something that I can claim as my own thing, theory and work. Yes, I do teach my classes and workshops with my own methods and ways of explaining things, but the key concepts are primarily inspired by work already done by the experts mentioned above.
While discussing testing with James Bach the other day, he told me that, as his student, he wants me to "innovate" and bring fresh ideas and perspectives to testing. And that's what I am currently working on. I am reading and researching ancient Vedic scriptures that I feel will help me bring new ideas to testing, or maybe re-learn existing ideas in a different light (which is equally important).
That's the kind of stepping out of my comfort zone for me. It's a long process, unfortunately, but I'm sure it will be worth it. I am excited about what I'm currently working on and equally excited to bring it to the table for the community to see, comment on and give feedback.
Thanks to the #20DaysofTesting@XING challenge for making me serious about it once again. Until tomorrow then, folks...
On day four of the challenge, I am expected to find a testing event (online or in person) to attend.
I have signed up for the New Model Testing: A New Test Process And Tool webinar by Paul Gerrard, organised by the TestHuddle team of the EuroSTAR conference. And it is happening today :)
Looking forward to attending it. It's always a nice experience to learn from Paul and the new, interesting things he keeps working on.
Oh, and by the way, Paul did an interesting webinar on the Internet of Things with Tea-time with Testers in the past. Feel free to check it out if you like...
That's all for today. Stay tuned for more amazing stuff ...
As part of day 3 of the #20DaysofTesting at XING AG challenge, I was expected to read one blog and comment on it. But the blog I chose to comment on encouraged me to write a blogpost instead of just leaving a comment.
While reading an interesting discussion around testing and the “100% test automation” thing, I stumbled upon the “No Testing” blog by flowchainsensei (aka Bob Marshall), shared by one of the participants in the discussion (thanks, Nilanjan).
This is an interesting post, I must say, particularly because the arguments and ideas presented by Bob made me ponder and encouraged me to think critically, yet again, about the value of my own profession, i.e. software testing.
The blog is a bit old and I don’t know if Bob’s opinions have changed, but I would like to discuss some of the ideas he has expressed in it. I must mention that this blogpost is not a response to Bob in person; I have great respect for his work and I admire his writing around Agile/software delivery. I decided to discuss the ideas from his blog mainly because I still come across people who share similar opinions. This post is my humble attempt to explain what I think about those opinions as a “passionate tester”.
If I have understood Bob correctly, his key arguments are something like the ones below, and I believe that the majority of fans of the “no testing” idea share them (more or less):
Alternative to testing:
While I tried to think deeply about the argument above, I felt that it is indeed possible, and it is already being done by testers who understand what testing really is, who know how to test software rapidly, inexpensively and in a way that still stands up to scrutiny. It is already being done by testers who know how to do “more with less”. The only difference is, they still call that strategy ‘testing’, but hey, what’s in a name as long as it does what matters?
I would not hold it against Bob (or anyone else with similar opinions) if they want to find an alternative to “testing” (whatever it might mean to them), because it’s possible that what they saw someone doing as “testing” was no more than filling spreadsheets with Pass/Fail and being a bottleneck for deliveries. If that makes them want to find an alternative to ‘testing’, or to get rid of it completely, then I can totally understand.
The good part is, testing is far more than just a process, profession, phase or strategy for most thinking testers today. For us (thinking testers), testing is a performance, as James Bach rightly says. And the number of testers who think this way is growing like never before.
Testing is needed so that testers can earn their living:
This can’t be really true anymore, at-least for the testers who know how to add value with their work and what their worth is in software development. I completely understand the humanitarian aspect here but with due respect I deny that ‘testing’ is needed so that “testers” continue to earn their living. This holds equally true for all sort of professions in software field for you never know what would science discover tomorrow.
Today, earning the respect of peers appears an overrated issue to me, to be honest. If non-testers understand how “testing” helps their work then there is not any reason why someone would disrespect testers. How can someone’s ignorance be someone else’s problem? As a matter of fact, I have seen programmers feeling envy of their testing/QA counterparts for the kind of weight they hold in a team and the way testers get treated as project-trusted advisors. I have seen programmers who do not understand the difference between writing a code “that does what specification says” and to “develop a product that customer would like to buy/use”. Does that mean “programmers” have a respect issue? I don’t think so. Respect issue with testers probably gets highlighted because they are far less in numbers compared to their programmer counterparts (1:4 I guess?) and unfortunately, bad testers continue to outnumber the better ones. Slowly but surely, this is changing for better.
Testing adds little or no value from customer’s point of view
The value testing can add to a project is best summarized by Bob’s excellent definitions of Quality and of the core purpose of Software Development.
According to Bob, the core purpose of Software Development is, “to create elements of complete products or services. Complete products or services, which attend to – and hopefully meet – the emotional, human needs of a variety of interested people”
Bob’s definition of Quality says, “It is meeting the needs of some several collections of people” and I agree with that in principle.
Meaningful testing caters to exactly those needs, and if that’s not value, then what is? Most importantly, the customer (or anyone, for that matter) will understand the value provided by “testing” only if they know that “testing” has a role to play in it. Do we give up on better practices or better technological choices just because the customer fails to understand them? No; instead, we help them understand what they really want and how best we (as service providers) can deliver it. Why give up on testing, then, if the customer does not know its worth?
Customers are unkeen to pay for testing
I feel that customers are unkeen to pay more for the “entire cost of software development”, not just for testing in particular. Customers want a quality product at a price they can afford. Whether to achieve that by cutting testing short, doing no testing at all, or employing testers who know their job very well is the service provider’s problem. And I feel there is a lot of other waste we need to get rid of apart from bad testing. I would highly recommend checking out “Lean Software Delivery” by Matt Heusser if you are curious to know more about this subject.
It’s possible to deliver software that "just works" without having to “test” it
I admit it is possible, as long as our "it works" matches the customer's expectations and user experience. But ensuring that they match is hardly possible without doing any testing.
Some years ago, I asked the same question of Jerry Weinberg, and his response was detailed enough for me not to think about that possibility anymore.
I asked Jerry if there is any method/strategy that can eliminate the need for testing, or for testers as a role. Jerry’s response is below:
You're kidding, right?
This is a pipe dream of managers who do not know how to manage s/w development, just like their dream that some magic method will allow them to get rid of developers. "Tester" might not be a job description, but "testing" had better be part of somebody's job description--and taken seriously.
Over more than 50 years, I have watched managers fall for these illusions. Sure, it would be nice to be able to develop s/w without testing, or without developers. Indeed, it would be nice to become a billionaire without leaving your easy chair in front of the TV. But just because it would be nice, that doesn't mean we know how to do it.
The only times when I've seen s/w developed w/o testing, the s/w produced was not usable, and nobody wanted it. So, I guess if nobody wants the s/w, it could be developed w/o testing.
Heck, I could cook a fabulous meal as long as nobody wanted to eat it and live through the experience.
The times are changing fast, and with the pace at which science is advancing, I sometimes wonder, "Will humans ever be needed in the future to build software for customers? What if someone builds a computer that reads the requirements and delivers the software just like a pizza fresh from the oven?"
“No Testing, No Agile and No Software Delivery” attempts to flag up these questions. No soapbox. Just open enquiry.
Some awesome colleagues at my workplace (Maik Nogens and Ionut Oancea) have come up with this cool idea of 20DaysOfTesting@XING, along the lines of what the Software Testing Club did some days back.
I regret missing the 30DaysOfTesting challenge by Rosie, so I have decided to participate in this other opportunity, at the workplace itself. I'm usually engaged in such cool ideas from the other side of the table (say, as an organiser) and am really excited to get the feel of being a "participant" this time :)
Coming to the point, the tasks for each day of this 20 day challenge are:
Sounds like an interesting bunch, doesn't it? Well, given that this challenge starts today, let me do the first task without any further delay:
Day 1: TAKE A PHOTO OF SOMETHING YOU ARE DOING AT WORK
That's me... having a great time with my 3 awesome screens. Usually, I read interesting technical stuff in the morning hours, and here I was reading some cool shares from our "Software Quality" page on XING. Have you checked it out, by the way? You should!
Naturally, I won't be able to share specific details of the internal things I will do for further tasks in this challenge, but I can at least blog about the stuff and the fun I had.
So, I am looking forward to the upcoming tasks over the next 19 days, and I'm sure it is going to be fun. How about starting a similar cool campaign at your workplace and letting us all know what 'fun' you had? I look forward to it.
The task for day two of the challenge was to share a testing blog with non-testers.
I admit that I did a sort of cheat here. I had recommended some interesting blogs to my programmer friend yesterday itself, but it was really a coincidence.
Before I share which blogs I recommended, here is a little background on why I did so. We were having an interesting discussion around how we can improve our deployment process and make it more reliable, how far we can rely on automated checks, and where we can spend more time on testing (not just checking) so that we establish a better safety net and improve app quality too. Because of the repeated and deliberately separate mentions of testing and checking, my curious programmer friend asked me to explain the difference and give some examples, and the good part was, he found the distinction meaningful.
For further reading, I recommended the "Testing vs Checking" literature by Bach and Bolton, and the "Testing and Checking Refined" blogpost in particular. He found the material interesting, and I am glad that with that, we'll now have a common understanding of the terminology and the ideas behind it. And that's indeed an important thing for a tester, i.e. speaking a language the programmer understands.
That is it for day two of this challenge. I admit that it is a bit on top of what I am doing for our sprint, but I am enjoying it and hope I'll manage to last till its last day :)
I have been avoiding writing on this topic for quite some years, but it looks like I should really speak my mind about it now. Before you get any more confused by the title, let me provide some context.
This post is inspired (rather, provoked) by a bunch of incidents of a similar kind that made me upset. Let's briefly talk about those incidents first.
A long-time programmer friend looking for a change of job met me and asked me to guide him in getting a tester's job. Out of curiosity, I asked why he wanted to be a tester, and he said he wanted an easy job. He found testing ideal for his requirement of the 'click here and there' sort of tasks he would love to perform.
My wife works as a test engineer and has a programmer friend who took a career break after her marriage. After spending some years as a stay-at-home wife, she decided to work again. In her discussion with us she said, "I worked hard when I was a programmer. Now that I am married, I would rather work as a tester and live a relaxed, happy married life." (Yes, you read that right.)
A friend of mine, who happens to be from a non-software field, called me up for career guidance (in testing) for his wife; they have a 3-year-old kid. When I asked "why testing?", he told me that another friend of his (who works for some IT firm) suggested they try for a testing career, as she thinks it's an easy job with less learning headache (?) and a best fit for a married woman.
The good thing is that my friend and his wife are both wise, and after our discussion they still decided to try for a testing career, but for all the good reasons.
An uncle of a friend called me for career guidance for his son, who has recently completed his Bachelor's in Engineering and is looking for a job in IT. The uncle asked me if I could help his son find a job in testing, as he finds it an easy job for a fresher, one that does not require extraordinary talent. The uncle also mentioned that since his son was a bit weak in his studies, he would like him to take up a career in testing.
Now you might have understood what it is about all these incidents that makes me upset. In fact, it's not just about these four incidents but about many more of this kind that I have kept ignoring so far, mainly because I felt they had more to do with people's choices than with the software testing field. But it looks like some people's wrong perception of testing has also contributed to such thinking. After giving some deep thought to such incidents, I realised that there are two key perceptions that must change:
1. Some people's (males and females alike) perception of women
2. Some people's perception of the software testing profession
What disturbs me more of these two is the mentality of associating women with weakness and low calibre, and of wanting to give them easy-to-do jobs. Maybe such problems are experienced more vividly in societies where women look after the typical tasks at home (which is unfortunate and highly condemnable), but even in that case, I am afraid that choosing something as a career because it sounds 'easy, relaxed and with less of a learning curve' is the poorest choice one can make.
Well, this topic goes beyond just testing and is better left to the experts. As far as women in testing are concerned, let me tell you about some extraordinary women testers I have personally known. They have been doing a remarkable job in our community: Dr. Meeta Prakash, Smita Mishra, Parimala Hariprasad, Faiza Yousuf and Jyothi Rangiaah, to name a few. I am sure they did not choose a testing career because they found it easy. And if you are interested in seeing the bigger picture, please have a look at what women testers across the globe are doing. Ask Fiona Charles, Anne-Marie Charrett, Leah Stockley, Anna Royzman, JeanAnn Harrison, Katrina Clokie, Kim Engel, Ru Cindrea or Oana Casapu how easy (or difficult) it is to become an extraordinary tester. Alexandra Casapu is probably the youngest tester I have known who has made a name for herself in the field.
I have deliberately mentioned names from different geographies so that no one can use socio-cultural problems in their region as an excuse. A closer look at the work all my mentioned (and unmentioned) female colleagues have been doing will tell you how challenging the testing field is, and that it's not a job that only hard-working, talented males can do. Well... never mind!
Is testing an easy job (anymore)?
It never was. It's just that some bad testers and test-case-selling factories have managed to survive this long, so much so that a wrong notion about the testing field has spread. Unfortunately, we still have a significant portion of bad testers in our community, and they appear to be the main reason some people think about testing this way. With below-average skills, one may very well secure a job in some STLC (Sell Test-cases and Loot Customers) model-based organisation, but be informed that such organisations are likely to run out of business soon (if they don't change).
As a matter of fact, software testing goes way beyond filling spreadsheets with pass/fail, checking actual vs. expected, raising defects and calling it a day. It's a discipline that requires an individual to have excellent exploratory skills, critical thinking ability, analytical skills, communication skills, technical expertise, the right mindset and everything else that would make someone a successful programmer, analyst or people manager, for that matter. I have been testing software for over 7 years now, and I have experienced this field becoming more challenging every other day.
If you are unable to relate the happenings around you to the products you are building, then any role of yours in the software field has zero significance, let alone the role of a tester. A skilled tester knows how to play the role of all potential users of the product; a skilled tester is able to visualise how the failure of, or a new feature in, a software product she uses daily is likely to impact the product you are building. And this is not an easy job. The interesting thing is, your gender makes no difference here. You will rock if you have what it takes to be a kickass tester.
It's unfortunate that despite some great things happening in the testing world, I am compelled to write this post explaining the reality of the testing field. But even in these changing times, I have come across too many of the mentioned incidents to ignore them anymore. If this means "just another post about what testing is", so be it. Being a passionate tester and an active member of the testing community, I get deeply hurt when someone (who does not understand testing) makes such reckless remarks about the testing profession.
If you are curious about the current state of the testing field, please have a look at what we have been finding via the State of Testing survey every year. Bad testing is getting exposed and eliminated; skilled testers are already replacing the bad ones. The role and significance of the testing field are changing drastically, and that means only the best will survive. Our community needs more awesome testers (men and women alike); please think of testing as a career option only if you think you can become one. If you have courage, curiosity and conviction as skills, then I promise you'll rock this field.
On the other hand, if you are choosing testing as a career just because you think it's an easy job, then I am afraid it's not an option for you. It's not an ideal job in that case. At least, not anymore!
Once upon a time in jungle, there was a Tiger and he had his own factory.
An Ant used to work there. Yes, one single Ant. She used to work as per her own schedule and methods, and she used to leave for home every day after finishing her work. Tiger’s business, too, was running smoothly.
One day Tiger asked himself, “If this Ant performs so well without anyone supervising her work, how much better would she perform if I appointed someone to supervise her?” And with this purpose, Tiger appointed Honeybee in his factory as Production Manager.
Honeybee had rich experience in her work and was great at writing reports. She said to Tiger, “First of all, we’ll need to fix Ant’s office timings. And to keep a record of that, I will need a secretary.” Tiger then appointed one Rabbit as secretary to Honeybee. Tiger was extremely happy to see the passion that reflected in Honeybee’s demands. One day he asked Honeybee to show him all the work done so far and a graph of the production progress. For that, Honeybee demanded a computer, a projector and a laser printer. Tiger bought all those, and then had to appoint one Computer Head to look after all such requirements. He appointed a Cat for this purpose.
Over the period of time, Ant got overloaded with all those form fillings, repetitive report creations, creating so much of metrics, documents over and again. And with all those ‘work related’ things which were far less important than her actual work. And that in turn adversely impacted her productivity. Production of Tiger’s factory got reduced to considerable extent. Then Tiger thought, “I must bring in some technical person who will better explain Honeybee’s ideas to Ant in language she understands”. And then he appointed one Monkey as Technical Instructor.
The Ant, who had earlier worked with her own expert methods, didn't like the new burden of additional processes, and she grew angrier. Production in the Tiger’s factory dropped even further. Realising that he was facing heavy losses, the Tiger appointed an Owl to find the root cause. After three months of surveys and investigation, the Owl sent his report to the Tiger.
The report said, “There is far more staff than needed. You must lay them off.” And who went home with a pink slip? Any guesses?
That’s correct! The Ant is still in shock, totally clueless about what her mistake was. My special thanks to the original author of this story, whoever wrote it.
It’s 2015 already, but testers in our industry at large are still getting overloaded with non-productive activities, mostly in the name of standards, best practices and creating auditable artifacts. And in the end, they are held responsible for the lack of productivity. I wish the Tiger knew how to do more with less.
With the State of Testing Survey 2013, which we ran in collaboration with PractiTest, we found some interesting facts about our profession and its state last year. It will be interesting to see where we are heading in the near future. Please help us figure it out better by participating in the survey and spreading it to the world.
Ever since I realised the importance of heuristics and oracles, I don't remember a single day I have tested something without using them. Be it for critical thinking, recognising a problem, identifying and analysing risks, test planning or test design, there is a heuristic for almost every testing problem.
I thank all those brilliant people in the software testing field who have created heuristics for solving problems. Those heuristics come in handy for many testers like myself who use them regularly.
I don't deny that using multiple heuristics and oracles to solve or recognize a problem helps; but the applicability of ideas from different heuristics depends on the context of each testing assignment, i.e. not all ideas from all heuristics/oracles will help you solve or recognize problems. What has always helped me is knowing the failure patterns of the systems I test. Note that I am talking about failure patterns, not just specific defects.
These failure patterns are likely to change from project to project, or from module to module within the same project, as project and product elements change with them (read HTSM by James Bach). However, if you are well aware of the failure patterns of the products you test, it helps you in multiple ways, such as:
1. To find obvious as well as predicted problems quickly, i.e. risks for analysis and prediction
2. To find more problems that are likely to be created because of those obvious problems
3. To plan your testing missions
4. To strategise your testing based on foreseen/predicted risks
When it comes to learning, my experience with people and products is similar. We can't know someone's nature in just a few interactions, can we? Similarly, with only a few tests we can't conclude that something is a consistent or prevailing failure pattern of the product.
Be it people or products, we have to spend enough time to understand them well. If systems/products are large and complex (which they are most of the time), finding out their failure patterns becomes tough. What helps in that case is one's skill of recognition. Don't you know people who are very good at reading others in very little time?
So what can help you find the prevailing failure patterns in your own system? What is the oracle heuristic for creating your project-specific heuristic? Over time, I realised that it's HEEENA that helped me create a heuristic for my own project. And I hope it will help you too.
Here is how I would explain HEEENA!
H - History
Try to learn the history of your product/project. If you know its history well, you stand a fair chance of being able to predict its future. You can learn its history by:
E - Explore
Practice focused exploratory testing, apply different testing heuristics and see if any failure occurs consistently. Does it appear only in one area? Does it occur somewhere else too? Explore, explore and explore until you reach a conclusion.
E - Experiment
So you noticed a problem? What caused it? What test did you perform? Can modifying that test uncover another problem like it? Does it happen on other machines too? How about testing with different users and user permissions? Something is wrong somewhere... what is it and where is it? Have you tried focus/defocus, creep and leap, or galumphing?
E - Experience
Make the best use of your experience.
- System/Product experience
- What does your experience with such products tell you?
- What were the typical failures with them?
- Do you see those happening in your product too?
E.g. add-ins/prerequisites that are a must for your product to work, HTML5 not being supported on all web browsers, Flash not being supported on iOS, etc.
- Do you know the common/repeated mistakes that your programmers, fellow testers or BAs make, or perhaps their weak areas? Newly joined programmers or BAs are likely to make mistakes, or they can be weak in some areas. Can you figure out which areas are likely to be affected by those mistakes or weaknesses?
N - Note Taking
Make a habit of taking notes while you test. Note down everything you observe. Not every behaviour the product shows will be conspicuous enough to tell you there is a problem. But your notes can help you recognize hidden patterns and failures. Remember, your notes can be your best guides if you write them well.
A - Analyse
If you recognize a problem, try analysing it in all possible ways. Root Cause Analysis of your findings can give you more information that you can refer to in future. Your analysis can become an important contribution to the 'history' files of your project.
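As a minimal illustration of how well-kept notes can surface the hidden patterns mentioned above, here is a sketch (the note structure, tag names and data are all hypothetical) that tallies failure-pattern tags across test session notes and flags the recurring ones:

```python
from collections import Counter

# Hypothetical session notes: each note carries free-form text plus
# tags naming the failure pattern the tester suspects.
notes = [
    {"text": "Save dialog froze on second attempt", "tags": ["ui-freeze"]},
    {"text": "Report totals off by one row", "tags": ["off-by-one", "reporting"]},
    {"text": "Export hung with a large dataset", "tags": ["ui-freeze", "performance"]},
    {"text": "Monthly summary missed the last entry", "tags": ["off-by-one", "reporting"]},
]

# Count how often each tag appears across all notes.
pattern_counts = Counter(tag for note in notes for tag in note["tags"])

# Tags seen more than once are candidate failure patterns worth
# deeper exploration in the next session.
recurring = [tag for tag, count in pattern_counts.most_common() if count > 1]
print(recurring)  # -> ['ui-freeze', 'off-by-one', 'reporting']
```

Even a tally this simple turns scattered observations into the kind of 'history' file the heuristic feeds on; the point is the habit, not the tooling.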
So, that was HEEENA, which helped me create a heuristic for my own project. Try it and let me know if it works for you.
You can download the mindmap (in xmind format) from here. And if you see any benefits or problems with this approach, I will be happy to learn about them.
This story is from the Mahabharata. Pitamaha Bhishma was very angry with Arjuna because he had eloped with Subhadra. This was against tradition, because Subhadra’s brother Balrama had already promised Subhadra’s hand in marriage to Duryodhana.
Krishna tried to convince Bhishma that Arjuna was right, but Bhishma was adamant. The conversation that then took place between Krishna and Bhishma is what I want to talk about today. I will let you read their conversation first.
Bhishma: I am not happy with this marriage because it is against tradition. A person can trust another person only when traditions are followed and respected. Without following traditions, trust cannot be established.
Krishna: (Giggles) Traditions…! Traditions are like a mango. In the beginning it tastes bitter. Some days later it becomes sour. People who like a sour taste eat the fruit with great pleasure. After some more days, the mango becomes sweet and thus becomes everybody’s favourite.
But after still more days, it gets rotten. It starts smelling bad. People who still eat the rotten mango fall ill. And as time passes, the fruit is of no use at all. Not even its pit.
I don’t have any problem with traditions as such, but when those same traditions become a reason for one’s exploitation, cause more harm than good and become a hurdle to change for the better, then such rotten traditions should be destroyed and new traditions should take birth.
Now you must be wondering why I am talking about this. I read and re-read this conversation and felt that what Krishna says about traditions is very similar to what standards and best practices are doing to our profession today.
Following James Christie's excellent presentation at CAST, some of our colleagues in the community have come up with petitions to stop the ISO 29119 testing standard. I feel that they are right in what they are fighting for. I would urge you to read this, this, this, this or this post if the whole controversy makes you curious.
People who are against this opposition may ask, “Who decides whether standards are good or bad for our profession? You guys?”
Coincidentally, Bhishma asked Krishna the same question. I will let you read the rest of their conversation.
Bhishma: Who decides which traditions are rotten? You, Lord Krishna???
Krishna: No! Time will decide it! And no one can disobey time’s order.
This is the period of solstice for the whole of Aryavarta. Just as the Sun changes its path at the solstice, the whole of Aryavarta too is going to change its path. Old traditions will break, old dynasties will be destroyed. A new dharma will be established and a new era will begin.
And a time will come when you will have to decide which side you want to be on. Everyone will be caught in this trap of time. People who promote lies will become the cause of this transformation, and people who walk the path of truth will establish the new era.
And this is decided!
Krishna’s answer sums it all up. I have signed the petition and have done my part. It’s your turn to decide which side you want to be on.
Note: This blog post is based on my editorial for the July/August 2014 issue of Tea-time with Testers.
A passionate & thinking tester. Trainer & student of the craft of testing. Father, foodie and dog lover. Chief Editor and Co-founder of Tea-time with Testers magazine.