With my experience as an Agile tester so far, one thing I still see organisations struggling with, or trying to get better at, is estimation.
While thinking deeply about what makes our estimations go wrong, I realised that there is still a lot we do in projects that we do not record, measure, or consider as factors that matter. That is probably why we are still trying to get better at estimates, and it is also probably why ideas like #NoEstimates make sense and are gaining popularity.
A few thoughts on #NoEstimates first
At the Agile Testing Days 2016 conference, Vasco Duarte gave an interesting keynote around #NoEstimates. I admit I feel a bit brainwashed by his ideas, but I believe that NoEstimates is not about "not estimating at all". It's about doing estimates sensibly, without blindly following the techniques that are widely practised.
And, in my opinion, there is still some way to go until the industry understands and starts practising the key idea behind NoEstimates. Does that mean we should stop estimating right now and wait for an entire industry to get on board with it? Absolutely not!
While this idea develops further, I think we should continue to make efforts towards doing better estimates. After all, the fact is that the business needs some date to baseline its plans against, and I find that totally reasonable.
What's wrong with how we currently estimate?
I think the problem is not just with how we estimate (story points or T-shirt sizes, for example) but also with what we measure, how we measure it, and what we take into account from all those measurements.
I strongly feel that there is a cause-and-effect relationship between the things we measure and the estimations we base on them. If we do not measure everything that matters, our estimations are likely to be flawed. And honestly, it's high time our industry stopped measuring things that are easy (and cheap) to measure rather than those that matter but are difficult to measure.
What are those things that (also) contribute to poor estimates?
I can only talk about things that I have seen making an impact in my own experience, but I feel they may very well be present in your project environments too. Generally, these are short-lived impediments or side-tasks that we forget to record, measure, and consider, such as:
It's not that we totally ignore the effort spent on all of the above, but typically those cards remain on the dashboard only for the duration of the sprint and are then thrown in the dustbin when the sprint is over. What if we started recording, measuring, and considering them for future reference? Would it not help? If yes, where is the problem?
Where is the problem?
The problem, as I see it, is that there aren't enough techniques to help people identify what to measure that matters and how to measure it. In order to measure things, we first need a mechanism to observe them and record those observations. I feel that for a project team as a whole, we currently don't have any effective mechanism to address this. The same problem had been haunting the testing community badly (and caused great damage too), but I'm glad that, in the form of Session Based Testing (and its Management), the community found a sensible way to do it right.
Hey, but that's just about testing and test management. What about measuring things (like the above) equally effectively for the work that programmers do? After all, estimations are for and from the whole team (not just programmers or testers alone), unless the context demands otherwise.
Thinking about the solution
I am wondering, what if we extend the key idea of Session Based Testing to programming as well? Yes, something like Session Based Programming?
If you are not from a testing background and don't know about SBT(M), then I encourage you to read about it first. If you are a tester and still don't know it, then please do yourself a favour and read it NOW!
Well, what do I really mean by Session Based Programming? Here is my proposal:
1. Development is done in focused, uninterrupted, time-boxed 'programming sessions', typically 30 or 45 minutes long (short sessions) to 80 or 90 minutes long (long sessions).
2. Just as testers define a mission and create charters for their test sessions (in the SBT space), programmers may pick stories or tasks and work on them in a time-boxed way.
3. Typically, testers perform different types of session in SBT, such as Analysis sessions, Survey sessions, PCO sessions, Deep testing sessions, etc. Similarly, programmers may also classify the type of session they will be working on. Off the top of my head, I would propose session types like these:
These types can vary from project to project. (Maybe you will think of new ones; let me know!)
4. Keep a record of the actual time spent on each type of session, along with any challenges faced, and store it in a central location.
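As a minimal sketch of what step 4 could look like in practice (all names here are hypothetical illustrations, not part of any existing tool), each session could be logged as a small structured record appended to a central file:

```python
from dataclasses import dataclass, field, asdict
import json

# Hypothetical session record; the field names are illustrative only.
@dataclass
class ProgrammingSession:
    story: str              # story or task identifier
    session_type: str       # e.g. "coding", "pairing", "bug-fixing"
    length: str             # "short" (30-45 min) or "long" (80-90 min)
    actual_minutes: int     # actual time spent
    challenges: list = field(default_factory=list)  # impediments faced

def log_session(session: ProgrammingSession, path: str = "sessions.jsonl") -> None:
    """Append the session record to a central JSON Lines log."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(session)) + "\n")

# Example: a short coding session that ran slightly over its time-box.
log_session(ProgrammingSession(
    story="XYZ-42",
    session_type="coding",
    length="short",
    actual_minutes=35,
    challenges=["flaky CI pipeline"],
))
```

A flat append-only log like this is deliberately simple: over many sprints it becomes the historical data the estimation step below relies on.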
And how do we estimate with these?
Once programmers have spent enough time working in this way, they are very likely to develop a realistic understanding of how many sessions of which type are needed to complete a given story or task. This is because it will be based on actual experience, with the actual time spent on each type of session in mind.
Let me give a small example. Assume that for some story XYZ, a particularly complex feature in your application roughly requires some back-end programming effort, some front-end changes, and testing. A programmer (and tester) who has spent enough time working on their respective area in a Session Based way may come up with estimates like:
BE programmer: 3 short back-end coding sessions
FE programmer: 1 long FE+BE pair programming session
Tester: 1 short Analysis and 1 long deep testing sessions
Assuming that a short session in your team corresponds to 30 minutes and a long session to 80 minutes, then by combining the inputs above we can estimate story XYZ at roughly 280 minutes of work (90 + 80 + 30 + 80).
But wait: assume that the historical record of time spent by the team on unplanned-activity sessions, bug-fixing sessions, and deployment hiccups per sprint (of 10 stories on average) amounts to around 200 minutes, that is, 20 minutes per story. I would add this as a buffer to the initial estimate of 280 minutes and count the final estimate as 280 + 20 = 300 minutes.
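The arithmetic above can be sketched as a small helper. This is purely illustrative; the session lengths and the per-story buffer are the assumptions from this example, not fixed values:

```python
# Session lengths in minutes, as assumed in the example above.
SESSION_MINUTES = {"short": 30, "long": 80}

def estimate_story(sessions, buffer_per_story=20):
    """Estimate a story in minutes: planned sessions plus a historical
    buffer for unplanned work (bug fixing, deployment hiccups, etc.)."""
    planned = sum(count * SESSION_MINUTES[length]
                  for length, count in sessions)
    return planned + buffer_per_story

# Story XYZ: 3 short BE coding sessions, 1 long FE+BE pairing session,
# 1 short analysis session and 1 long deep-testing session.
xyz = [("short", 3), ("long", 1), ("short", 1), ("long", 1)]
print(estimate_story(xyz))  # 90 + 80 + 30 + 80 + 20 = 300
```

The buffer itself would come from the team's own session log, for example total unplanned minutes per sprint divided by the average number of stories per sprint.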
What is the benefit?
I think there is more than one benefit, such as:
I feel that the testing community has mostly been at the receiving end whenever some new, shiny, cool thing happens (e.g. DevOps), and testers are usually left to figure out how to "fit in". The effort testers spend on these "fitting in" attempts is usually so high that they hardly get to contribute to advancements beyond their own craft.
If testers are to evaluate and contribute to software quality, then they should also evaluate and contribute to the quality of the processes that affect them (and everyone else). Session Based Programming is my humble attempt to accomplish that.
I look forward to hearing what you think of it. Feedback and criticism welcome :)
[On a side note, I presented this idea in my Lightning Talk at the Agile Testing Days 2016 conference and got some good feedback, especially from Janet Gregory. We discussed working on it together to take it further, and I would like to thank Janet for encouraging me to do so.]
Hmm... I was sort of in a cave after Day 5, and now it's action time for me again.
Not that I was totally out of the challenge, but I could not manage to blog about it. So here is the list of tasks on my plate and what I did about them (updates for the pending tasks are in brown).
Day 6. PAIR WITH SOMEONE OUTSIDE YOUR BUSINESS UNIT OR OUTSIDE QA
I paired with a freelance programmer who was helping us with our ongoing project. Time was critical, and he needed someone with enough knowledge about the stuff he was working on. We paired up to develop it and test it on the fly.
I used the MIPing technique (Mention In Passing) to report my bugs and findings, and re-tested things as he kept fixing them. It was indeed fun pairing with a programmer and contributing right from the coding phase of a new feature being built. We benefited from each other's expertise. My programmer counterpart got the on-boarding he needed and also learned about the dependencies we had to resolve. I benefited from watching closely how he quickly developed the components, wrote his unit tests, and resolved conflicts.
Day 7. GIVE SOMEONE POSITIVE FEEDBACK OUTSIDE OF QA
The last few weeks have been crazy for us at work, since we have been working on some serious stuff, and the fun part is the complexity of the solution we have. At times we can't escape dealing with the complexities of the software and projects, and this whole thing makes testing and bug investigation even more challenging. The reasons are obvious: the interfaces, dependencies, various states of the systems involved, pre-requisites for use cases and schedules, new features added, and the subsequent impact and regression all add to the already existing complexities.
Among other things we found and fixed, we were struggling to resolve one bug that was hidden deep in, and lost among, the complexities of the interfaces. Finally Lars Schirrmeister (our Senior Back-end Engineer) jumped in and decided to get to the root of it.
After spending quite some hours on fixes and trials (try, and see if it works), he finally found the problem. And fixed it, of course. But what was so special about it? Lars identified the problem through his own understanding of the technology, algorithms, and syntax. The issue was never raised as an error or exception by the system itself, and finding a hidden problem in complexity-laden code, one that does not show up in compilation or debugging at all, is an intelligent piece of work indeed. Good job Lars!
It's an honour for me to be working with such a highly competent team and managers. There has always been something or other I could mention in their admiration, and this particular case is one example. I'm glad that this challenge let me tell the world about my team. :)
Day 8. HAVE LUNCH TOGETHER AND POST A PICTURE
Who could be a better date for this day of the challenge than one of my fellow participants? :) It was great meeting Daniel Knott over lunch to discuss testing and this 20-days challenge, among other cool things. Feel free to check Daniel's updates on this challenge.
There goes our picture together:
Day 9. AUTOMATE ONE WORKFLOW FROM ANOTHER BUSINESS UNIT (Pairing allowed, but be the driver)
Could not manage to complete this task in time. It's WIP though.
Day 10. PERSONAL CHOICE (Testing related, surprise us!)
I'm conducting a workshop on Impact Mapping for my colleagues on 13th Oct. The idea is to implement the concept at the team level (rather than keeping it limited to Senior Managers, Product Managers, SCRUM Masters, etc.).
My workshop will mostly cover implementing IM at the story level and writing stories with enough information to build testable features with wider coverage. I won't say it is only related to testing, but it will definitely benefit testers working in Agile teams.
Day 11. LISTEN TO A TESTING PODCAST
I listened to the DevOps and Technical Testers podcasts featuring Noah Sussman, Michael Larsen, Perze Ababa, and Justin Rohrman (yeah, all cool people :) )
I feel this discussion has come at the time when testers needed it most. If you are wondering what DevOps is and what roles testers can play in it, then please consider listening to it. Highly recommended!
Day 12. PERFORM A CRAZY TEST
Hmmm... this is my kind of topic. Well, honestly, I don't remember the last time I did not perform a crazy test :) The degree of craziness of a test might vary from tester to tester, based on what they perceive it to be.
For me, a test performed with minimal knowledge about the application, acting like a 'first-time user', is an equally crazy test. By minimal knowledge I mean not getting blinded by the specifications and testing things with open expectations. Paul Holland has written an interesting piece around this; please do read it.
On the contrary, while testing the recent build of our product, I came across many ways to perform crazy tests with plenty of knowledge about the software, its dependencies, and the solution implemented. And I learned a very important thing: a crazy test does not always have to be about the 'disfavoured use' or 'extreme use' of software (please check Product Elements in HTSM by James Bach). Playing around with configurations and the states of test data, teasing the pre-requisite states of a system meant for some desired result, or challenging the implemented solution itself via a variety of tests can also help uncover a lot of interesting things. Defects won't always be the outcome here, but knowing in advance how the system behaves against all such adversities can help build lots of useful tests that can find hidden and elusive bugs. It may also help you be prepared with fall-back solutions.
It's hard to explain without enough details, but for example, one can perform a crazy test by trying to break the protocol or steps meant to be followed to reach a certain state of the system. What happens if state 1 is left buggy and state 2 is still turned ON? What happens if one tries to enter state 2 without fulfilling the pre-requisites of state 1? And so on...
More on that with publishable examples later...
Day 13. DOWNLOAD A MOBILE APP, FIND 5 BUGS AND SEND THE FEEDBACK TO THE CREATOR
It's the last day today, and I doubt I will be able to make it. However, I did have a try at testing the recent release of iOS 10 and noticed that the spell checker wasn't working as expected. That is, when you select an incorrectly spelled word for correction, the options you get are only for formatting, copy-paste, etc., and there are no 'suggestions' for the possible correct word.
I do have some plans to test some interesting apps though...but time demands me to do something else, more important at this time :)
Day 14. FIND AND SHARE A QUOTE THAT INSPIRES YOU
There isn't just one that inspires me; I have a bunch of them in my collection. Sharing some of my favourites, though:
1. Endow your will with such power that at every turn of fate it so be, That God himself asks of his slave, "What is it that pleases thee?" - Allama Iqbal
2. Your ideal form of influence is to help people see their world more clearly and then to let them decide what to do next - Jerry Weinberg
3. There is no test for ALWAYS - James Bach
Day 15. CONNECT WITH A TESTER WHO YOU HAVEN’T PREVIOUSLY CONNECTED WITH
Connected with Cassandra H. Leung (@Tweet_Cassandra) and Abby Bangser (@a_bangser) on Twitter. It's always a pleasure to connect with like-minded testers.
Day 16. SUMMARISE AN ISSUE IN 140 CHARACTERS OR LESS
Take 1: An "issue" can be anything that concerns the stakeholders. (Char 59 + Whitespace 9)
Take 2: The real issue is that I can't give more details about it because it is an internal thing. (Char 88 + Whitespace 17)
Day 17. FIND A USER EXPERIENCE PROBLEM
I came across some websites that are offered in multiple languages (e.g. English and German) where the image illustrations were not updated for the chosen language. For example, if adverts, campaigns, and some "how-to"s are shown in pictures made in only one language, that makes for a bad user experience for the audience who don't understand that language.
Day 18. SHARE YOUR FAVOURITE TESTING TOOL
I like a bunch of them, each meant for a different purpose. By the way, a testing tool, for me, is any tool that assists me in testing better.
Day 19. SAY SOMETHING NICE ABOUT THE THING YOU JUST TESTED
We successfully rolled out a very complicated and critical release to production. Because of the complexity of the solutions and the new things we added on the fly, I was a bit sceptical about completing the testing, but I'm glad we managed it well, and the code was robust enough to handle all sorts of my crazy (and not-so-crazy) tests gracefully.
Day 20. TEST YOUR PRODUCT FOR A QUALITY CRITERIA, WHICH NORMALLY IS NOT A FOCUS IN YOUR BUSINESS UNIT
I do this all the time, to be honest. My checklist of quality criteria is pretty comprehensive, and we keep adding new aspects to it when we see enough problems of a kind to count. However, I have recently added some items to the "Compliance" testing aspect, which we were not covering with special focus before.
Day 5 is about "Coming out of your comfort zone".
Well, it took me a while to figure out my own comfort zone, since I have never thought so deeply about it. Whatever has been thrown at me as a 'testing task' so far, I have tried to get through like a go-getter. But hey, we all have special likings and comforts towards one thing or another.
Thinking about it made me realise that I feel sort of satisfied with the testing philosophy and knowledge I have received through James Bach, Dr. Cem Kaner, Jerry Weinberg, and Michael Bolton (to name a few). My expertise is mainly in helping teams and organisations implement this knowledge and philosophy (especially around RST) in ways that fit their contexts. Indeed, it's not a 'one idea fits all' thing; it requires me to constantly read, re-learn, and understand the ideas so that I can find context-appropriate solutions for different problems. But I too want to have something that I can claim as my own thing, theory, and work. Yes, I teach my classes and workshops with my own methods and ways of explaining things, but the key concepts are primarily inspired by work already done by the experts mentioned above.
While discussing testing with James Bach the other day, he told me that, as his student, he wants me to "innovate" and bring fresh ideas and perspectives to testing. And that's what I am currently working on. I am reading and researching ancient Vedic scriptures that I feel will help me bring new ideas to testing, or perhaps re-learn existing ideas in a different light (which is equally important).
That's what stepping out of my comfort zone looks like for me. It's a long process, but I'm sure it will be worth it. I am excited about what I'm currently working on, and equally excited to bring it to the table for the community to see, comment on, and give feedback.
Thanks to the #20DaysofTesting@XING challenge for making me serious about it once again. Until tomorrow then, folks...
A passionate & thinking tester. Trainer & student of the craft of testing. Chief Editor and Co-founder of Tea-time with Testers magazine.