The MEWT 5 participants

MEWT 5 Conference Day Roundup

The 5th MEWT Peer Conference took place on the 9th April with the by-now-expected excellent cast of talks, discussions and presentations. Please find them all listed below, in order of presentation on the day, along with links to blog posts, slides and other materials where they have been made available.

Iain McCowatt – Professional Testing, not Professional Testers

I recently got handed the keys to a couple of development teams, and a handful of major change programmes. With this came the realization that I am no longer a tester: I’m a customer of testers. Whilst I’m still trying to process all the implications, I HAVE become aware of an interesting shift in perspective:

I don’t give a flying expletive deleted about Professional Testers.

Don’t get me wrong: I care deeply about testing – the insight it gives me is critical. But I can’t help wondering, so long as I have professional (adjective) testING, do I need testERS at all, let alone professional (noun) ones?

So testers, consider me a potential customer. Pick up the challenge and justify yourselves, and justify why you might want to claim to be a profession.

Adam Knight – The Oldest Profession in the World

The oldest profession in the world. We all know what it is, don’t we?

Except that until the late 19th century the now-common understanding had not been established, and there were a number of professions that potentially laid claim to that title – farmers, cattle drovers, horticulturalists, engineers, landscape gardeners, the military, doctors, nurses, teachers, priests and even lawyers. But when we mention the ‘oldest profession in the world’, most people will now assume they know which profession is meant.

And so it is with “professional testing”. Mention “professional testing” and many familiar with software development will have assumptions about what is involved. In my early career, professional testing was defined as ‘having ISEB certification’; nowadays there are different expectations that cover broader areas, but can be just as restrictive in their criteria.

I believe that the number of roles that can lay claim to undertaking ‘professional testing’ as an activity is far wider than many assume. As well as a huge variety of roles with the moniker of tester – web testers, mobile testers, hardware and embedded software testers, test software developers, database testers, performance testers and penetration testers, to name a few – there are also many other roles that don’t include the word “test” in their title yet still have a professional responsibility for testing implicit in their role, such as developers, product owners, technical support, business project owners and often, contractually, even some customers. All of these will be involved in testing in a professional capacity, and so must therefore be undertaking ‘professional testing’.

In this talk I will look at the assumptions that may come with a tag of ‘professional testing’, what those assumptions look like and how they differ from the people doing professional testing that I’ve encountered. I’ll then go on to discuss some of the roles that could lay claim to the title of ‘professional tester’ and how the role of testing fits in with their profession. I’ll raise some questions for discussion around how the various individuals involved in testing as part of their profession might benefit from identifying the testing elements to their roles, and how those folks who are called testers might end up being more professional themselves through helping others to be more professional in their testing.

James Thomas – What is What Is Professional Testing?

In this talk I’ll use the MEWT topic as a lens through which to view itself and our instincts about it, asking what we think a professional tester (whatever that is) might do when presented with the MEWT 5 brief. I’ll explore areas such as inconsistencies, ambiguities and underspecification in the brief, the relationship between the tester and stakeholders (the MEWT content owner for the purpose of this exercise), and the uncertainty that’s always present in any project that requires novel work. As I do this, I’ll try to tease out a set of “reasonable expectations” for a professional tester and then test them in turn against a broader context.

James has written a couple of posts about the event: The notes for his talk here, and his further thoughts on the events of the day here.

Dan Caseley – Perceptions of Professional Testing – fixing it with what we teach our juniors

I’ve gathered a collection of thoughts from devs, stakeholders and recruiters on who we are and what we do. Being awesome generates the good feedback, but it isn’t enough to fix the bad stuff that pervades our industry. We do that with what we teach others which they take on to their next role, making better testing viral.

Abby Bangser – Introduction of the “Full Stack” tester

One model for looking at a person’s skills is the T-shaped model: the idea that someone will have a breadth of knowledge in their field, plus a depth of knowledge in one particular area. To me, the breadth piece is as necessary as the depth in defining a professional outlook. How would you feel if you fell and possibly broke your arm, and the doctor nearby said they could not help because they are a cardiologist, not an orthopaedist? You would expect to need a specialist to perform any necessary surgery, but you would hope the nearby doctor would at least know what steps must be taken and how to mitigate the current situation. To bring this back to MEWT and software testing, I think that to be software professionals, we cannot segregate into “technical” and “non-technical” testers. Let’s talk about a “full stack” tester as a possible definition of a professional tester.

Abby’s thoughts are soon to be published on James Thomas’ blog as a guest post. Watch here for further news.

Mohinder Khosla – Being professional is Not an Accident

In one of my favourite books, The War of Art, Steven Pressfield states that being a professional is all about showing up, doing your work and not letting adversity defeat you. Being a professional requires you to overcome your vices so that you can sit down and produce the best work possible.

Being a software tester is about a whole lot more than just testing. If you want to be a better tester, a better anything really, you need to focus on the entire person, not just one or two areas of your life. It is about career, mind, body and spirit, if you believe in such things.

Being a professional tester requires further commitment: forming good habits of time management, being well prepared at all times, setting goals and planning, standing your ground and sticking to your principles. A professional tester should have the ability to walk away from situations when lower quality standards are forced upon them. That means they are expected to be not only consistent but also seekers of quality and self-improvement, making correct choices both technically and ethically, playing to their strengths and improving their weaknesses.

I will briefly discuss these with examples where appropriate, to convince you that being a professional tester is no freaking accident. It requires commitment, dedication and continuous learning.

Doug Buck – Growing into Testing

The talk is about my perceptions of the changes that took place during a period of professional growth when I transitioned into a full time testing role, covering (hopefully in detail!) the points below:

  • professional responsibility
  • self-directed learning
  • questioning
  • professional development
  • key changes in my attitude before and after

Doug’s thoughts on the day can be found here.

Danny Dainton – Being a Professional Tester is not the same as Being in a Professional Job

I have always believed that the job that you do has nothing to do with how professional you are in that job. For me, professionalism is a set of core values or beliefs that are internal and specific to each individual person.

I currently find myself as the sole tester working within a feature team, with the added pressure of being a remote worker. My previous experiences have given me an excellent set of core values that have enabled me to be an effective remote employee and a highly motivated, focused and disciplined individual.

At 17, I joined the British Army, one of the most professional organisations in the world, and through my 11 years of service I lived by these 6 core values:

  • Courage
  • Discipline
  • Respect
  • Integrity
  • Loyalty
  • Selfless Commitment

I have kept these with me since leaving the forces and conduct my daily testing aligned to these values.

My talk will be a brief look back on the last 10 years of my life, touching on the key areas where I have applied these professional values during my very short testing career and how these have shaped the direction of where I want to be.

Mike Loundes – The Contractor Attitude

Mike was also slated to speak but was unable to in the end due to a family emergency. His abstract is below, and he has written up the content of his proposed talk here.

Having been freelance for over a decade, I’d like to discuss the perception of professional testers, and how the term ‘professional’ is perceived not only by the different hierarchical levels within companies but also by peers and colleagues within those companies. On many of my engagements I’ve surprised people when they learn that I’m ‘a contractor’ and not ‘a permanent member of staff’, because I don’t have the ‘contractor attitude’. What exactly is the contractor attitude, and how is it perceived differently from professionalism?

Also in attendance – Bill Matthews [organising], Vernon Richards [facilitating] & Simon Knight [content owner].

Many thanks to our kind sponsors, The Association for Software Testing.

MEWT5

What is Professional Testing?

Wikipedia has this to say about what a professional is:

A professional is a member of a profession or any person who earns their living from a specified activity.

Hmm. So what’s a profession then?

A profession is a vocation founded upon specialized educational training, the purpose of which is to supply disinterested objective counsel and service to others, for a direct and definite compensation, wholly apart from expectation of other business gain. (also from Wikipedia.)

That’s all settled then, right? Wrong. Because at MEWT 5 we’ll be asking: what exactly does it mean to be a Professional Tester? And since I already cited the quotes above, you won’t be allowed to. In fact, we’ll expect you to delve deep and reach far and wide into the many and varied facets of what it means to demonstrate professionalism in the face of a rapidly changing technological and sociological landscape.

We will be looking for presentations, questions, strategies, models, insights and experience reports – lasting in the region of 10-15 minutes – covering important aspects of professional testing, such as:

  • Engagement with and use of a body of testing knowledge, tools, models, approaches and heuristics.
  • Effective decision-making within changing, uncertain and unpredictable situations
  • Making sense of and managing risk when dealing with potentially incomplete and conflicting information
  • Considerations when working as a lone tester, or as part of a team – both large and small
  • Thriving in the midst of the unknown and the unexpected, in addition to the routine and predictable
  • Heuristics and principles to aid in the resolution of complex ethical and moral matters
  • Personal and team-wide learning and development strategies
  • Working within regulatory frameworks
  • And many more besides.

MEWT 5 will take place on the 9th April at the fantastic Attenborough Nature Reserve venue (check out photos from previous events!).

Invites have been sent, but if you’re interested in attending – please contact the organisers (Bill, Richard, Vernon or myself) and we’ll be happy to try to accommodate you. If you’ve already received an invite, please send your abstract (title, brief outline, bullet-point summary of talk) to me by the 27th March, and we look forward to engaging with you and your specialist subject area on the day.

The MEWT 4 Attendees

MEWT 4

The 4th Midlands Exploratory Workshop on Testing (#MEWT) took place on the 10th October. The attendees were (from left to right):

Speakers and abstracts in order of presentation:

Ash Winter – A Coaching Model for Model Recognition

As part of my role, I often coach testers through the early part of their career. In this context I have noted a pattern in the application and interpretation of models. They are generated internally through various stimuli (learning, influence of others, organisational culture) and then applied subconsciously for the most part, until there is sufficient external scrutiny to recognise them. To this end, I have created a model of questions to help testers to elevate their internal models to a conscious level and begin to articulate them.

With this in mind, I hope to articulate at MEWT:

  • Presentation of the model of questions to determine internal models in use, without introducing models explicitly.
  • Use of Bloom's Taxonomy to visualise a coachee's modelling paradigm and the steps towards modelling consciously.
  • Practical examples of using this model to assist early career consulting testers to cope with new client information saturation.

Slides for Ash’s talk can be found here.

Duncan Nisbet – The Single Source of Truth is a lie

In some development circles, the automated test suite serves as the single source of truth for the behaviour of the software.

I too held this belief until very recently when it was challenged in the ISST Skype forum.
The conversation I had in that forum helped me to realise several (obvious in hindsight) cognitive biases I had succumbed to & traps I had fallen into.

This experience report will outline how I came to hold my beliefs about the single source of truth & how they are now on their way to being altered.

I think in models, but on several occasions it has been demonstrated to me that I haven’t thought critically about those models.

I’m hoping this report & the subsequent conversation will really cement in me the idea that I need to think more critically about the models I choose to hold in high regard.

I also hope that the report & conversation might have some impact on the other attendees of MEWT.

Slides for Duncan’s talk can be found here.

Richard Bradshaw – Sigh, It’s That Pyramid Again

Richard Bradshaw’s new Automation in Testing Pyramid

Earlier on in my career, I used to follow this pyramid, encouraging tests lower and lower down it. I was all over this model. When my understanding of automation began to improve, I started to struggle with the model more and more.

I want to explore why, and discuss with the group: what could a new model look like?

Ard Kramer – Old models as an excuse?

In my current assignment at a major insurance company we are in a full transformation from waterfall to scrum. In both waterfall and scrum software development, the testers cling to traditional phases (as a model) of testing, such as functional acceptance testing and user acceptance testing. What's in a name, or better, what's in a model?

The major question I have, and the challenge I am facing, is that I want my testers in the scrum teams to develop their own (shared?) model(s). The only distinction currently made is between testing and acceptance, more or less as a way to differentiate between (real) testing and checking.

Most of my testers (whom I want to send to RST) in the scrum teams believe that they are doing a good job, while I try to convince them that the tests they are doing can, and even must, be done much better. This also means that they should be able to make their own models of what needs to be tested. I am coaching them to think more visually. These visualizations must lead to more session-based testing/test management.

I have a preference for models which are mostly visualizations of the landscape of the applications which are in scope for a change. Besides these kinds of models, the testers are going to need heuristics to test the different applications, and because different teams are developing the same application I think they must develop common heuristics. The heuristics and models must be the starting point for a test approach for the user stories, sprints (or even the software releases we are working with).

In my presentation of this case I want to present the situation I am dealing with at the moment and some ideas I have to entice my testers into making (common?) models and heuristics. I am very curious whether we can have a discussion about which approaches other MEWT testers use when applying a model or heuristic as a way of thinking and improving themselves and their colleagues. In my presentation I would also like to talk about how changes in general can be accomplished, so that testers can become better testers using models and heuristics, and how they can use them during their test sessions. So, a presentation with more questions than information, but maybe the information can lead to an interesting model 😉

Three major points:

  • How to make a major shift towards (independent) thinking in Models and heuristics
  • How to make a tester better using models and heuristics
  • How can models help make visible the value of a change that testers are testing?

Slides for Ard’s talk can be found here.

Geir Gulbrandsen – Discovering My Models

Not having a lot of experience in thinking about my mental models, and even less discussing them, it probably goes without saying that I don’t have an example of “how we implemented a certain model we thought would be useful”. Instead I had to go through my career step by step and see if I could recognise the different types of models that were in play and how these were helpful or useful… or not. How did we benefit from these models, what problems did they make us susceptible to, and how can I learn from this to develop my own models?

Main points:

  • Just because you think about your thinking models doesn’t mean everybody else does.
  • Don’t assume everybody understands the same model in the same way.
  • Develop (or adapt) your own models/frameworks in order to truly own them.

Slides for Geir’s talk can be found here.

Mike Loundes – Chunking

Format goes along the lines of:

  • Some definitions of chunking
  • My interpretation

Then a look into how I applied this thinking to a team I was managing: the state of play when I started, a simplified example, and what was done utilising chunking. Finally, some thoughts on considerations for implementing chunking at different levels.

Slides for Mike’s talk can be found here.

Del Dewar – The Mobile Traffic Meta/Model

Data-driven testing is a relatively common phenomenon in software testing. This talk is an experience-based report about a data-driven approach that gave birth to a highly complex testing meta-model for a software product tasked with monitoring signalling messages in mobile networks.

The talk will explain the constituent parts of the meta-model and what made it so complex and will touch upon:

  • The theoretical challenges involved in creating and evolving the meta-model and how this could provide value to the business.
  • The physical, procedural and collaborative aspects that wowed people to begin with but quickly became a crutch and an impediment to the testing team and the wider business.
  • How, in retrospect, we could have done things differently given the experience we amassed throughout the lifespan of the model.

Slides for Del’s talk can be found here.

John Stevenson – Model fatigue and how to break it

John Stevenson’s SCAMPER Mnemonic Used to Disrupt Stale Testing Models

Many of us are familiar with the various testing models from RST, such as FEW HICCUPPS, SFDPOT and others. Within my organization we use these models extensively for forming coverage maps and creating testing missions. To do this we use mind maps.

However, over the past year or so I have noticed that templates have appeared with the same details and the same type of thinking, or in some cases no thinking. To me, many, including myself, have followed the path of least resistance and used what others have done without engaging our creativity. This has in some cases led to biases. To try to resolve this, over the last 9-10 months I have introduced some creativity models to try to overcome what I have come to call model fatigue.

This experience report looks at these creativity models and discusses their successes and failures.

Main points:

  • Models can become stale
  • We suffer model fatigue by not revisiting our models
  • Some creativity models can help (SCAMPER/ThinkPAK)

MEWT 4 was sponsored by the Association for Software Testing.

MEWT4 Call For Papers

We wanted to try something a little different for MEWT4. As with the majority of peer workshops, the organisers invite people they believe will fit the theme, bring some great experiences and generate some great discussions. However this is limiting, as we can only invite people we know, and people who we think would be interested in the topic. So for MEWT4 we have opened up four places for people to apply to attend.

So what would you be applying for? MEWT is an exploratory workshop, a format you can read more about here. The theme of MEWT4 is “Models for Thinking”; you can read more about the theme at the bottom of this post. We are interested in experience-based reports (lasting between 15-20 minutes) covering any aspect of models and modelling that you believe is applicable to our work as professional testers. We are less interested in talks that simply present a model in isolation than in your experiences of using such models, so a recommendation would be to spend no more than 5 minutes introducing a model and the remaining time presenting your experiences of using it. We are also interested in talks that discuss the importance of modelling to testing and how we can improve these skills.

MEWT4 is taking place on the 10th October 2015 at the stunning Attenborough Nature Reserve, Nottingham, UK. The cost of attending MEWT4 is £30, which covers the cost of the venue, refreshments throughout the day and a buffet lunch. There are no limitations on who can apply; just please ensure you can make the date and the location, as MEWT doesn’t contribute towards any of your travel costs. There are always people who stay in Nottingham the night before, so if you are travelling from further afield, there are usually some social activities on the Friday night and after the workshop. You can read about previous MEWTs on this blog.

So if you are interested in attending, please complete this CFP form by giving the title, brief outline and the main points (usually no more than 3) that will be covered in your report, by August 9th 2015. Once the CFP has ended, the MEWT organisers will review the submissions and inform applicants of the result. We're really looking forward to reading your submissions. If you submit, but later realise you cannot attend, please let one of the organisers know.

Regards,

Richard Bradshaw (Conference Organiser)
Simon Knight (Conference Organiser)
Bill Matthews (Content Owner)
Vernon Richards (Facilitator)

Theme: Models for Thinking

In almost all human endeavours we use mental models to simplify the complex, separate the signals from the noise, organise and classify information, and to act as lenses through which we observe the world. Testing is no exception to this; as testers we are part of a complex adaptive system which is often too complex to comprehend and understand in its entirety, so we rely upon different models to better enable us to think and operate within this system. Some examples of modelling within testing are:

  • Designing tests is complex with almost limitless possibilities, so we use test design techniques to simplify the task; many of these techniques are based on models of how and where software fails. This applies to the formal test techniques (e.g. boundary analysis, state transition, domain testing etc.) as well as ideas such as mind-maps and heuristics.
  • In larger projects and organisations, a common model for structuring the task of testing is a series of Test Phases, each with a specific focus that builds on the previous phase.
  • The ISO 29119 Standard presents a series of models that omit much of the detail, variety and complexity of testing in order to convey an idea of how its authors believe testing should be structured, organised and flow.
  • A common and pervasive model within testing is that of different test levels (e.g. unit testing, system testing, integration testing etc.) which can act like lenses through which we focus on specific elements of the system without being overwhelmed by the totality of what we may need to test.

To help guide your ideas the following questions may help:

  • Do you think Modelling is a key skill for testers? How did you develop your skills in modelling? How do you teach modelling as a skill to others?
  • Do you have meta-models that help you decide which models are appropriate for your context?
  • How do you make sense of complexity in your context? Are there specific models you’ve found helpful? If so what are they and how do they help?
  • What are the popular/established testing models that you think are no longer applicable and why?
  • Which popular/established models of testing do you find most useful and why?
  • Do you have thoughts for a new model related to testing that you want to share, discuss and expand?
  • Do you think there is a relationship between model thinking and biases?

MEWT 3 Resources

Various folk have made their slides available or written blogs since MEWT 3. We’ll add more to this post as they become available:

Neil Studd

Dorothy Graham

  • Talk: Criticism and Communication
  • Slides: Download here.

Dan Billing

Duncan Nisbet

Duncan spoke about the aftermath of having presented the linked slides at an internal BBC conference. The linked slides are therefore NOT representative of his MEWT presentation, but were used in support.

  • Talk: Failing to Communicate a Communication Model
  • Supporting Slides: Download here, Slideshare here.

Adam Knight

  • Talk: Testing, Support and Documentation – Lessons Learned from a Combined Role
  • Slides: Slideshare here.
  • Related Blog: A Cultural Fit

Raji Bhamidipati

  • Talk: Effective Communication in Remote Teams
  • Slides: Download here.

The MEWT 3 Lineup

The abstracts are (mostly) all in. Not long now…

On Saturday 18th April, the MEWT 3 invitees will convene at the tranquil and picturesque Attenborough Nature Centre, Nottingham to discuss Software Testing and Communication Skills.

Paul Watzlawick is quoted as saying “You cannot NOT communicate” so at MEWT 3 we want to explore the different ways we communicate our ideas and views with others, especially in those crucial situations such as:

  • Going against the prevailing consensus
  • Broaching difficult or sensitive subjects
  • Attempting to change entrenched views
  • Dealing with so called “difficult people”
  • Communicating in culturally diverse teams

Attendees and their chosen specialist subjects are listed below:

Adam Knight – Testing, Support and Documentation – Lessons Learned from a Combined Role

For years I’ve been performing a mixed role, managing not only Testing but also Technical Support and Documentation for a data product company. These three disciplines, whilst distinct, have for me one strong connection: a focus on the customer. This might be through communicating with customers directly, or by representing the customer's interests within the development process. In this experience report I’ll examine and compare the different means of communicating information, both to and from the customer, that exist across these connected teams. I’ll look at the differences in information sources, media and interfaces involved between each role and the customer, and how these differences present distinct challenges for each role. From looking at examples from my experience I hope to explore the following key points:

  • What principles of good communication apply when working in technical documentation and technical support
  • What anti-patterns in communication I have experienced when working in these other disciplines
  • Whether there are any lessons that we might take from these to apply in our testing work
  • The different information that is readily available to each discipline that we might look to share between teams for mutual benefit

Anna Baik – Credibility

Whatever communication we receive, the source always affects how we evaluate it. As testers, our credibility affects how people receive the information we provide. If we aren’t credible, then nobody will listen to us. It also affects how we evaluate the information we get. Let’s not kid ourselves: we are not infallible beings just because we are aware of the possibility of bias. That doesn’t make us unbiased, and we’re stupid if we think we are “above all that”. We make judgements, and some of them are wrong, and a lot of those judgements we’re entirely unaware of even making. I think this raises the following points for discussion: How do you work effectively as a provider of information in an environment where *what you are* immediately puts you into negative figures in terms of “credibility points”? How do you get the information heard? How do you work around problems even getting access to information you need – “you don’t need to know that”? How do you not go crazy? And whatever your frustrations at being misjudged by people who don’t understand what you do or how you do it – aren’t you doing the same every day? Can you even build credibility with people who have such strong preconceptions about your capabilities that they can’t even let themselves see any evidence to the contrary?

Bill Matthews – Dealing with Conflict

I will recount an experience from early in my professional career when communication turned to conflict (nearly resulting in fistfights in the carpark) and how some simple ideas and language patterns defused tensions and brought the team back together with a renewed understanding and purpose. Since then I’ve encountered and used the same patterns in similar and related situations (e.g. challenging the status quo, dealing with the so-called awkward team member etc.) to good effect, and wanted to share these with you all. These are:

  • The power of acknowledgement
  • Redirecting unproductive thinking towards the real “problem”
  • Refining what the “problem” is

Christopher Chant – Lessons Learned from Customer Services

The experiences and techniques developed during my career in sales and customer service, and how they can be applied to software development. Most jobs require a great deal of communication and collaboration, and software development is no different. It’s common in sales and customer service jobs for employees to receive training in communicating effectively to satisfy both the customer’s and the company’s needs, but this isn’t the case in software development. The increase in the number of distributed development teams makes developing these skills even more important.

I’d like to share some of the lessons learned from a career which started by trying to provide excellent customer service in challenging circumstances and has equipped me with the skills to prosper in collocated and distributed teams.
Daniel Billing – Testing Influence and the Geek

One of the key issues I have as a tester is communicating my ideas, thoughts and needs, and therefore influencing the people around me. I need other people to understand those things, so that they can be taken forward and used in other people’s testing.

Part of the problem here is that I am a geek. I identify as a geek, as I suppose that is the label that best fits me culturally, beyond my race or gender. I feel that being a geek is a culture in itself, which anyone can be a part of. Being a geek, more often than not, can impair your ability to communicate and influence others – within families, social circles and definitely in the workplace. The term invites stereotyping, which I want to avoid and break out of in the discussion. Geeks are generally seen as obsessively knowledgeable about a single subject or small range of topics, normally in the technology or popular culture space. Geek behaviours and communications can sometimes be interpreted as unprofessional, inconsequential or irrelevant. Sometimes these behaviours are even labelled as being on a spectrum of special learning and educational needs.

Influence is a by-product of good communication. If you aren’t communicating well, then your influence stemming from that communication will almost certainly be limited. The main points of this talk will be to elicit discussion around:

● Supporting teams and individuals to develop communication confidence and the ability to influence others
● Developing methods to filter the personal white noise out of professional communication
● Developing presentation and visual aids which allow the personality of the presenter to shine through, rather than dry content
Dorothy Graham – Criticism and Communication

What do testers do? They are critics, often of other people’s work. The way in which criticism is communicated is key to good and effective testing. In this talk, we will look at what criticism is, and its different types. How do we respond to being criticised? Knowing this is important to being a good critic. There are different types of communication. We will look at the difference between push and pull styles, and I will outline Virginia Satir’s communication interaction model with an example.

– Criticism is what testers do; it is important to understand how it feels to be criticised and how to criticise well
– The way in which we communicate is crucial to effective interpersonal interactions; we will look at two models of communication
– Criticism and communication are critical skills for testers
Duncan Nisbet – Failing to Communicate a Model of Communication

I gave a talk at a conference about the Satir Interaction Model. Unfortunately, the examples I used in the talk offended one of the guys I work with so much that we no longer talk (he has since moved on to another team). In this experience report I will outline the examples I gave and why I think my colleague took so much offence. I’d be interested in hearing your thoughts. The talk was meant to demonstrate how hard effective communication can be. I guess that point has been proven, though not quite in the way I expected…
Neil Studd – Testing in the Dark: Lessons in Cross-Site Communication

The nuances of body language and intonation are a critical part of effective communication, so how do you adjust when these are stripped away? With offshoring and multi-site development projects on the increase, testers are often asked to receive and deliver information over email and IM. Failing to dodge the pitfalls of misinterpreted communication can quickly lead to confusion, teams working at cross purposes, and potentially project failure. In this report, Neil will share his experiences from a decade of working with remote teams, including:
• Real-world scenarios where the medium was responsible for obscuring the message, and how these were remedied
• How to break down communication barriers between sites, without appearing to be a disruptive influence
• Looking at Albert Mehrabian’s often-misinterpreted “7%-38%-55% rule” and how it can be applied to communicating remotely
Raji Bhamidipati – Effective Communication in Remote Teams

In the last few years we have seen a lot of change in working patterns within the IT industry. Geographical limitations no longer stop people from working at awesome places of work. Gone are the days of the long commute to work! Remote working is taking the industry by storm, with some very positive and encouraging results. Whatever the reasons may be, new remote working practices are here to solve many problems. Although some big, well-known companies have put an end to their remote working opportunities, many more companies are embracing this practice with zeal.

In this session I would like to start by talking about remote working in general, and in the latter part talk about it from a tester’s point of view. I have been a remote worker for more than a year and will draw heavily from my experiences. By the end of the session I hope to cover:

– Remote working: what it means
– Pros and cons of remote working
– The impact on testing within a team when some or all of its members are remote workers
– Tips and suggestions on making remote working easy
Ranjit Shringarpure – My Experiments with Communications

In this talk I discuss the outcome of an experiment I deliberately set up to study the impact of the different ways in which I communicated with the different teams on my current project, and how it affected me, my team, them and anyone around us.

In particular, my focus was limited to the ‘Daft’ idea of information richness theory, combined with Berne’s theory of Transactional Analysis.
Richard Bradshaw – The Negative One

My brief experience report will be on a discussion I had with a current colleague, in which I attacked an idea of theirs having just heard it. In return, they labelled me negative – not the first time this has happened. I will share my reflections on this discussion and the subsequent learning, to ensure that I don’t take the same approach again: how it’s important to pay attention to far more than the idea itself, such as the person, the problem their idea is trying to solve, and what the problem is with just letting it go.
Simon Knight – Applying the Golden Rules of Improvisation in a Testing Context
What are the golden rules of improvisation? How might they be useful when working with project teams? Explore and experience them for yourself in this short talk.

Also attending but not speaking – Stephen Blower, Neil McCarthy, Christian Legget.

About MEWT

MEWT is an invite-only peer conference, organised and facilitated by:

Proudly sponsored by Equal Experts.

MEWT 2 was sponsored by Equal Experts

MEWT 3 is sponsored by Equal Experts

MEWT 2 Experience Report

The second MEWT peer conference took place on the 13th September and, yes, I’m biased, but still – it rocked! A big thank-you is in order to Equal Experts for sponsoring the event, Richard Bradshaw for sourcing the venue and taking care of most of the organisational detail, and Bill Matthews for facilitating on the day.

Great job guys!

My day started pretty early with a drive to the Attenborough Nature Centre in Nottingham. As a co-organiser of the event, I had a good feeling about the idea of holding the conference in a nature reserve, since it seemed to me to provide exactly the right kind of atmosphere for something entirely voluntary, but with quite a high bar in terms of the kind of professionalism and content we expected from the participants. I was not disappointed.

The Attenborough Nature Centre – our MEWT venue for the day

Inside the venue – MEWT in progress

With a general buzz of activity as folk continued to arrive, people set about unpacking their various electronics and note-taking implements. By about 9am more or less everyone had arrived, so it was time to submit our talks for the day and vote on the order in which we wanted to hear them.

The MEWT Schedule

Once done, and after a quick bacon-roll break, we made a start.

Somehow, my talk [Simon Knight – @sjpknight] had made it to the top of the list. Probably something to do with the bombastic nature of the title – Lessons Learned in Root Cause Analysis, From Air Crash Investigations! The intent of the talk was simply to make the point that regression failures are often symptoms of some other issue and that as testers, we should feel comfortable with carrying out an investigation into what the underlying problem might be and taking the necessary steps to either get it resolved or communicate persuasively with people who matter in order to get it dealt with. My slides can be found here if you want to find out more.

When you see a regression…

After me came Timothy Munn, aka @NottsMunster, with Regression Testing – You Don’t Have to be Mad to Work Here, But it Helps! He made some great points about regression testing basically being an opportunity to improve and expand our knowledge about our products, but that the idea of regression testing (doing the same thing over and over again) was a road to madness, per the well-known Einstein quote.

Richard Bradshaw was up next with his Regression Testing – Rant, presenting flip-chart models of how he sees regression testing working [on agile teams] currently, and how he thinks it should work. One of the main things Richard tried to convey with his talk was that when we actually carry out our regression testing, what we learn is likely to undermine the results of our previous testing, suggesting instead that the focus of our testing should be detecting and investigating change.

Old Model Regression Testing

New Regression Testing Model

Up next was Stephen Blower, from up North where bugs are apparently made of steel. His talk, Myths & Illusions of Software Testing focused on common misconceptions about what Regression Testing actually is. Stephen set the bar high with his research efforts, supplying quotes from his current team and project along with further answers from a recent, public Skype chat about the same topic.

The final pre-lunch talk was from Bill Matthews, who fed our minds (if not our rumbling stomachs) with his assertion that we should “test for regressions” instead of carrying out regression testing in his presentation, How Do You Solve a Problem Like Regression Testing?

Testing for Regressions

Did I say thanks to Equal Experts already? At lunchtime, lashings of ginger beer, sandwiches and cake were enjoyed on the balcony, courtesy of our sponsors and our hosts, the Attenborough Nature Centre.

Well ok, not the ginger beer. But it was a great lunch!

Lunch on the Balcony

After the break, MEWTing continued with Mohinder Khosla [@mpkhosla] and The Minefield Analogy, and then a first-time MEWT talk from Dan Casely [AKA @Fishbowler] on Passive Regression Testing starting with the immortal words – why bother? “You’re going to break it anyway!”

The premise of Dan’s talk was that his organisation has had to take a pragmatic, risk-based approach to regression testing due to the insurmountable mountain of technical debt facing him when he arrived as the first tester on the scene. After some consideration arriving at the view that what needed to be done was… Nothing.

Neil Studd [@neilstudd] was next up with his talk Down the Rabbit Hole, elaborating on adventures in regression testing for companies with red logos. You can find his slides here. Only the names have been changed, to protect the innocent…

Only the names

The theme of adventures and experiences in regression testing was continued in the next couple of talks from Luke Barfield [@lukebarfield83] and Paul Berry [@pmberry2007], Regression Risks and The Bane of the Software Tester respectively. Luke’s talk found a consensus with his assertion that “customers don’t care about regressions. They just care if there’s a bug.”

Amen to that, brother.

The penultimate talk was delivered by first time MEWTer and budding testing speaker Ranjit Shringarpure [@ranjitsh]. His investigation – Mathematical Models for Regression Testing: Would They Help (in Making Regression Testing Cost Effective) – was one of the standout talks of the day for me and provided plenty of material for further investigation into how regression testing might be made cheaper and more effective.

[Edit] Ranjit’s slides can now be found here.

What is expensive?

Finally we had Adam Knight [@adampknight] talk to us about why “lack of progression is a regression” in his presentation, Progression Testing. Again this was something of an experience report, providing the MEWT attendees with insights into Adam’s evolving family and residential requirements, and into how the Rainstor test architecture has necessarily evolved in capability and complexity in parallel with product and business growth. He hammered home the point that if our testing doesn’t evolve to meet the demands of the marketplace, then the quality of the products on which we work will inevitably suffer. A fitting end to the day; his slides can be found here.

Progression is regression

I think it’s probably fair to say that a great time was had by all, with lots learnt in the process. Some of the main takeaways from MEWT for me personally were:

  • Using Root Cause Analysis patterns to investigate and ideally resolve the problems causing regressions – treating the cause instead of the symptoms
  • Seeing the regression test phase (if there is one) as an opportunity to improve the product on which we’re working
  • Re-defining regression testing as “change detection”
  • Using regression tests to “see if what you knew to be true previously has or has not changed” and to “measure changes to existing functionality that don’t fall into the scope of intended development”
  • To test for regressions, rather than regression test – and to be aware that regression testing carries with it opportunity cost (i.e. chews up time that could be used for other things)
  • Not doing regression testing at all – replacing it with dogfooding and other measures
  • Researching methods of improving regression testing efficiency and reducing regression testing cost
  • Evolving my test approach alongside the organisation I’m working with and the product I’m working on

No doubt there were many more, and I look forward to updating this and other posts on the MEWT site with further stories as they come in. (Hint!)

Hometime
