Retrospectives are a great way to improve collaboration in teams. To add to the many options available, here are ten or so questions you can use for your team retro. The questions address key success factors and have the potential to reveal a team’s blind spots.
The following sheet can be a good starting point for a team’s retrospective:
When you assign values, e.g. -2, -1, 0, 1, 2, to each of the boxes, you can easily sum them up. Color the fields where the team thinks improvement is needed most.
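The scoring can be sketched in a few lines of Python. The questions and individual scores below are made up for illustration; only the -2 to 2 scale comes from the sheet:

```python
# Hypothetical retro ratings: each team member scores each
# question on the sheet from -2 (strongly disagree) to 2 (strongly agree).
ratings = {
    "We have an effective feedback loop": [-2, -1, -1, -2],
    "We make good decisions quickly": [2, 1, 2, 1],
    "Our users and customers love what we create": [2, -1, 0, 1],
}

# Sum the scores per question; the lowest sums mark the fields
# where the team thinks improvement is needed most.
totals = {question: sum(scores) for question, scores in ratings.items()}

# Print the questions from the most to the least problematic.
for question, total in sorted(totals.items(), key=lambda item: item[1]):
    print(f"{total:+3d}  {question}")
```

A spreadsheet does the same job, of course; the point is only that the arithmetic is trivial and the ranking falls out for free.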
So far so good. But this is where it actually starts: the ratings should trigger discussions. Here is a first trigger:
- Low ratings: Low ratings are easy to spot and can immediately trigger a more in-depth discussion. In the sheet above, for example, team members agree that they do not have an effective feedback loop. I propose to review examples, investigate the causes and then decide what to improve. Here is how this can look:
A good discussion needs enough expertise. In this case, the team actually had it: one UX expert was on the team. This expert had wanted to run a feedback loop with users and customers all along; there was just too much other work to be done. In addition, the product owner was happy to define stuff and get it done. The team decided that they would finally do a round of tests with users. They also decided to collect some quantitative data with a standard UX questionnaire. And they decided to do it as a team rather than delegate it to the already overloaded expert. Respective items were added to the next sprint backlog, to be refined and estimated later.
A low rating is an obvious trigger for discussion. However, low ratings just make visible what everybody thinks anyway. So we need more subtle triggers as well. Here is a second one:
- Inconsistencies: In the example sheet above, team members agreed that they make good decisions quickly while at the same time they do not think that they have the essential scope. Quite an interesting assessment, don’t you think? Such inconsistencies can trigger a good discussion when team members realize that things they think work well (e.g. decision making) are in fact not really effective (e.g. not building the right features).
This, by the way, is the goal of the sheet: to make team members think about blind spots. The questions take up some key success factors: market positioning, user needs, decision making, feedback loops, reduction, simplicity, team spirit etc. These aspects are related; everyone should know and care about them and more or less agree on what the team has achieved so far.
So there are more triggers for an interesting discussion hidden in the responses:
- Disagreement: Team members might have completely different opinions – see e.g. the responses to “our users and customers love what we create”. In such a situation, it’s great to hear the different positions and understand how such disagreement came about. In this case, some were happy if it worked, others cared about great design, and yet others rated ease of use. Team members had different interpretations of what users would love. To align, they decided to create personas in a team workshop.
- Base of assessment: Team members base their assessment on – well, what? On some feeling or taste? On their experience in another project? On how well it aligns with some theory? On metrics, or even benchmark data? On some qualitative feedback? In the team above, this became especially visible for “creating simple and stable solutions”: a lot of gut feeling from the more senior developers, some idealistic goals, and a lot of uncertainty from the juniors.
- Not my job: Do you have many points in the question mark column? Do you hear statements like “I can’t contribute to improvement here, it is this person’s responsibility”? The thing is, everyone in the team contributes to each of the points in the sheet and should therefore be able to assess them. It can be enlightening to look at the team structure and how people collaborate. Do the roles strictly define who does what? Is important information not flowing within the team? Are team members overloaded? Do they complete tasks rather than work towards a shared goal? If you tick yes on any of these questions, there is big potential for improvement.
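The triggers discussed so far (low ratings, disagreement, and a crowded question mark column) can all be surfaced mechanically from the raw responses before the discussion starts. Here is a sketch with invented data and arbitrary thresholds: a mean at or below -1 flags a low rating, a standard deviation of 1.5 or more flags disagreement, and “?” answers from at least half the team flag a “not my job” pattern:

```python
from statistics import mean, pstdev

# Invented responses; "?" means "I can't assess this"
# (the question mark column on the sheet).
responses = {
    "We have an effective feedback loop": [-2, -1, -2, -1],
    "Our users and customers love what we create": [2, -2, 1, -1],
    "We create simple and stable solutions": [1, "?", "?", 0],
}

triggers = {}
for question, answers in responses.items():
    scores = [a for a in answers if a != "?"]
    flags = []
    if mean(scores) <= -1:                      # everybody rates it poorly
        flags.append("low rating")
    if pstdev(scores) >= 1.5:                   # opinions are far apart
        flags.append("disagreement")
    if answers.count("?") >= len(answers) / 2:  # many can't assess it
        flags.append("not my job?")
    triggers[question] = flags

for question, flags in triggers.items():
    if flags:
        print(f"{question}: {', '.join(flags)}")
```

The thresholds are a judgment call and worth tuning to your scale and team size; the value of such a script is only to make sure no trigger gets overlooked, not to replace the discussion itself.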
You could repeat such a retrospective meeting regularly and observe how the assessment changes.
Over time, a team will hopefully start to collect at least some objective data, e.g. benchmarking with a standardized UX questionnaire, looking at ratings in app stores (users love the product), observing sales numbers (market impact) and more. Such data will of course influence a team’s perspective, and I would expect a more consistent and less diverse assessment as a result.
Hope it helps in your team as well.