How to use feedback from consultation
The four-month consultation period on our local Joint Health and Wellbeing Strategy is just coming to an end, and reflecting on it has led me to ask how the consultation feedback should best be used and how you can be sure you’re doing it justice.
The consultation consists of a survey and various discussion groups. Disclosure: I have been sitting on the Engagement Group which planned the consultation (though that doesn’t mean I agree with all the decisions on what to do. For instance, I suggested that there should be a deliberative event; there was no objection to that in principle, but resources are pretty limited, so it fell by the wayside).
The consultation presents 11 potential ‘priorities’ and asks consultees to rank them by importance, with the top 3–5 to be used as the basis for producing a strategy. I don’t think that’s the right approach, but I may go into that in more detail in another blog. The ‘priorities’ are things like: promoting physical health; supporting vulnerable people; adult mental health; children’s mental health; health inequalities; and tackling substance abuse.
I have facilitated six discussion groups. In three, I was effectively representing the Engagement Group, so talked about each of the priorities and then just facilitated general discussion. In the other three, which were made up of people well versed in issues of health and wellbeing, I was acting as an existing member of that group and gave my own views on what a strategy should look like, how it should be produced and how I felt this was not the best way to do it. So, inevitably the nature of the discussions varied between the groups.
I was wondering about what I’d learned from the consultation and whether I’d helped move the debate forward (again, more on that in another blog). On the question of what I’d learned from the feedback, my first feeling was, ‘not much’. But then I thought, while there had not been any totally new ideas, there were things I was (could have been?) aware of but hadn’t noted specifically and there was lots of detail about people’s experiences that provided a much richer picture of the reality of health and wellbeing.
That led me to reflect more generally on how you should use the feedback from consultation. I began to realise that this is a huge topic, and I won’t try to tackle it comprehensively here; instead I’ll concentrate on a few of the things that occurred to me. The key question is how the feedback is used and how it feeds into the strategy. Prior to that, though, are questions about how the feedback has been collected and summarised, and how it was generated in the first place. At each stage, there is much scope for selection, distortion and bias.
The first thing is that the question you ask, and how you ask it, will influence the answers you get. So, naturally, in those events where I was presenting each ‘priority’ and talking about them in some depth, there was more discussion about those particular issues. Where I was presenting my own ideas about the nature of strategy, the discussion was more wide ranging. However, these are general tendencies rather than absolute determinants. As most people will be well aware, rare is the meeting, whether of the public or professionals, where the discussion does not go off at a tangent.
The second point is that what information is presented, and how, makes a difference to how people see it and how they respond. Even presenting all the priorities together, as a whole, rather than pausing for discussion after each one, made a difference to the issues raised. When I was presenting my own ideas, I refined and developed some of the diagrams over time, to make them more user friendly, with more icons and bigger text, and that certainly helped them to be better understood, which led to more discussion on them. And some of the information I gave, which was not part of the ‘official pack’, was pertinent to the debate and was picked up by the participants. For instance, I noted that issues related to diet (which is not in the list of potential priorities) cost the NHS five times as much as problems from lack of physical activity (which is)[1]. I also noted that the value of unpaid carers is estimated to be just less than the cost of the entire NHS[2], which led to calls for carers to have a place in the strategy.
However the discussions are written up, there will be some degree of selection and interpretation which alters the original points to a greater or lesser degree (a verbatim transcript would be unwieldy and in any case misses intonation; even listening to the recording doesn’t necessarily do justice to the intentions of participants, since people do not always say precisely what they mean, they miss out words and so on). There are ways to reduce the risk of bias, such as combining two sets of independent notes and checking the summaries back with participants. In academic research there are established methods for reducing bias in qualitative analysis, and there are various software packages to help with this. It’s well worth using such methods if possible, but time, resources and skills do not always allow.
The thing that struck me most, though, is that there is frequently no straight read-across from the points made to their implications for strategy development. Sometimes there clearly is, such as the suggestion that gambling should be included in the priorities alongside substance abuse. But such examples were in the minority. More often (particularly in the discussions with general members of the public rather than people from patient representative groups), the comments related more to personal experience. They talked, for instance, about problems they had experienced trying to access particular services, about how beneficial things like Sure Start had been, about the many difficulties faced by young mothers.
How such points are taken on board depends a lot on the cognitive frameworks, values and strategy development processes of the people writing the strategy.
There is a real risk of the many cognitive biases to which humans are susceptible coming into play. To take one example: I was very pleased with myself for taking a very specific and concrete example – of how close-knit and mutually supportive communities had been decades ago – and drawing out lessons for the strategy. That demonstrates that strong, resilient communities can exist and have existed, so it is a reasonable objective to aim to have them again in the future. But on reflection, I think this is an example of confirmation bias. I had already had a similar thought: that much lower levels of obesity in the 1950s demonstrate that diet and behaviour can change significantly over generational time periods. So, picking up on the community resilience example was to some extent me selecting and interpreting evidence to support my existing view. The same conclusions would probably not have been drawn by people who had not already had such thoughts.
There is also a problem with values. There is a strong internal pressure to discount or disregard points with which you disagree. One person said that inequality is inevitable and even necessary; another said that what you cover in the strategy should be fairly minimal. I did at least summarise those views in the feedback I wrote up, but if I were writing the strategy, would I include them? To be honest, probably not. At best, I might twist them a bit or try to draw out something I agreed with (‘you can’t plan the whole 10 years in great detail, but it is still important to be thinking that far ahead’).
A further question is, even if you could accurately record the feedback and avoid all biases, how much should you take it into account anyway? The comments are unlikely to be representative of the whole community. In any case, why should we accept the community view over others? Community members don’t have all the relevant information and in most cases aren’t able to undertake analysis as sophisticated as the professionals’. Many of the views will contradict each other. There will be not just different views but different interests within the community, and consultation cannot resolve those. And there will probably be too many views to ensure that they are all used in some way. So it is not as simple as working out how best to incorporate everything; judgement and choices are required.
So, what should you do?
The answer is probably more about a general approach than some detailed algorithm that would take the raw consultation responses and process them into elements of the strategy. It’s about being as fair, open-minded, objective and transparent as possible. Take the points made and think creatively about their implications (like my community resilience example, but, somehow, without the confirmation bias). Try to put yourself in the contributors’ shoes: what’s important to them, and how can that be respected and responded to? Are there the germs of some ideas that can be built on and elaborated?
Whether you’re the people writing the strategy, their managers or commissioners, or the public holding them to account, is there a set of questions or criteria you can use? Doing that properly needs more work, but here are a few initial thoughts.
- Did the questions asked bias or distort the discussion? Were participants given sufficient opportunity to raise their own issues?
- Did participants have sufficient information to give well-informed and meaningful answers to the questions?
- How can we be assured that the record of discussions is as accurate, objective and comprehensive as possible? Were the summaries checked back with respondents?
- Has an honest attempt been made to make the fullest possible use of all contributions to meaningfully influence the strategy development? Has each point been assessed by at least two people independently and a record made of how it has been used?
- Has there been feedback on how comments were used, so that contributors can challenge the interpretation and suggest changes?
As ever, these are my thoughts: any contributions from anyone else very welcome.
[1] https://pubmed.ncbi.nlm.nih.gov/21562029/
[2] https://www.carersuk.org/news-and-campaigns/news/unpaid-carers-save-the-uk-132-billion-a-year-the-cost-of-a-second-nhs