The purpose of this paper is to explore the implementation of the National Institute for Health and Care Excellence (NICE) quality standard on alcohol misuse (QS11), and the barriers and facilitators to its implementation.
This was a qualitative interview study, analysed using directed and conventional content analysis. Participants were 38 individuals with experience of commissioning, delivering or using alcohol healthcare services in Southwark, Lambeth and Lewisham.
QS11 implementation ranged from no implementation to full implementation across the 13 statements. Implementation quality was also reported to vary widely across settings. The analyses also uncovered numerous barriers and facilitators to implementing each statement. Overarching barriers to implementation included: inherent differences between specialist and generalist settings; poor communication between healthcare settings; generic barriers to implementation; and poor governance structures and leadership.
QS11 was created to summarise alcohol-related NICE guidance. The aim was to simplify guidance and enhance local implementation. However, in practice the standard requires complex actions by professionals. There was considerable variation in local alcohol commissioning models, which was associated with variation in implementation. These models warrant further evaluation to identify best practice.
Little evidence exists on the implementation of quality standards, as distinct from clinical practice guidelines. The authors present direct evidence on quality standard implementation, identify implementation shortcomings and make recommendations for future research and practice.
Introduction
Alcohol-related ill-health and harm are high and increasing in England (National Audit Office, 2008). Alcohol-related harm’s total annual cost to English society is estimated at £21 bn, with alcohol misuse costing the NHS £3.5 bn a year (Public Health England, 2014). In London, alcohol misuse has been identified as a public health priority (London Health Board, 2013). An average borough in South London is likely to face £14 m a year in alcohol-related healthcare costs, £47 m a year in alcohol-related crime and £30 m a year in alcohol-related reduced productivity. Furthermore, research shows that 40 per cent of attendances at emergency departments in two South London boroughs were alcohol-related (Drummond et al., 2014), and that alcohol accounts for 14 per cent of acute care admissions within King’s Health Partners and 50 per cent of adult mental health admissions in South West London (Barnaby et al., 2003). While measures to address alcohol misuse in the NHS and more broadly are well evidenced (Pilling et al., 2011; University of Stirling, 2013), these measures are poorly implemented and their implementation poorly understood (Drummond et al., 2011).
In England and Wales, the National Institute for Health and Care Excellence (NICE) produces evidence-based guidance documents and associated quality standards – prioritised statements derived from guidance, designed to drive measurable quality improvements within a health or care area. They are relevant to numerous audiences, including health and social care commissioners, care providers and service users. In August 2011, NICE published a quality standard (QS11) focussing on alcohol dependence and harmful alcohol use. The QS11 statements fall into five broad categories: identification and assessment in all settings; assessment in specialist services; medically assisted alcohol withdrawal; conditions comorbid with alcohol use; and interventions for alcohol misuse. By ensuring that health and social care services are commissioned, delivered and used in accordance with these five categories, the guidance aims to “contribute to improving the effectiveness, safety and experience of care for harmful drinkers and people with alcohol dependence” (p. 7) and “contribute to reducing alcohol-related hospital admissions and readmissions to hospital” (p. 8). Close implementation of and adherence to QS11 should, therefore, contribute to alcohol healthcare service improvements in South East London. The degree to which South East London services accord with the QS11 statements is unknown. To address this knowledge gap, we designed a research study on QS11 adherence in the boroughs of Southwark, Lambeth and Lewisham. Within this geographical area, our study addressed two primary research questions:
To what extent and to what quality level are QS11 quality statements implemented?
What are the barriers and facilitators to better or fuller QS11 quality statement implementation?
Methods
To investigate QS11 implementation, we designed a qualitative, structured-interview study of individuals with direct experience commissioning, delivering or using alcohol healthcare services. The methodological orientation underpinning the study was content analysis (Hsieh and Shannon, 2005) and our geographical focus was Southwark, Lambeth and Lewisham London boroughs.
Developing the interview guide
The QS11 guidance contains 13 quality statements in five domains (Table I). We developed an interview guide based on these statements, comprising 52 questions (four probes per statement). We tested the interview guide with eight interviewees (who did not participate in the main study). The interviewer (AK) read each quality statement in turn to the pilot study interviewees and then asked four questions: two probes elicited information about current implementation, and two about implementation barriers and facilitators. Our pilot interviewees provided feedback on probe wording, which we used to update the interview guide before conducting interviews. The pilot interviewees also commented that most participants would not have direct experience in relation to all 13 statements. Consequently, we decided to ask each interviewee to specify which statements they felt able to discuss from direct personal experience. We then focussed our interview questions specifically on those statements. Therefore, the quality statements addressed in each interview differed depending on interviewee experience.
Selecting and recruiting the sample
Our sampling frame was organisations in Southwark, Lambeth and Lewisham employing staff involved in commissioning or delivering healthcare services for hazardous or harmful drinkers. This included local authorities, NHS-funded and third sector organisations involved in primary, secondary or specialty care. Within this sampling frame, we used a snowball sampling method to identify potential participants and a maximum-variation purposive sampling method to select participants according to numerous criteria such as geographical location, professional background, experience and seniority. We used the snowball method to identify participants, as the population we were sampling was not well delineated. We also had limited personal connections to people in the population. We used the purposive sampling method to select participants as we wanted to control who participated in the research so that we could represent broad perspectives. After each pilot and data collection interview, we asked our interviewees to provide names and contact details for individuals with significant experience relevant to our study (i.e. participant identification). Following discussion among the research team, individuals were purposively selected based on their personal characteristics in comparison with interviewee characteristics (i.e. participant selection). Where possible, we asked the recommending interviewee to provide a personal e-mail introduction to selected participants. In all other cases, we approached selected participants directly by e-mail, explaining that a previous interviewee had recommended him or her. Geographical, healthcare sector and the interviewees’ professional characteristics are presented in Table II. Our sample comprised 12 men and 26 women.
Provisional data analysis started after completing the first interview. We stopped recruiting participants when we reached data saturation (the point at which no new data emerged) both in interview themes and names generated through participant identification. The sampling approach led to a high response rate, with 45 identified and 38 participating in the interview study (84 per cent response rate). Reasons for non-participation were non-specific declines (3), non-response (2) and sickness-absence during data collection (2). Where selected participants were not able to participate, we recruited substitutes with similar characteristics.
Setting and data collection
Interviews were conducted between March and July 2014, either in a private room in the interviewee’s workplace if available, or in the interviewer’s university office. All interviews were conducted individually and face-to-face. Participation was voluntary and informed consent was obtained from interviewees. Interviews lasted 22-82 minutes (mean 52 minutes), were recorded and transcribed verbatim by a professional transcriber. The interviewer was an academic psychologist with no clinical or managerial role, who used the interview guide to structure the interview.
Data analysis
Two data judges (AK and T-LP) qualitatively analysed the interviews using conventional and directed content analysis methods. The framework for the directed content analysis was a 13×4 matrix, reflecting QS11’s 13 quality statements (rows) and the four probe questions about implementation quality and quantity, barriers and facilitators (columns). There was also space for general comments about all 13 statements that did not fit into any column. This analysis provided information that allowed us to answer RQ1 and RQ2. Judges independently read through each interview transcript and extracted data from the interview onto a blank 13×4 data extraction table. Judges completed data extraction tables and met regularly to compare analyses. Discrepancies were discussed and resolved. Judges met periodically with the data auditor (PL) to discuss the analysis. The data auditor’s role was to check the analysis process for rigour and to make technical inputs (e.g. to suggest alternative labels or themes). Conventional content analysis was also conducted on the interview data once the data extraction tables were completed. This analysis provided supplementary information to allow us to provide a rich answer to RQ2. Judges independently noted the overarching themes (across quality statements and interviewees). During their meetings, judges compared emerging overarching themes. Discrepancies were discussed and resolved by agreeing on a composite themes list endorsed by both judges. Amending the list was repeated until no new themes emerged from the data and no further changes were needed to accommodate both judges’ suggestions (i.e. saturation). The data auditor checked the composite themes lists and made suggestions for alternative categories and labels.
Results
Directed content analysis
Table III synthesises the most common responses about implementation (quality and extent), barriers and facilitators for each quality statement. There were large differences in implementation across the statements. Implementation ranged from nil (statement 6) through partial or patchy (statement 1), to full (statement 4). For some statements, there was also disagreement between interviewees about how widely those practices were implemented (e.g. statement 8). Differences within and between statements were also present in the implementation quality reports; e.g., wide variations in average practice quality were reported in relation to some statements (e.g. statement 2), while average practice quality in other areas was reported to be high (statement 5).
Conventional content analysis
In addition to the statement-level analysis, our thematic-level analysis uncovered seven themes: implementation variability across and within borough, sector and care setting; specialist vs generalist settings; communication between healthcare professionals in different settings; generic barriers to implementation; children and young people (CYP); governance structures and leadership; and service users’ views. These themes are described below.
Implementation variability across and within borough, sector, care setting
A striking theme arising from our interviews was the variability between the three contiguous boroughs in the quality and extent to which the QS11 quality statements were implemented. For example, interviewees reported particularly high levels of opportunistic screening for hazardous and harmful drinking in one borough, owing to financial incentives unique to that locality (quality statement 2). Specialist healthcare services for alcohol misuse are determined by commissioners at borough level, which resulted in dramatically different service delivery models between boroughs. Alcohol services are delivered by a single third sector organisation in Lewisham, while in Southwark and Lambeth they are delivered by a collaborative between numerous NHS and third sector organisations. Accordingly, our interviewees highlighted significant differences between the boroughs in adherence to quality statements about specialist treatment. At sector level (i.e. NHS-funded vs third sector organisations), it is difficult to draw conclusions about implementation differences, as few interviewees had direct experience of providing or using services in both. However, at setting level (i.e. primary, secondary and specialty care) our interviewees often described significant differences (e.g. across different GP surgeries or accident and emergency (A&E) departments). Often these differences in implementation were attributed to specialist staff (e.g. a dual-diagnosis nurse or a consultant psychiatrist with an interest in alcohol misuse).
Specialist vs generalist settings
Another key feature in the responses was a perceived wide gap in the quality and extent to which QS11 was implemented in specialist settings (e.g. community drug and alcohol teams) vs generalist settings (e.g. GP surgeries and A&E departments). Almost universally, specialist settings were described as having closer and better adherence to QS11, while implementation of the quality statements was more variable, patchy or of lower quality in generalist settings.
Communication between healthcare professionals in different settings
Poor integration and communication between staff in different healthcare settings stood out as significant barriers to implementing QS11. Interviewees described this as resulting from regular service re-commissioning. For example, interviewees described inappropriate referrals to A&E departments from GPs, while GPs complained about a lack of confidence in referring patients (quality statement 3). Lewisham’s new integrated service made a concerted effort to ensure a clear alcohol pathway, with training covering what services are available and how to signpost patients to them. Nevertheless, our GP interviewees gave mixed feedback on how useful these pathways are in assisting with referrals (quality statement 3). Where interviewees reported good compliance with NHS treatment outcome reviews, they also reported poor follow-up from A&E department staff (quality statement 13). Weak integration and communication between service staff was also reported to have affected the extent to which people received medically assisted alcohol withdrawal in an appropriate setting (quality statement 8). Service users and their carers reported being confused and unclear about treatment pathways. Community or dual-diagnosis nurses were said to improve service communication and integration.
Generic barriers to implementation
Several generic barriers and facilitators to implementation were described by many interviewees. Typical implementation barriers across boroughs, sectors and settings included insufficient time and problems with accessing specialist services or support at certain times, work pressures and stress, and inadequate non-financial resources such as staffing levels. Across many QS11 quality statements, facilitators to better or fuller implementation included financial incentives, performance managing staff for a specific behaviour or outcomes, and obliging staff to undertake an activity as standard operating procedure (e.g. required paperwork).
CYP
The picture described to us by our interviewees was complex regarding CYP alcohol misuse services. Most interviewees had no experience of dealing with CYP who misuse alcohol, or with their families or carers. The few with such experience reported that key quality statements had not been implemented. One interviewee suggested that this might be due to few CYP presenting with alcohol misuse problems. Nonetheless, there are no clear pathways for referrals within the healthcare system, and professionals were unsure which screening tools are appropriate for CYP. Interviewees expressed concern that, owing to the focus in youth services on illegal substances, alcohol is seen as socially acceptable and is normalised by youth workers. Interviewees reported that alcohol misuse problems might most often be recognised only once a young person was referred to Youth Offending Services (YOS).
The mechanisms for dealing with CYP substance misuse do not fall under the NHS umbrella, except for Child and Adolescent Mental Health Services (CAMHS). Although CAMHS is the default service for CYP, interviewees reported that CAMHS staff do not see themselves as part of the mainstream healthcare system, which affects integration and communication between alcohol misuse service staff (quality statement 6). Interviewees described the threshold for referral to CAMHS from GPs as so high that referrals are often declined. Interviewees also noted that most children’s services were in the third sector, where worker training was less formalised. Consequently, interviewees expressed concern about a training gap around CYP binge drinking (quality statement 1), in stark contrast to YOS, where CYP receive comprehensive assessment and care as they are obliged to attend and undertake rehabilitation (quality statement 6).
Interviewees expressed concern that CYP’s needs as carers were not recognised, which acts as a barrier to them being offered appropriate support. Schools were where most CYP needs were identified. However, the stigma and secrecy attached to alcohol misuse, plus loyalty towards family members, may make it unlikely that problems will be identified. While harm risk may be identified through safeguarding processes, young carers’ emotional needs may not be (quality statement 7). Furthermore, referral to safeguarding services relies on clinicians going out of their way to contact external service staff.
Interviewees who spoke about CYP issues recognised the importance of supporting family members and carers. Most services incorporate carer needs assessments and services to support families, including expert family therapists, family and carer groups, and behavioural couples therapy. Where these services were used, they were often reported as useful or successful. However, interviewees told us how health and social care culture is still focussed on the service user. Thus, some interviewees reported difficulty for families and carers in accessing services, because triage for family members and carers is mostly carried out via an existing service user being a patient. Consequently, interventions with family members and carers were described as akin to using a sledgehammer to crack a nut; e.g., in South London and Maudsley NHS Foundation Trust (SLAM), an electronic patient-journey record is opened for families and carers, even though information about the intervention may not require the same detail as for a service user. Confidentiality was also considered a barrier to families and carers accessing services. Interviewees described how service users do not always want families to know about their situation, and family members concerned for loved ones find that health and social care workers cannot discuss patients. Finally, some interviewees were concerned that family members and carers may not want to receive treatment or support in the same physical location as service users (quality statement 7).
Governance structures and leadership
Commissioners reported a correlation between service improvement and proactive governance. For example, identification and brief advice training is a strategic intervention being rolled out across all boroughs. Interviewees reported good follow-up on implementing the training, through auditing and monitoring of trainers’ work. Consequently, interviewees reported that one facilitator to effective training is Clinical Commissioning Group (CCG) staff making it a strategic priority (quality statement 1). Another initiative introduced in one borough was a financial incentive for GPs to conduct opportunistic screening. Implementation was audited through client satisfaction surveys, and staff were asked to evaluate the brief intervention process through feedback questionnaires. Our interviewees reported that the scheme resulted in increased use and improved opportunistic screening, and ultimately in quality statement 2 being implemented better.
Service users’ views
Service users and carers reported difficulty accessing services because an individual service user may not want to associate himself or herself with other service users. This is particularly the case where drug and alcohol services are mixed, or housed in unattractive buildings and areas (quality statement 3). Some interviewees reported concerns about whether family members and carers were happy to receive support in the same location as the service user (quality statement 7). Stigma around services was reported to prevent patients from accessing certain services for medically assisted withdrawal (quality statement 8).
Discussion
Our directed content analysis provided information about how widely and how well the 13 QS11 quality statements were implemented. Implementation quality and extent varied widely between statements, and for some there were mixed reports about how widely and how well recommended practices were adhered to. For each statement, we asked about implementation barriers and facilitators, which allowed us to explore possible mechanisms or explanations for these findings. For example, interviewees believed that some differences in implementation quality and extent were attributable to inherent differences between healthcare settings (i.e. primary, secondary and specialist care), different delivery systems for specialist care in different geographical locations (i.e. NHS vs third sector provision), and the presence or absence of key leaders and staff promoting healthcare services for people who misuse alcohol. Significantly, the QS11 sections relating to CYP and their families or carers (statements 6 and 12) were noted as being particularly poorly implemented – attributed to a discrepancy between how QS11 prescribes services for CYP to be delivered and how services were commissioned and delivered locally. Relatively few interviewees spoke at length about CYP services, and those who did mentioned poor focus on alcohol misuse among CYP compared to older adults or other substance misuse (e.g. cannabis and other illegal substances).
Directed content analysis findings were supplemented by conventional content analysis, which delineated overarching barriers and facilitators to QS11 implementation. Generic barriers to implementation such as insufficient time and resources were often cited by our interviewees. Variability in service quality provided in different locations, sectors and settings were highlighted, with closest adherence to the guidelines being reported in specialist settings for treating people who misuse alcohol, and less adherence in more generalist settings. Poor or patchy QS11 implementation was often attributed to weak communication between healthcare professionals in different sectors or settings. Conversely, good and widespread QS11 implementation, and relatively good communication between healthcare professionals in different sectors and settings was often attributed to good governance, and strong and focussed leadership at various levels within the care pathway.
Our study has two major strengths. First, it provides a multi-professional perspective on QS11 implementation, considering insights from various stakeholders including commissioners, healthcare professionals and service users. Second, independent researchers conducted the interviews, so responses were likely to be more frank than if they had been conducted by professionals working in the area. Similarly, disinterested researchers conducted the analysis; they were not motivated to report findings in any particular direction. Two limitations, however, may restrict how far our findings can be generalised. First, our study was based on interviews with 38 key informants in three South East London boroughs; our work therefore requires replication. Second, our sampling strategy was designed to gain insights into 13 quality statements. Nevertheless, we obtained less data in relation to certain quality statements (i.e. CYP) than in relation to others (e.g. identification and assessment in all settings). Therefore, some analyses were based on less data from fewer sources than others. We did not report interviewee suppositions about differences between boroughs, sectors or settings that were not supported by direct experience, unless they were objectively verifiable (e.g. differences between boroughs in commissioning structure or service provision).
NICE quality standards summarise numerous individual NICE guidance documents. The aim was to simplify the guidance and enhance local implementation. However, in practice, each standard (containing several statements) requires complex actions by numerous professionals and institutions. It could be argued that standards are aimed at those responsible for commissioning the whole service (or care pathway), while the individual guidance documents are geared towards providers. We identified considerable variation in local alcohol commissioning models, associated with varying implementation. The differing models warrant further evaluation to identify which support best practice. To improve implementation of NICE standards, CCGs and health and well-being boards may need to refer to standards explicitly in their alcohol strategies. Further research is needed to assess whether standards add value over the individual guidance documents. Services for young people appear to be an area for further work to identify the most effective service models.
This study was supported by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care, South London, at King’s College Hospital NHS Foundation Trust. Peter Littlejohns was the founding Clinical and Public Health Director of the National Institute for Health and Care Excellence (NICE) from 1999 to 2011. In this role he designed the process and methods for the development of NICE guidelines and was the executive director responsible for the Citizens Council and the R&D programme. Gillian Leng is an Executive Director at NICE with responsibility for implementing NICE guidance and quality standards. Colin Drummond was Chair of the NICE Standards Committee. The views expressed are the authors’ own and not necessarily those of the NHS, the NIHR or the Department of Health.
