Language-Games and Quality Improvement in Healthcare in England

REVIEW ARTICLE

Stephen Newman*

Open Medicine Journal, 30 Sep 2017. DOI: 10.2174/1874220301704010073

Abstract

Background:

Quality in healthcare is high on the political agenda in England. This paper examines the model of inspection used by the Care Quality Commission to inspect healthcare provision in England.

Methods:

The paper comprises a critical review of the literature to examine the model of judgement used by the Care Quality Commission in their inspection framework.

Results:

It is argued that the model of judgement used makes various assumptions which throw into doubt the notion that such inspections provide an objective picture of quality in healthcare. However, the contrary view, that such inspections are highly subjective, is rejected in favour of developing a perspective informed by the later philosophy of Wittgenstein; one which recognises the importance of social agreement and understanding in developing meaning.

Conclusion:

This perspective highlights the importance of the different social groups that work in healthcare, and those who are treated in the healthcare system, for developing shared understandings and meanings of terms such as ‘quality’.

Keywords: Care Quality Commission, Healthcare, Inspection, Language-games, Quality, Wittgenstein.

1. INTRODUCTION

The Care Quality Commission has a high media profile, attracting headlines such as “Three Quarters of NHS Hospitals are ‘unsafe’ according to new report” [1] and “Inadequate celebrity mental health clinic ordered to improve” [2]. The notions of quality and inspection in healthcare also have a high political profile, with the setting up of the Commission for Health Improvement in 2001; its replacement, following the 2003 Health and Social Care Act, by the Healthcare Commission (HCC) [3]; and the HCC’s replacement, under the 2008 Health and Social Care Act, by the Care Quality Commission (CQC). The purposes of the CQC include setting quality and safety standards, inspecting services to “make sure they continue to meet our standards”, and making “fair and authoritative judgements, supported by the best information and evidence” [4]. The notion of quality in healthcare is thus enshrined in legislation, and the emphasis on quality and the regulatory environment are intimately linked. The aim of this article is to critique the CQC’s model of inspection and to suggest that a comparison with the model of inspection used in English schools, together with a perspective informed by the later philosophy of Ludwig Wittgenstein, is helpful in understanding the CQC model of inspection.

2. QUALITY IMPROVEMENT IN HEALTHCARE

As Leatherman and Sutherland remark (pointing to work by Lewis and Alvarez-Rosete [5]), “the regulatory environment in health is complex” [6], and the notion of quality improvement in healthcare is one that has shifted in meaning many times over the years. Some have focussed their attention on defining ‘quality improvement’ [7], which they define as:

the combined and unceasing efforts of everyone—healthcare professionals, patients and their families, researchers, payers, planners and educators—to make the changes that will lead to better patient outcomes (health), better system performance (care) and better professional development (learning) [7].

Some have taken the phrase ‘quality improvement in health care’ and concentrated on ‘quality of care’; thus (for example), Tabrizi [8] has suggested quality of care has three components: technical, service and customer. Others [9] have tried to set out explicitly their definitions of ‘health care’ and of ‘quality’. This has occurred in a political context too [10, 11]. The disputed meaning of ‘quality’ in healthcare came to wider public attention in the government’s dispute with junior doctors in 2015-2016, when Jeremy Hunt, as Secretary of State for Health, was reported as stating that the government wanted “to be able to promise everyone they will get the same high-quality care every day of the week” [12], while the British Medical Association (BMA) saw the government’s actions as an attempt to “force through a contract on junior doctors which threatens the quality of care patients receive” [13]. It seems that the concept of ‘quality’ is indeed “ambiguous and contested” [3]. As Donabedian put it over a decade ago, “quality of care is a remarkably difficult notion to define” [14].

These difficulties notwithstanding, the notion of ‘quality in healthcare’ is high on the national agenda, and has led to an emphasis on measuring performance and improving quality in the UK and elsewhere [9]. This emphasis on measuring performance and improving quality has, according to Campbell et al., led to an increased focus on the concept of quality of care so “that the concept is clearly understood” [9]. And so, in 2010, Raleigh and Foot noted that:

High Quality Care for All (Department of Health 2008) defined quality in the NHS in terms of patient safety, clinical effectiveness and the experience of patients [15].

Leading on from this, high quality care then came to be defined by the NHS as care which encompasses three equally important parts:

Care that is clinically effective – not just in the eyes of clinicians but in the eyes of patients themselves;
Care that is safe; and
Care that provides as positive an experience for patients as possible.

High quality care can only be achieved when all three dimensions are present – not just one or two of them [16].

Thus, on this view, if all three of these criteria are satisfied, the term ‘quality’ is being used correctly. The criteria are thus the necessary and sufficient conditions for using the term ‘high quality care’.

3. INSPECTION

It is in this context that the CQC, the “independent regulator of health and social care in England” [17], carries out its work of monitoring, inspecting and regulating various providers of health and social care [18]. In their ‘provider handbooks’ [17] the CQC sets out in detail what their inspectors look for when they inspect. Separate provider handbooks are published for: Acute hospitals; Community adult social care services; Community health services; Health and social care in prisons and young offender institutions, and healthcare in immigration removal centres; Hospice services; NHS 111 services (a phone service for minor ailments and minor injuries); NHS and independent ambulance services; NHS GP practices and GP out-of-hours services; Primary care dental services; Residential adult social care services; Specialist mental health services; and Specialist substance misuse services. However, each version takes a common approach to the process of inspection and the making of judgements. In what follows, examples are taken from the Acute hospitals provider handbook [19].

Given my own background in education, it seemed that it might be helpful to compare the approach adopted by the CQC for the inspection of healthcare provision with that used by the Office for Standards in Education, Children's Services and Skills (Ofsted) some twenty years ago for the inspection of schools in England, using a similar line of argument to that used by Gilroy and Wilcox [20]. In passing, it is worth noting that the overall ratings used by the CQC (outstanding; good; requires improvement; inadequate) are the same as those used by Ofsted [21] and that others have also found it useful to compare the CQC with Ofsted [22]. I turn first to outlining the CQC model of judgement, and I borrow from the structure offered by Gilroy and Wilcox [20] in so doing.

4. THE CQC MODEL OF JUDGEMENT

The way in which an inspection operates, and how that model of inspection has been developed, has been outlined elsewhere [21-23]. The intention is “for [the] CQC to embed validity and consistency in everything we do” [19] in order to make a “comprehensive assessment of care quality” [19]. The notions of validity and reliability in relation to CQC inspections have been summarised by Boyd et al. (pointing to work by Walshe and Shortell [24]) as follows:

Standards should ideally be valid, i.e. actually reflect aspects of quality rather than something else, and reliable, i.e., produce comparable results when used by different inspectors, by the same inspector on different occasions, and on different organisations [23].

This assessment is to be made using a combination of ‘Intelligent Monitoring’ and ‘Expert inspections’ to answer five “key questions” about each core service provided, namely: is healthcare provision safe?; is it effective?; is it caring?; is it responsive? and is it well-led? How these five key questions relate to the three dimensions of quality is more complicated, as every aspect of delivering healthcare has a quality dimension, as is implied by the statement that all three dimensions must be present for high quality care to be achieved. Some links seem clear; for example, the dimension of quality specifying that care should be ‘clinically effective’ seems to link most obviously to the key question about whether healthcare provision is effective, although it is worth noting that the notion of clinical effectiveness per se is lost, or at least, not specified. Similarly, the key question about safety clearly links to the notion of safety listed as one of the key dimensions of quality. Presumably healthcare that is caring and responsive will help to provide patients with as positive an experience as possible, and healthcare that is well-led might be expected to contribute to all aspects. These five key questions, “all equally important” [19], are then given further amplification. I shall return to this point in due course.

The inspection having been carried out, the findings are then described in a summary report. For example, with regard to safety, in their report on safety at the St George’s University Hospitals NHS Foundation Trust [25] the CQC inspectors noted 4 “Key intelligence indicators” (pp.9-10), and then summarised their judgements about safety (pp.11-15). These judgements having been made, the grade to be given is then selected [19]. Inspectors then summarise their judgements on a 4-point scale for each of the five key questions, rating each as outstanding, good, requires improvement, or inadequate [26]. The characteristics of each of these rating levels are summarised in a guide, but these

are not to be used as a checklist or an exhaustive list. The inspection team use their professional judgment, taking into account best practice and recognised guidelines. Not every characteristic has to be present for the corresponding rating to be given. This is particularly true at the extremes. For example, if the impact on the quality of care or on people’s experience is significant, then displaying just one element of the characteristics of inadequate could lead to a rating of inadequate. Even those rated as outstanding are likely to have areas where they could improve. In the same way, a service or provider does not need to display every one of the characteristics of ‘good’ in order to be rated as good [26].

The relevant services having been rated using the five key questions, an aggregated rating has to be ascertained for each service, and then used to give an overall rating of the provider. The overarching aggregation principles are that “The five key questions are all equally important and should be weighted equally when aggregating”, that “The core services are all equally important and should be weighted equally” and that “All ratings will be treated equally for the purposes of aggregating” unless certain additional principles (which are specified) come into play. These additional principles will be applied using “professional judgement” [26].
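To make these aggregation principles concrete, the sketch below (in Python, and purely illustrative) models one way that equal weighting, combined with a professional-judgement override, could operate. The four-point scale is the CQC’s [19], but the default rule of taking the most common rating, the tie-break towards the lower rating, and the ‘override’ hook are assumptions made here for illustration; the CQC does not publish its procedure in this form.

    from collections import Counter

    # The four-point scale used by the CQC, lowest to highest [19].
    SCALE = ["inadequate", "requires improvement", "good", "outstanding"]

    def aggregate(ratings, override=None):
        """Aggregate equally weighted ratings into a single rating.

        The default rule (an illustrative assumption, not the CQC's
        published procedure): take the most common rating, breaking ties
        towards the lower end of the scale. `override` stands in for
        'professional judgement': it may return a different rating, or
        None to accept the default.
        """
        counts = Counter(ratings.values())
        top = max(counts.values())
        default = min((r for r, c in counts.items() if c == top), key=SCALE.index)
        if override:
            departure = override(ratings, default)
            if departure is not None:
                return departure  # any departure is recorded and reviewed [19]
        return default

    # Example: ratings for the five key questions of one core service.
    service = {"safe": "requires improvement", "effective": "good",
               "caring": "good", "responsive": "good", "well-led": "good"}
    print(aggregate(service))  # -> 'good' under the default equal-weighting rule

Even on so simple a sketch, everything contentious is delegated to the override: the principles themselves do not say when a departure is warranted, which is precisely the gap that ‘professional judgement’ is asked to fill.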

5. ASSUMPTIONS

In their analysis of the Ofsted model of judgement, Gilroy and Wilcox pointed out several assumptions behind it which caused them concern [20]. The first of these, they argued, was the assumption that the criteria used by Ofsted (in 1997) were generally accepted as standards of good practice. A second assumption was that the meanings of the criteria were unambiguous for all readers and users of the framework. A third assumption was that the application of the criteria was a straightforward process. This led Gilroy and Wilcox to highlight a fourth assumption in relation to school inspections, namely that which saw the process of aggregating judgements as unproblematic. In Gilroy and Wilcox’s view, the process of aggregation was a problem for Ofsted in 1997 because

if an inspector’s judgement (based, of course, on supposedly Factual Criteria) should be contradicted by another inspector’s then further criteria would have to be invoked to allow judgements to be made between the two [20].

It was the argument of Gilroy and Wilcox that such further criteria were, at best, covert.

Applying a similar perspective to the CQC inspections of quality in healthcare, and turning first to the possible assumption that the criteria used in the inspection process are generally accepted as “fundamental standards of quality and safety” [19], there is some indication that there may be a measure of acceptance, as the CQC point out that their approach “has been developed over time and through consultation” [19]. However, as Dixon-Woods et al. remark:

Improvement interventions are often ‘essentially contested’: everyone may agree on the need for good quality but not on what defines good quality or how it should be achieved [27].

In any event, ‘consultation’ does not inevitably lead to agreement, and there is evidence that people from different groups (patients, doctors, nurses, the public, the CQC inspectors, and so on) do have different understandings of what high quality care entails [28-30]. For these reasons, and turning now to the possible assumption that the meanings are unambiguous, the notion that there is a shared understanding is debatable: as the

instability of regulatory goals and quality definitions has made it difficult ‘to have a regulatory framework with integrity’, as one regulatory official explained to us. Each successive reform, he noted, ‘takes apart how regulation has been carried out’ and forces ‘us to go about designing these regulatory models basically from scratch’. In that context, it has been difficult to operationalise effective risk-based (or indeed any other) approaches to regulating what has essentially been a moving target [3].

Beaussier et al. also point out that government interventions

tend to undermine previously agreed ex ante compromises between competing quality goals and to reinforce the tendency for new definitions and standards to layer up on top of each other in unstable configurations that are consequently difficult either to assess or enforce in proportion to risk [3].

Thus, argue Beaussier et al., “ambiguities in the very idea of healthcare quality have made it impossible to define clear and enduring goals for regulating it in terms of risk” [3].

Turning to the third possible assumption, it would seem that the application of the criteria is far from straightforward; Boyd et al. noted that in one study by Tuijn et al. [31] “Hospital inspectors demonstrated widely differing interpretations of what each assessment criterion meant” and of the relative importance of the criteria [23]. Boyd et al. also suggested that the application of the criteria in inspections is problematic: they argued that “Some variability in judgements may have been due to different professional backgrounds” between inspectors [23]; that there was often “Difficulty in determining domains and ratings during inspections” [23]; and “a lack of detailed understanding of some of the domains and associated key lines of enquiry” [23]. Some of these difficulties, they consider, may be due to the categories themselves, and others to their practical implementation [23]. Further evidence of such difficulties is given by Walshe et al. [32] and by the 2015 report of the House of Commons Committee of Public Accounts, which showed that even the collection and use of measures and evidence is problematic [33]. Some of these difficulties relate to practical matters such as the recruitment and retention of appropriate inspectors, analysts, and managers [33]. Others relate to the “variation in the quality of initial judgements”, inaccuracies in data, and over-reliance on anecdotal evidence [33].

These difficulties then extend to the fourth area of possible difficulty, namely the process of aggregation. In relation to healthcare, the general process of aggregation is described thus:

When aggregating ratings, our inspection teams follow a set of principles to ensure consistent decisions. Our principles are set out in appendix D. The principles will normally apply but will be balanced by inspection teams using their professional judgement. Our ratings must be proportionate to all of the available evidence and the specific facts and circumstances. Examples of when we may use professional judgement to depart from the principles include:

  • Where the concerns identified have a very low impact on people who use services
  • Where we have confidence in the service to address concerns or where action has already been taken
  • Where a single concern has been identified in a small part of a very large and wide ranging service
  • Where a core service is very small compared to the other core services within a provider [19].

The difficulty here is, of course, that how the principles are “balanced… using professional judgement” and made “proportionate” is not made explicit, nor are the grounds on which the inspectors may depart from the principles. There is an assumption that there will be agreement on what ‘low impact’ means, or on when there is ‘confidence’. Some indication that this may lead to problems is revealed by the statement that:

Where a rating decision is not consistent with the principles, the rationale will be clearly recorded and the decision reviewed by a national quality control and consistency panel. The role of this group is to ensure the quality of every quality report before it is shared with the organisation being inspected [19].

There is a further aspect that is worthy of mention. The inspection approach assumes that CQC inspections can observe a situation from a ‘neutral’ standpoint. This has at least two implications in relation to the inspections. The first of these is the notion that there are decontextualized and ahistorical “key principles” [19], and “fundamental standards, below which the provision of regulated activities and the care people receive must never fall” [19], which mean that, although how the approach is put into practice might develop, the overall framework will not: “the overall framework, including our five key questions, our core services, the key lines of enquiry and ratings characteristics, will remain constant” [19]. This is reminiscent of English’s critique of a standardized knowledge base in educational leadership development programmes (such as those in the USA about which he writes, and the National Professional Qualification for Headship, in England), as presenting themselves as a

monolithic, uncontested, internally consistent fount of universally accepted stipulations and axioms and tenets [34]

where

instead of the standards enabling practitioners to confront changing circumstances, the veracity and utility of them remain only as long as situations do not change. They are thus antichange [34].

The second notion is that the inspectors can inspect the situation accurately without influencing it in any way; that they are neutral observers who are able to enter into a social situation without changing it. In fact, the situation is more akin to that which Giddens described as a ‘double hermeneutic’: “a mutual interpretative interplay between social science and those whose activities compose its subject matter” [35].

6. DISCUSSION

The notion that Trusts can be “judged objectively against clear criteria” [36] to produce “only one version of the truth that everybody, locally and nationally, will use to drive improvements” [37] has been shown to be something of a chimera. One suggestion for resolving this difficulty is that what is needed are more “explicit criteria that are good ways of assessing the CQC five key questions, and … sensible standards of achievement for each criteria” [38]. This view, however, begs the questions of what is to count as ‘sensible’ and of who is to decide. A related approach is that which argues for more quantitative targets, the argument being that

Reporting performance against clear targets is vital for both transparency and accountability and measuring improvements over time [33].

However, the use of such factual criteria (even if they could be devised) is not without its difficulties, as has been shown above and elsewhere [39, 40].

Another suggestion is to more fully explicate the meanings of the criteria, especially where these may be considered as ‘conventional criteria’ [20]. The difficulties in doing this in relation to Ofsted inspections have been highlighted by Gilroy and Wilcox, and similar difficulties can be seen in the approach adopted by the CQC, taking as one example the requirement that inspectors should judge whether patients are safe. This is one of the five ‘key questions’ [19]. The question that then arises is ‘what does safe mean?’, to which the answer given is that, “By safe, we mean that people are protected from abuse and avoidable harm” [19]. Abuse is then further defined as a term to include “physical, sexual, mental or psychological, financial, neglect, institutional or discriminatory abuse” [26]. Whether patients are safe is to be judged by reference to a range of criteria. Taking just one of these as an example, we can look at the question “Are there reliable systems, processes and practices in place to keep people safe and safeguarded from abuse?” [26]. This question is itself to be answered by reference to an additional 11 questions or ‘Prompts’, all of which require inspectors to make judgements. For example, one of the Prompts asks, “Are reliable systems in place to prevent and protect people from a healthcare-associated infection?” [26]. Presumably each of these questions needs further clarification if it is to be answered ‘objectively’: what, for example, is to count as ‘reliable’?
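The nesting just described can be pictured as a tree of judgements. The sketch below is a schematic reconstruction in Python; the labels are taken from the handbook excerpts quoted above [19, 26], but the structure shown is partial and illustrative only.

    # Schematic reconstruction of the nesting described above. Labels are
    # from the handbook excerpts quoted in the text [19, 26]; the tree is
    # deliberately partial.
    criteria_tree = {
        "Is it safe?": {  # one of the five key questions
            ("Are there reliable systems, processes and practices in place "
             "to keep people safe and safeguarded from abuse?"): [
                # ... one of 11 prompts, each itself requiring a judgement:
                "Are reliable systems in place to prevent and protect people "
                "from a healthcare-associated infection?",
            ],
        },
    }

    # Every leaf is itself a question awaiting criteria for terms such as
    # 'reliable'; nothing in the tree halts the regress by itself.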

Such an approach runs the risk of generating an infinite regress of criteria, as each successive term in the definition needs itself to be defined. But, in fact, when we examine the CQC inspection documentation, we see that the regress is halted by the notion of the ‘professional judgement’ of the inspectors. Thus, for example, we read in the Provider handbook that, before the inspection begins, the inspectors use the data collected in advance,

along with their knowledge of the service and their professional judgement to plan the inspection [19]

and that during and after the inspection, “Our inspectors use professional judgement, supported by objective measures and evidence, to assess services against our five key questions” [19]; that the inspection guidance setting out what outstanding, good, requires improvement and inadequate care look like in relation to each of the five key questions “provide a framework which, when applied using professional judgement, guide our inspection teams when they award a rating” [19]. The notion of professional judgement is similarly invoked elsewhere:

The inspection team use their professional judgment, taking into account best practice and recognised guidelines, with consistency assured through the quality control process;

Inspection teams base their judgements on the available evidence, using their professional judgement;

When aggregating ratings, our inspection teams follow a set of principles to ensure consistent decisions. Our principles are set out in appendix D.

The principles will normally apply but will be balanced by inspection teams using their professional judgement. Our ratings must be proportionate to all of the available evidence and the specific facts and circumstances [19].

Examples are given of when professional judgement may be used to depart from these principles, the aim being “to produce a fair and proportionate result” [19].

How do inspectors develop their ‘professional judgement’? For permanent employees of the CQC, this is done in part by developing their ability to make a ‘corporate judgement’ through participation in a compulsory corporate induction and a six-week-long, role-specific induction programme [41]. Permanent posts are subject to a six-month probationary period, to allow “both the line manager and the employee to assess objectively whether or not the new recruit is suitable for the role” [41]. There are also secondment opportunities lasting two years for “candidates with a health or social care background currently employed by a similar public sector organisation, likely to be NHS, local Authority or another non-departmental public body or Arm's Length Body (ALB)” [42]. Such candidates also have to participate in the corporate induction programme. An indication of the membership of an inspection team is that used in the inspection of the St George’s University Hospitals NHS Foundation Trust, to which reference has already been made. In that inspection,

the trust was visited by a team of 62 people, including: CQC inspectors, analysts and a variety of specialists. There were consultants in emergency medicine, anaesthesia and intensive care, obstetrics and gynaecology, radiology and neonatal care. The team also included nurses with backgrounds in all the specialties we inspected, as well as a midwife, an infection nurse and a student nurse. There were also specialists with board-level experience and three experts by experience [25].

Given a sympathetic interpretation, the CQC induction programme can be seen as a way in which inspectors, from different contexts, can come to have a shared meaning of the notion of quality in healthcare. This might not be too difficult if the criticism levelled in 2014, that a large number of inspectors were drawn from professional contacts and formal and informal networks rather than open recruitment [32], still applies. Less sympathetically, the CQC induction programme might be interpreted as a way in which the CQC meaning of the notion of quality in healthcare is given to (or imposed on) inspectors. For those being inspected who have not participated in the induction programme, the meaning is likely to remain more obscure. They will have possibly different meanings; related, although not necessarily identical, to those of the CQC inspectors.

If the interpretation thus far is plausible, we would expect to see it reflected in accounts of CQC inspections, and indeed we do. Boyd et al. note that there can be variation in CQC assessments; that doctors, CQC staff, and patients/experts by experience may make different judgements; and that what the criteria meant, and how they were to be used, often required further explanation and guidance [23]. They argue that in some corroboration sessions,

the rating and report writing process sometimes favours the judgements of CQC staff, on account of their leadership roles, the process of report writing being largely in their hands and some ratings being changed at National Quality Assurance Group meetings [23].

This can be interpreted as supporting the view that the CQC induction programme is a way in which inspectors come to have a shared meaning of the notion of quality in healthcare, and that this meaning is then checked and imposed as the correct one. As one participant in the research said:

Decisions about domains are checked by the inspector writing the report to ensure they are a correct interpretation (participant quoted by Boyd et al. [23]).

There is evidence too that the CQC inspectors bring their own professional experiences and meanings from their varied contexts, including those from the CQC induction programme, whilst others (such as those working, or being treated, in the contexts being inspected by the CQC) have different meanings; related, although not necessarily identical, to those of the CQC inspectors [23]. The inspectors’ prior experiences and backgrounds (if relevant) may be helpful because they can give the staff being inspected in A&E “confidence that the CQC inspection team … [know] what to look for” (participant quoted in Boyd et al. [23]). If such backgrounds and experiences are not relevant, then those being inspected tend to lose confidence in the judgements those inspectors make [23]. Aggregation too, as is to be expected given the foregoing analysis, can present difficulties. For example, as Boyd et al. report:

standards may vary widely between different parts of the service area. This means that determining an overall rating is difficult, and whatever rating is chosen, some parts of the service will feel that it does not accurately reflect their work… The algorithms that CQC uses to aggregate ratings are also sometimes perceived by staff to produce invalid ratings [23].

One of the participants in their research commented that,

For one of our hospital sites the hospital had an overall rating of Requires Improvement although 6 out of the 8 clinical services inspected were rated as good. This weighting bias towards Requires Improvement was difficult to explain to staff [23].
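One way in which such a ‘weighting bias’ can arise is sketched below. The limiting rule used (capping the aggregate when two or more core services fall below ‘good’) is a hypothetical reconstruction chosen to reproduce the participant’s example; it is not the CQC’s published algorithm.

    # The participant's example [23]: 6 of 8 core services rated 'good',
    # 2 rated 'requires improvement'.
    core_services = ["good"] * 6 + ["requires improvement"] * 2

    # Assumed rule (illustration only): if two or more core services fall
    # below 'good', the overall rating is capped at 'requires improvement',
    # whatever the majority rating may be.
    below_good = sum(r in ("requires improvement", "inadequate") for r in core_services)
    overall = "requires improvement" if below_good >= 2 else "good"
    print(overall)  # -> 'requires improvement', despite six 'good' ratings

Whatever rule is actually applied, the difficulty reported remains: the aggregate is not a simple function of the majority rating, and staff who see only the individual ratings will find the overall result hard to explain.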

The picture that thus begins to emerge is one of complexity, nuance, varied perceptions and sometimes contradictions; a recognition that different inspectors have particular personalities, and may have different views and make different judgements; where informal feedback sometimes seems at odds with final feedback [23] – an observation noted too by the House of Commons Committee of Public Accounts [33]. Some methods that the CQC uses (e.g. patient questionnaires) may not be sensitive to local circumstances [33], and particular terms used in the inspection report may send out misleading messages (i.e. have different meanings to those intended) to local media and patients [21]. Even when an inspection leads to an overall rating of ‘Inadequate’, there can be certain services rated as good or where “most patients were positive about the care that they had received from staff and the way they had their treatment explained to them” [25], and where “feedback from survey results showed high levels of satisfaction by patients and relatives with most of the services provided” [25]. It is also useful to highlight that, in spite of recent changes to the inspection model, the mission and purpose of the CQC in general, and of inspection in particular, are, perhaps necessarily, complex. All this is a long way from any simplistic interpretation of ‘quality in healthcare’, which can be inspected and assessed from some exterior, neutral standpoint to arrive at definitive ratings. It would seem, as Maxwell noted over 20 years ago, and as noted again more recently, that quality in healthcare is indeed “multidimensional” [43, 44].

Given this complexity, what are we to make of the notion of quality in healthcare? Although some might be ready to accept the objectivity of the CQC reports (see, for example, one hospital Chief Executive who was quoted as welcoming “the objectivity of the CQC inspections” [45]), for the reasons discussed, this takes too simplistic a view. An alternative might be to suggest that evaluating quality in healthcare is “highly subjective” [23], and that there is little or nothing that can be done to mitigate such subjectivity. However, such an approach is a ‘counsel of despair’, and the idea that ‘quality in healthcare’ can mean anything to anybody flies in the face of the fact that the phrase does have many different and inter-related meanings to different groups. What is needed, therefore, is an epistemology which recognises the importance of social agreement and understanding in developing meaning.

It is the contention here that one such epistemology capable of giving us an insight into these aspects is that suggested by the later philosophy of Ludwig Wittgenstein. Wittgenstein’s later work is complex, and so care is needed when approaching any particular aspect of that work in isolation, lest what is argued becomes little more than a caricature which fails to recognise the subtlety of Wittgenstein’s work. With that caveat in mind, one aspect which seems of particular relevance here is his notion of ‘language-games’.

Wittgenstein proposes that words do not have meaning by referring to some ‘objective’ or ‘essential’ meaning; nor do they (or could they) have meaning by subjective introspection. Rather, for Wittgenstein, language is part of a social whole, consisting of both verbal and non-verbal behaviours in specific contexts, in particular times and places [46], where “linguistic and nonlinguistic behavior are woven together into an intricate organic whole” [47]. It is the whole context which provides the ‘frame of reference’ for deciding on the meaning of a particular linguistic or non-linguistic behaviour (Pears [48]).

The term ‘language-game’ has several related meanings [49], but one aspect of the term develops the point that the ‘frame of reference’ is the whole context in which, as well as verbal language, we observe gestures, actions, expressions, tone of voice, and the like. Wittgenstein gives the following as some examples of different language-games:

Giving orders, and obeying them –

Describing the appearance of an object, or giving its measurements –

Constructing an object from a description (a drawing) –

Reporting an event –

Speculating about an event –

Forming and testing a hypothesis –

Presenting the results of an experiment in tables and diagrams –

Making up a story; and reading it –

Play-acting –

Singing catches –

Guessing riddles –

Making a joke; telling it –

Solving a problem in practical arithmetic –

Translating from one language into another –

Asking, thanking, cursing, greeting, praying [46].

There are “countless kinds” of language-games, and new ones can emerge and others fade from view [46]. Within such language-games, there are rules or customary ways of acting verbally and non-verbally; the rules which provide that ‘frame of reference’ for any particular language-game may be implicit or explicit, clear or opaque. Sometimes they are just used; sometimes they need to be explained [20]; meanings “are rule and criteria dependent in subtle and complex ways” [20]. What then will provide a criterion of understanding the meaning of a word, or of an action? For Wittgenstein, it is the total social context that gives words and actions their meaning [50]; there is

a way of grasping a rule which is not an interpretation, but which is exhibited in what we call ‘obeying the rule’ and ‘going against it’ in actual cases [46].

In these actual cases, the check will be public and social, “because people in general apply this picture like this” [46].

Learning new meanings, then, may be achieved in a variety of ways. On occasions, an explicit explanation or demonstration of the rules of that language-game may be helpful, but the rules of a language-game may also be picked up by participation, by observation, or by ‘trial and error’. Sometimes the rules of a language-game may be written down and codified. When we encounter a new language-game, we may indeed be surprised, confused, unsure what is expected of us. We may feel stressed or worried, or perhaps excited. It is likely that different people will react to such situations in individual ways. We may feel that the actions and words of an unfamiliar language-game are different to those we have encountered in other language-games with which we are more familiar. As we become more used to playing a particular language-game, the meanings of the actions and words of that context become clearer to us. We become more able to deal “with situations of uncertainty, instability, uniqueness, and value conflict” [51], and more able to “criticize the tacit understandings that have grown up around the repetitive experiences of a specialized practice” [51]. With others in a similar position, the notion of ‘professional judgement’ thus develops. Again, as is to be expected, the notion of ‘professional judgement’ is not one with a single clearly defined ‘essential’ meaning; it will have varied meanings depending on the context in which it is being used. That the meanings of the same words and actions in different healthcare language-games can be subtly different needs to be acknowledged and understood; where this does not occur, we may expect to see problems, as indeed has been reported [25, 52].

With this perspective, we can acknowledge and understand the different meanings given by different groups to such notions as ‘quality’ and ‘quality improvement’ in healthcare, such as those identified earlier in this paper. We can understand why such terms can be difficult to define to the satisfaction of everyone, in a way that is acceptable in all circumstances. Such an account is capable of explicating the notion of ‘professional judgement’, and of recognising and valuing difference: for example, that people from different language-games may have different understandings; that the same word can have several meanings, some of them closely related, others less so; that meanings may be different in different contexts [53]. This may be surprising on occasions, but

that is exactly what happens when an unexpected difference comes to light. One is surprised ... Even more than by differences in the use of different words, we are surprised by differences in the way in which the same word is used in different contexts [53].

How then might all those involved in making judgements about quality in healthcare become more familiar with the various language-games involved, such as that of CQC inspections, as well as of others working in the contexts that are inspected by the CQC? I have suggested elsewhere [54] (there in relation to teacher education and training) that

with the recognition that convergence of meaning involves actions as well as words, there is the acknowledgement that developing deeper understanding of meanings involves words as well as actions; that discussion and not mere blind imitation is of value [54].

Such recognition might, for example, help to ensure that inspectors maintain their familiarity with understandings from different aspects of the healthcare system - likely to be especially important for permanent inspectors, because those on a 2-year secondment are possibly more likely to have such understandings [21]. It could also be helpful for the CQC to continue to widen the range of inspectors. Such a move would allow meanings from other healthcare contexts to inform those used by the CQC, and those used by the CQC to inform those used elsewhere; thus developing a shared language-game. This might also develop from a more flexible

job/career structure for clinically experienced inspectors that enables them to devote substantial amounts of time to both inspecting and to delivering services. This might also facilitate the development of ongoing relationships between inspectors and services, which could be beneficial for service improvement [23].

Healthcare providers can also take steps to develop shared meanings (through, for example, discussion, dissemination, policies, staff development, meetings, publications, and so on) with the CQC and NHS Improvement, with politicians, the government, doctors, nursing staff, patients, and others working in and with the NHS. This edition of the Open Medicine Journal can be seen as one such opportunity to do so.

7. CONCLUSION

The perspective offered here provides a way in which the model of CQC inspections, with its associated documentation and explication, can be seen, sympathetically, as a way in which a new language-game of ‘quality improvement in healthcare’ is being developed, or, less sympathetically, as a way in which that new language-game is being imposed on others. Whichever perspective is taken, the importance of developing shared understandings and meanings is emphasised. Those who take the sympathetic view could argue that the inspection framework, training, reports and handbooks are the ways in which the ‘rules’ of that new language-game are being shared; those who take a less sympathetic view might argue that such ‘sharing’ amounts to little more than dissemination, and that the meanings and understandings of those in different language-games need to be incorporated into the discussion in order to develop a more holistic account of quality improvement in healthcare.

ETHICS APPROVAL AND CONSENT TO PARTICIPATE

Not applicable.

HUMAN AND ANIMAL RIGHTS

No animals or humans were used in studies that form the basis of this research.

CONSENT FOR PUBLICATION

Not applicable.

CONFLICT OF INTEREST

The author declares no conflict of interest, financial or otherwise.

ACKNOWLEDGEMENTS

Declared none.

REFERENCES

1
Mortimer C. Three quarters of NHS Hospitals are ‘unsafe’ according to new report. The Independent 15th October 2015.
2
"Inadequate" celebrity mental health clinic ordered to improve. The Guardian 14th March 2017.
3
Beaussier A-L, Demeritt D, Griffiths A, Rothstein H. Accounting for failure: risk-based regulation and the problems of ensuring healthcare quality in the NHS. Health Risk Soc 2016; 18(3-4): 205-24.
4
CQC. About us: What we do and how we do it. Newcastle: Care Quality Commission 2016. Available at: http://www.cqc.org.uk/sites/default/files/documents/20131108 6657_CQC_Aboutus_A5_Web version.pdf Accessed: 17 November 2016.
5
Lewis R, Alvarez-Rosete A, Eds. How to regulate health care in England?. London: The King’s Fund 2006.
6
Leatherman S, Sutherland K, Eds. The Quest for Quality: Refining the NHS Reforms A policy analysis and chartbook. London: The Nuffield Trust 2008.
7
Batalden PB, Davidoff F. What is “quality improvement” and how can it transform healthcare? Qual Saf Health Care 2007; 16(1): 2-3.
8
Tabrizi J. Quality Improvement in Health Care. Journal of Clinical Research & Governance 2013; 2: 1-2.
9
Campbell SM, Roland MO, Buetow SA. Defining quality of care. Soc Sci Med 2000; 51(11): 1611-25.
10
Boseley S, Campbell D. Jeremy Hunt on the NHS: ‘This decade needs to see the quality revolution’. The Guardian 15 February 2015.
11
Department of Health, Hunt J Making healthcare more human-centred and not system-centred [Speech] 16 July 2015. Available at: https://www.gov.uk/government/speeches/making-healthcare -more-human-centred-and-not-system-centred Accessed: 16 November 2016
12
Demianyk G. Jeremy Hunt says he’s ‘not an academic’ but academics are wrong about NHS ‘weekend effect’: Huffpost politics 2016. Available at: http://www.huffingtonpost.co.uk/entry/ hunt-weekend-effect_uk_5730a099e4b05c31e5726594. Accessed: 15 November 2016
13
BMA. The truth about the junior doctors’ dispute. London: British Medical Association.
14
Donabedian A. Evaluating the quality of medical care. 1966. Milbank Q 2005; 83(4): 691-729.
15
Raleigh V, Foot C. Getting the measure of quality: opportunities and challenges London: The King’s Fund; 2010. Available at: https://kf-ssl-testing.torchboxapps.com/sites/files/kf/Getting-the-measure-of- quality-Veena-Raleigh-Catherine-Foot-The-Kings-Fund-January-2010.pdf
16
NHS Health and high quality care for all, now and for future generations 2016. Available at: https://www.england.nhs.uk/about/our-vision-and- purpose/imp-our-mission/high-quality-care/
17
CQC Provider handbooks 2016. Available at: http://www.cqc.org.uk/content/provider-handbooks
18
CQC What we do 2016. Available at: http://www.cqc.org.uk/content/what-we-do
19
CQC. How CQC regulates: NHS and independent acute hospitals. Provider handbook. Newcastle: Care Quality Commission 2015. Available at: http://www.cqc.org.uk/sites/default/files/ 20150327_acute_hospital_provider_handbook_march_15_update_01.pdf
20
Gilroy P, Wilcox B. OFSTED, criteria and the nature of social understanding: A Wittgensteinian critique of the practice of educational judgement. Br J Educ Stud 1997; 45(1): 22-38.
21
Iacobucci G. Anatomy of a care quality commission inspection. BMJ 2014; 349: g7353.
22
Walshe K, Phipps D. Developing a strategic framework to guide the Care Quality Commission’s programme of evaluation. Manchester: Manchester Business School 2013.
23
Boyd A, Addicott R, Robertson R, Ross S, Walshe K. Measuring quality through inspection: the validity and reliability of inspector assessments of acute hospitals in England. European Health Policy Group conference 2014.
24
Walshe K, Shortell SM. Social regulation of healthcare organizations in the United States: developing a framework for evaluation. Health Serv Manage Res 2004; 17(2): 79-99.
25
CQC. St George’s University Hospitals NHS Foundation Trust: Quality Report. Newcastle: Care Quality Commission 2016.
26
CQC. How CQC regulates: NHS and independent acute hospitals. Appendices to the provider handbook. Newcastle: Care Quality Commission 2015. Available at: http://www.cqc.org.uk/sites/default/files/20150326_acute_hospital_ provider_handbook_appendices_march_15_update.pdf
27
Dixon-Woods M, McNicol S, Martin G. Ten challenges in improving quality in healthcare: lessons from the Health Foundation’s programme evaluations and relevant literature. BMJ Qual Saf 2012; 21(10): 876-84.
28
BMA CQC: combative, querulous, crushing 2016. Available at: https://www.bma.org.uk/news/2016/ april/cqc-combative-querulous-crushing
29
Coulter A. Can patients assess the quality of health care? BMJ 2006; 333(7557): 1-2.
30
Sutcliffe A. Putting the record straight 2014. Available at: http://www.cqc.org.uk/content/putting-record-straight
31
Tuijn SM, Robben PB, Janssens FJ, van den Bergh H. Evaluating instruments for regulation of health care in the Netherlands. J Eval Clin Pract 2011; 17(3): 411-9.
32
Walshe K, Addicott R, Boyd A, Robertson R, Ross S. Evaluating the care quality commission’s acute hospital regulatory model: final report. Manchester, London: The University of Manchester/Manchester Business School and The King’s Fund 2014.
33
House of Commons Committee of Public Accounts. Care Quality Commission Twelfth Report of Session 2015–16 London: The Stationery Office 2015. Available at: http://www.publications.parliament.uk/ pa/cm201516/cmselect/cmpubacc/501/501.pdf
34
English F. The Unintended Consequences of a Standardized Knowledge Base in Advancing Educational Leadership Preparation. Educ Adm Q 2006; 42(3): 461-72.
35
Giddens A. The Constitution of Society: Outline of the Theory of Structuration. Berkeley and Los Angeles, CA: University of California Press 1984.
36
NHS Provider Proposed single oversight framework is significant change for all NHS providers [Press release] 2016. Available at: https://www.nhsproviders.org/media/2098/280616-single- oversight-framework-consultation.pdf
37
Carter P [Lord Carter of Coles] Operational productivity and performance in English NHS acute hospitals: Unwarranted variations: An independent report for the Department of Health by Lord Carter of Coles London: Department of Health 2016. Available at: https://www.gov.uk/government/uploads/system/uploads/attachment_data/ file/499229/Operational_productivity_A.pdf
38
Kemple T. The CQC inspections: not ‘outstanding’, may be ‘good’, but ‘requires improvement’. Br J Gen Pract: J R Coll Gen Pract Occas Pap 2015; 65(634): 230.
39
Bevan G, Hood C. Have targets improved performance in the English NHS? BMJ 2006; 332(7538): 419-22.
40
Griffiths A, Beaussier AL, Demeritt D, Rothstein H. Intelligent Monitoring? Assessing the ability of the Care Quality Commission’s statistical surveillance tool to predict quality and prioritise NHS hospital inspections. BMJ Qual Saf 2016; 0: 1-11.
41
CQC. CQC jobs: Permanent posts 2015. Available at: https://www.cqc.org.uk/content/ cqc-jobs-permanent-posts - induction
42
CQC. CQC jobs: Secondment information 2016. Available at: https://www.cqc.org.uk/content/ cqc-jobs-secondment-information
43
Maxwell RJ. Dimensions of quality revisited: from thought to action. Qual Health Care 1992; 1(3): 171-7.
44
Wilkinson Y. Quality and safety in healthcare. In: Koutoukidis G, Stainton K, Hughson J, Eds. Tabbner’s Nursing Care: Theory and Practice. 7th ed. Chatswood, NSW, Australia: Elsevier 2017; pp. 136-48.
45
BBC. Norfolk and Norwich University Hospital: CQC inspectors order improvements 2013. Available at: http://www.bbc.co.uk/news/uk-england- norfolk-23067097
46
Wittgenstein L. Philosophical Investigations [translated GEM Anscombe]. 3rd ed. Oxford: Basil Blackwell 1997.
47
Pitcher G, Ed. The Philosophy of Wittgenstein. Englewood Cliffs, NJ: Prentice-Hall 1964.
48
Pears D. The False Prison: A Study of the Development of Wittgenstein’s Philosophy. Oxford: Clarendon Press 1988; Vol. II.
49
Shawver L. On Wittgenstein’s Concept of a Language Game [n.d.]. Available at: http://postmoderntherapies.com/word.html
50
Wittgenstein L. The Blue and Brown Books: Preliminary Studies for the ‘Philosophical Investigations’. 2nd ed. Oxford: Basil Blackwell 1969.
51
Schön D. The Reflective Practitioner: How Professionals Think in Action. New York: Basic Books 1983.
52
Royal College of Nursing Putting quality into the Care Quality Commission in England London: Royal College of Nursing 2016. Available at: https://www2.rcn.org.uk/__data/assets/pdf_file/0006/426885/ Putting_Quality_into_the_Care_Quality_Commission_England.pdf
53
Malcolm N, Ed. Wittgenstein: A Religious Point of View?. London: Routledge 1993.
54
Newman S. Philosophy and Teacher Education: A reinterpretation of Donald A Schön’s epistemology of reflective practice. Aldershot: Ashgate 1999.