Neurology. 2015 Aug 11; 85(6): 543–548.

A guide for understanding disorders of consciousness

Abstract

Uncertainty in diagnosing disorders of consciousness, and specifically in determining whether consciousness has been lost or retained, poses challenging scientific and ethical questions. Recent neuroimaging-based tests for consciousness have cast doubt on the reliability of behavioral criteria in assessing states of consciousness and generate new questions about the assumptions used in formulating coherent diagnostic criteria. The reflex, a foundational diagnostic tool, offers unique insight into these disorders; behaviors produced by unconscious patients are thought to be purely reflexive, whereas those produced by conscious patients can be volitional. Further investigation, however, reveals that reflexes cannot be reliably distinguished from conscious behaviors on the basis of any generalizable empirical characteristics. Ambiguity between reflexive and conscious behaviors undermines the capacity of the reflex to distinguish between disorders of consciousness and has implications for how these disorders should be conceptualized in future diagnostic criteria.

Disorders of consciousness have posed an immense challenge to the clinicians and researchers tasked with diagnosing and managing them. Historically, such disorders were diagnosed on the basis of behavioral measures; patients unable to demonstrate a minimum set of behaviors were considered unconscious. Recent advances in neuroimaging technology allowing for the detection of consciousness in behaviorally unresponsive patients have further complicated the distinction between conscious and unconscious patients. This distinction can be critical in the care of these patients, dictating, in some cases, whether life-sustaining measures are continued. Revised diagnostic criteria accounting for recent technological developments have been proposed but have been criticized as arbitrary.1–3 Revisiting foundational neurologic principles of the existing diagnostic criteria—specifically, the characterization of reflexive behavior—offers a new way of framing this dilemma of consciousness and lends further justification for the revised criteria previously proposed.

IDENTIFYING CONSCIOUSNESS

The 2 disorders of consciousness that best capture the dilemma of detecting consciousness are the vegetative state (VS; alternatively termed the unresponsive wakefulness syndrome4) and the minimally conscious state (MCS). In VS, the patient is awake but unaware of the self or environment5,6 (although notably some variations in this definition exist7,8), whereas in MCS, the patient is awake and retains awareness of the self and environment (although demonstration of such awareness is only intermittent).9 With consciousness operationally defined as awareness of the self and environment, VS and MCS can be conceptually distinguished by the absence or presence of consciousness, respectively. In this way, the boundary between VS and MCS represents the clinical correlate of a more fundamental question: how do we determine when consciousness exists and when it does not?

The presence of consciousness—on which the experience of any life quality depends—factors critically into medical and ethical decision-making. In 1989, the American Academy of Neurology issued a statement claiming that life-sustaining treatment “provides no benefit to patients in a persistent vegetative state,”6 a sentiment reflected in other guidelines claiming that the maintenance of consciousness should be the minimum objective of life-sustaining treatment.10 Thus, for many, determining whether a patient has retained or lost consciousness plays a major role in dictating management.

Since the early 1990s, physicians have relied primarily on behavioral measures to determine whether a patient had or lacked consciousness. Behaviors such as following commands, answering yes-or-no questions, or engaging in purposeful behavior were considered indicative of consciousness, whereas solely reflexive or nonpurposeful movements indicated a lack of consciousness.11 Although these behavioral criteria established the diagnostic distinction between VS and MCS for at least a decade, technological developments soon revealed their imperfections. Studies using fMRI demonstrated that a subset of patients determined to be in a VS based on behavioral criteria could reliably respond to commands through fMRI signals.12,13

These studies illustrated the fallibility of the current behavioral criteria. The concern of overlooking behaviorally unresponsive but conscious patients, as discussed elsewhere,1 prompted investigations into more sensitive tests of consciousness. For example, researchers have explored EEG responses to auditory stimuli,14 PET responses to painful stimuli,15 fMRI responses to voice,16 and correlations between spontaneous fluctuations in fMRI signals17 as tests for consciousness. However, as such tests became more sensitive, they also became more reductive. In doing so, these tests departed from intuitive conceptions of conscious behavior and, in the absence of a definitive neural signature of consciousness, increasingly risked generating false-positive results (i.e., misattributing consciousness to subconscious neural activity).18

We have previously suggested that a revised diagnostic criterion—one that establishes a minimum threshold marker for consciousness and applies to both behavioral and technological assessments—would reduce this risk of false positives. Specifically, we proposed that "interactive capacity," defined as the ability to receive communicated information and intentionally generate a coherent response, should represent this threshold.1 In this proposal, any test claiming to identify consciousness must, at a minimum, demonstrate interactive capacity in the tested patient. Selecting interactive capacity builds on principles of the behavioral criteria, which also largely center around interactive behaviors (e.g., following commands, answering yes-or-no questions),11 and in doing so appeals to intuitive conceptions of conscious behaviors.

Critics of this proposal have argued that selecting interactive capacity as such a threshold is arbitrary,2,3 raising questions at the heart of this proposal: Why choose interactive capacity—a behavioral characteristic that is only intuitively conscious—rather than a behavioral characteristic that actually delineates conscious from unconscious behaviors? And more fundamentally: How can conscious and unconscious behaviors be distinguished? One approach to these questions draws on a clinical tool so mundane and commonplace as to almost escape notice: the reflex.

WHAT IS A REFLEX?

The reflex has become a foundational principle in consciousness testing, extending back to the initial behavioral criteria used to distinguish VS from MCS. Although bodily movements have always been considered possible in VS, according to 1989 guidelines issued by the American Academy of Neurology, “no voluntary action or behavior of any kind is present. Primitive reflexes and vegetative functions that may be present are either controlled by the brainstem or are so elemental that they require no brain regulation at all.” According to more modern guidelines, “MCS is distinguished from VS by the presence of behaviors associated with conscious awareness. In MCS, cognitively mediated behavior is…differentiated from reflexive behavior.”9 The reflex represents the converse of conscious or intentional behaviors; indeed, reflexes can persist even in brain death.19 Thus, identification of the reflex plays a critical role in diagnosing disorders of consciousness: a patient in a VS exhibits only reflexive movements, whereas a patient in an MCS exhibits both reflexive and conscious movements. But what are these reflexes and how can they be identified?

Descriptions of the reflex can be traced as far back as René Descartes in 1649. In “Passions of the Soul,” Descartes20 describes the phenomenon by which “if someone suddenly thrusts his hand in front of our eyes as if to strike us…we still find it difficult to prevent ourselves from closing our eyes.” He claims that

this shows that it is not through the mediation of our soul that [our eyes] close, since this action is contrary to volition…. They close rather because the mechanism of our body is so composed that the movement of the hand towards our eyes produces another movement in our brain, which directs the animal spirits into the muscles that make our eyelids drop.20

In keeping with his dualist philosophy—positing separation of the mind (or "soul") and body—Descartes interpreted reflexive action as a demonstration of bodily behavior (or "animal spirits") occurring independently of the mind.

Modern definitions of reflexes have captured the essence of Descartes' description: an action or movement of the body that happens automatically, or without thinking, as a reaction to something.21 These definitions contain 3 core components: a stimulus, subconscious processing, and a resulting reaction. Classically, the reaction represents a visible bodily movement, as in deep tendon or brainstem reflexes. However, the term reflex has also been used to describe physiologic responses (or responses of the internal organs), such as the baroreflex (by which an elevated blood pressure induces a reduction in heart rate) or the mammalian diving reflex (in which contact of the face with cold water induces a reduction in heart rate and peripheral vasoconstriction).

Perhaps the most important component of the reflex definition is the process mediating conversion of a stimulus into a response. This conversion must occur "automatically" and without "thinking" or "cognitive mediation"—essentially, it must occur below the level of conscious processing. One can be aware of the stimulus and response, but the process by which the former causes the latter occurs subconsciously and without intention and therefore cannot be intervened upon (save for purposeful behaviors that modulate reflexes, such as the Jendrassik maneuver to amplify deep tendon reflexes).

However, with reflexes defined as subconsciously processed reactions to stimuli, the capacity of the reflex to distinguish between VS and MCS becomes problematic. Claiming that the VS diagnosis (a state principally defined by a lack of consciousness) should be identified by reflexes (movements that lack consciousness) is a circular proposition. To diagnose someone as unconscious, their movements must consist entirely of reflexes, or movements occurring beneath conscious intention. But the only way to determine with certainty that a movement occurred without conscious intention is to know whether the mover had conscious intention, rendering the attempted diagnosis futile. One potential way to escape this circularity, and to salvage the reflex as a useful tool for differentiating conscious from unconscious behavior, is to approach the reflex from a different perspective: rather than by its conceptual definition, the reflex could be identified by its empirical characteristics. What characterizes a reflexive response? How can we know a reflexive response when we see it? If these questions are answered, then the reflex can be identified on a purely empirical basis, and from there, one can infer the presence or absence of consciousness.

WHAT CHARACTERIZES THE REFLEX?

One potential way of differentiating reflexive from conscious behaviors is to determine whether the stimulus for the behavior is external (e.g., a tap with a hammer or shine of a light) or internal (e.g., hunger or love), respectively. This distinction appeals to our intuition that reflexes are typically subconscious reactions to external stimuli. There are, however, multiple problems with this proposal.

First, reflexive behavior is not always triggered by an external stimulus. Consider, for example, that VS is characterized by “reflexive crying or smiling,”9 whereas MCS is characterized by “contingent smiling or crying.”9 While the latter occurs in response to emotional stimuli, the former occurs essentially spontaneously. For such affective behaviors, the presence of a provoking external stimulus implies that the behavior is conscious, whereas the absence of a provoking stimulus implies that the behavior is reflexive. This instance illustrates that the presence of an external stimulus is neither necessary nor sufficient for characterizing a reflex.

The second problem stems from the context in which reflex identification is being discussed—namely, in tests for consciousness. Consciousness testing, like any other test, requires a probing stimulus. If reflexes were defined by responses to external stimuli and conscious behaviors, in contrast, occurred only in the absence of external stimuli, then the proposition of testing for conscious behaviors would be paradoxical; the very input of a probing stimulus would preclude the demonstration of conscious behavior. Reminiscent of Heisenberg's uncertainty principle, the test for consciousness would limit the clinician's knowledge of the patient's consciousness. Thus, in order for reflexes to be useful in distinguishing between disorders of consciousness, the external or internal nature of the stimulus cannot be a defining characteristic.

Alternatively, perhaps the reflex can be defined by its consistency. Compared with conscious behaviors, which vary based on the whims of the person, reflexes are typically considered relatively consistent. For example, over time, one's patellar deep tendon reflex should in general produce a more consistent movement than if one is told to kick his or her leg. However, guidelines have also suggested the exact opposite, claiming that cognitively mediated behavior is “reproducible or sustained long enough to be differentiated from reflexive behavior,” the latter of which may occur “on a coincidental basis.”9 Consistency therefore does not appear to be a dependable trait of the reflex.

Lastly, perhaps complexity can differentiate reflexive from conscious behaviors. Guidelines have suggested that "a few observations of a complex response…may be sufficient to determine the presence of consciousness,"9 where conscious behaviors are more complex than reflexive ones. However, emerging technologies have complicated this view. Behaviors elicited with brain stimulation can, in the strictest sense, be considered reflexes: a stimulus is applied to the brain, and the brain subconsciously produces an involuntary reaction. However, these reactions are more complex than those of classic reflexes. Transcranial magnetic stimulation of the motor cortex can produce movements in multiple muscle groups.22 Electrical stimulation of the supplementary motor area can also generate complex motor behaviors (e.g., "pushing" behavior characterized by hand pronation, elbow extension, and shoulder abduction) as well as vocalizations and even the subjective urge to move.23 Although the urge to move is not a behavior per se, the reflexive response need not constitute a visible behavior. Accordingly, just as reflexive responses can include patterns of vasoconstriction and changes in heart rate, intricate patterns of brain activity (measurable with neuroimaging technologies and EEG) in response to painful or auditory stimuli could also be considered a type of reflex.14,15,24 Modern technologies have enabled the provocation and detection of relatively complex reflexive behaviors and physiologic responses, preventing complexity from reliably distinguishing conscious from unconscious behaviors.

WHAT REFLEXES IMPLY ABOUT DIAGNOSIS IN DISORDERS OF CONSCIOUSNESS

Based on the data reviewed, it appears that no empirical characteristics reliably define reflexive behaviors and distinguish them from conscious behaviors. Historically, as demonstrated by Descartes, there was a time in which the external stimuli that evoked reflexive behaviors (e.g., a hand thrust toward the eye, causing a blink) were so far removed from the internal stimuli that motivated conscious behaviors (e.g., a desire, causing a complex behavior) that the former could be conceived as purely mechanistic processes of the body whereas the latter could be attributed to operations of the mind. However, as our understanding of the neural basis of complex behaviors has developed and the technological means of intervening upon and measuring these neural underpinnings have grown more sophisticated, we have gained the ability to reproduce the internal stimuli and provoke the complex responses once strictly relegated to consciousness. The boundaries delineating physiologic from conscious processes have therefore blurred, and the dualistic model separating the body/brain from the mind has fallen from favor. Accordingly, the concept of the reflex as a purely physiologic response completely distinct from mechanisms of conscious behavior reflects the antiquated remnants of dualistic thinking. The scientific developments elucidating and replicating complex behaviors have illustrated that all behaviors, both conscious and reflexive, can be explained in terms of causative neural signals. Whether these provocative neural signals occur in the context of an involuntary reflex (e.g., through brain stimulation) or a conscious impulse is immaterial to the underlying neurobiology, which is similar in both cases.

However, it would be misguided to conclude that there is no difference at all between conscious and reflexive behaviors simply because the underlying neurobiology is shared. The subjective experience of both, for example, vastly differs. The problem is that no empirical method can distinguish between the 2 with complete certainty for any given behavior. The examples of behavior that signify the extremes—such as engaging in a conversation (clearly indicating consciousness) or pupillary constriction in response to light (clearly indicating a reflex)—suggest that our intuitions regarding the perceived intentionality of behaviors form a spectrum, with some behaviors clearly indicating consciousness, others clearly indicating reflexes, and the remainder falling somewhere between.

Without reflexes to strictly distinguish between VS and MCS, the 2 categories become less distinct. Determining the presence or absence of consciousness remains difficult in patients whose behaviors fall in the gray area between conscious and reflexive behaviors. This ambiguity is problematic for many surrogates and clinicians for whom the distinction is binary and critical in guiding decisions. Based on this discussion, however, there do not appear to be generalizable empirical characteristics that delineate conscious from reflexive behaviors. In the absence of such ideal characteristics, adopting a distinguishing criterion such as interactive capacity carries distinct advantages. Within the spectrum between conscious and reflexive behaviors, interactive behaviors frequently fall safely among those intuitively considered intentional and therefore conscious. Perhaps this is because, whereas spontaneous behaviors without context can be ambiguously interpreted, behaviors in coherent response to external cues are more likely to be intentional. The Turing test offers an analogous application of the same intuition. In this test, an examiner types questions into a computer and must determine whether the responses are generated by a human respondent or an automated algorithm.25,26 The Turing test thus assesses an algorithm's ability to simulate human consciousness based on its interactivity. Assessing the consciousness of a patient is conceptually similar: one probes the patient with behavioral and technological stimuli and evaluates whether the responses are sufficiently coherent to signify consciousness.

The Turing test illustrates that no generalizable objective criteria can identify which interactive responses signify consciousness. Rather, the examiner must subjectively determine whether the responses are sufficiently coherent to signify intentionality and therefore consciousness. This element of subjectivity remains present in assessing the interactive capacity of patients; indeed, we define interactive capacity as the ability to receive communicated information and intentionally generate a coherent response. The assessment of intentionality, as discussed, carries inherent uncertainty, but this uncertainty is reduced in the context of interactive capacity. By restricting the range of behaviors that qualify for that assessment—namely, to those that occur as coherent responses to communicated information—the examiner no longer must determine where any given behavior falls along the spectrum between reflex and intention but rather can focus only on those that occur in an interactive context. And because of their adherence to external cues, the proportion of interactive behaviors that can be safely considered intentional far exceeds that proportion among all possible behaviors. Moreover, requiring the intentionality of interactive responses excludes those that are not even potentially intentional (i.e., responses that are inherently nonvolitional even if performed by a conscious individual, such as automatic brain activity in response to voice or pain15,16). Thus, adopting interactive capacity as a threshold marker of consciousness reduces the risk of false positives not only by applying to emerging technological assessments (discussed further elsewhere1) but also by restricting the assessment of intentionality to potentially volitional interactive behaviors.

Although applying the discrete criterion of interactive capacity to a continuous spectrum of conscious and reflexive behaviors reduces false positives, it does risk producing false negatives. (It should be noted that false negatives in disorders of consciousness represent a persistent and more fundamental problem. Whereas MCS is defined by the presence of behaviors, VS is defined by their absence. However, the absence of evidence for consciousness is not evidence of absence; because other conditions can explain the absence of these behaviors (e.g., aphasia, encephalopathy), the absence of consciousness cannot be definitively determined. Our proposal aims to reduce false negatives by redefining the behaviors that signify consciousness but does not solve this more fundamental problem.) However, as discussed, the application of a discrete threshold to a continuous physiologic process can be critical for medical, social, and legal decision-making. Indeed, there are many precedents for doing so despite similarly risking false negatives. For example, people with visual acuity of less than 20/200 are considered legally blind, even though they may retain rudimentary vision. This meaningful but nevertheless arbitrary binary distinction serves a number of important social purposes, such as identifying people prohibited from driving and those eligible for disability support.

We have argued that no generalizable set of empirical characteristics reliably distinguishes reflexive from conscious behavior and that these 2 forms of behavior likely form a spectrum. This spectrum of behaviors can complicate the assessment of consciousness and, in the context of current diagnostic criteria, the distinction between VS and MCS. A diagnostic approach that helps to resolve this distinction across the spectrum of behavioral and technological assessments could facilitate important decisions, which frequently rely on discrete judgments about the presence or absence of a patient's consciousness. Adopting interactive capacity as a minimum threshold for consciousness, a criterion with precedent in the behavioral criteria and strong roots in intuition, offers a dependable foundation for future diagnostic criteria and could improve the ethical management of patients with uncertain levels of consciousness.

GLOSSARY

MCS minimally conscious state
VS vegetative state

AUTHOR CONTRIBUTIONS

David B. Fischer: idea generation, literature search, writing. Robert D. Truog: idea generation, manuscript editing.

STUDY FUNDING

Graduate Student Award from the Mind, Brain, and Behavior Interfaculty Initiative of Harvard University.

DISCLOSURE

The authors have reported no disclosures relevant to the manuscript. Go to Neurology.org for full disclosures.

REFERENCES

1. Fischer DB, Truog RD. Conscientious of the conscious: interactive capacity as a threshold marker for consciousness. AJOB Neurosci 2013;4:26–33.

2. Banja J. Interactive but not conscious; conscious but not interactive: lessons learned from slime molds and Bartleby the Scrivener. AJOB Neurosci 2013;4:40–41.

3. Klincewicz M, Frank L. Consciousness is more complicated than that: theoretical limitations of interactive capacity. AJOB Neurosci 2013;4:38–39.

4. Laureys S, Celesia GG, Cohadon F, et al. Unresponsive wakefulness syndrome: a new name for the vegetative state or apallic syndrome. BMC Med 2010;8:1–4.

5. The Multi-Society Task Force on PVS. Medical aspects of the persistent vegetative state (2). N Engl J Med 1994;330:1572–1579.

6. Executive Board of the American Academy of Neurology. Position of the American Academy of Neurology on certain aspects of the care and management of the persistent vegetative state patient. Neurology 1989;39:125–126.

7. Jennett B, Plum F. Persistent vegetative state after brain damage: a syndrome in search of a name. Lancet 1972;299:734–737.

8. Shewmon D. A critical analysis of conceptual domains of the vegetative state: sorting fact from fancy. NeuroRehabilitation 2004;19:343–347.

9. Giacino JT, Ashwal S, Childs N, et al. The minimally conscious state: definition and diagnostic criteria. Neurology 2002;58:349–353.

10. Rubin EB, Bernat JL. Ethical aspects of disordered states of consciousness. Neurol Clin 2011;29:1055–1071.

11. Bernat J. Chronic disorders of consciousness. Lancet 2006;367:1181–1192.

12. Monti M, Vanhaudenhuyse A, Coleman M, et al. Willful modulation of brain activity in disorders of consciousness. N Engl J Med 2010;362:579–589.

13. Owen A, Coleman M, Boly M, Davis M, Laureys S, Pickard J. Detecting awareness in the vegetative state. Science 2006;313:1402.

14. Faugeras F, Rohaut B, Weiss N, et al. Probing consciousness with event-related potentials in the vegetative state. Neurology 2011;77:264–268.

15. Boly M, Faymonville M, Schnakers C, et al. Perception of pain in the minimally conscious state with PET activation: an observational study. Lancet Neurol 2008;7:1013–1020.

16. Di HB, Yu SM, Weng XC, et al. Cerebral response to patient's own name in the vegetative and minimally conscious states. Neurology 2007;68:895–899.

17. Vanhaudenhuyse A, Noirhomme Q, Tshibanda L, et al. Default network connectivity reflects the level of consciousness in non-communicative brain-damaged patients. Brain 2010;133:161–171.

18. Kobylarz EJ, Schiff ND. Functional imaging of severely brain-injured patients. Neurol Rev 2013;61:1357–1360.

19. Truog RD, Fackler JC. Rethinking brain death. Crit Care Med 1992;20:1705–1713.

20. Descartes R. The Passions of the Soul. Bennett J, editor. Indianapolis, IN: Hackett Publishing; 1989.

22. Pascual-Leone A, Valls-Sole J, Wassermann EM, Hallett M. Responses to rapid-rate transcranial magnetic stimulation of the human motor cortex. Brain 1994;117:847–858.

23. Fried I, Katz A, McCarthy G. Functional organization of human supplementary motor cortex studied by electrical stimulation. J Neurosci 1991;11:3656–3666.

24. Boly M, Faymonville M, Peigneux P, et al. Auditory processing in severely brain injured patients: differences between the minimally conscious state and the persistent vegetative state. Arch Neurol 2004;61:233–238.

25. French R. The Turing test: the first 50 years. Trends Cogn Sci 2000;4:115–122.

26. Stins JF. Establishing consciousness in non-communicative patients: a modern-day version of the Turing test. Conscious Cogn 2009;18:187–192.
