<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0"
    xmlns:dc="http://purl.org/dc/elements/1.1/"
    xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
    xmlns:admin="http://webns.net/mvcb/"
    xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
    xmlns:content="http://purl.org/rss/1.0/modules/content/">
    
    <channel>
    
    <title>Skeptical Briefs - Committee for Skeptical Inquiry</title>
    <link>http://www.csicop.org/</link>
    <description></description>
    <dc:language>en</dc:language>
    <dc:rights>Copyright 2013</dc:rights>
    <dc:date>2013-04-25T16:36:30+00:00</dc:date>    


    <item>
      <title>More Hazards: Hypnosis, Airplanes, and Strongly Held Beliefs</title>
      <pubDate>Thu, 01 May 2003 13:22:00 EDT</pubDate>
	<author>info@csicop.org (<![CDATA[Loren Pankratz]]>)</author>
      <link>http://www.csicop.org/si/show/more_hazards_hypnosis_airplanes_and_strongly_held_beliefs</link>
      <guid>http://www.csicop.org/si/show/more_hazards_hypnosis_airplanes_and_strongly_held_beliefs</guid>
      <description><![CDATA[
        



			<p class="intro">After a single-case history was reported in the psychological literature, I made an unsuccessful attempt to obtain the documents behind the case. However, the adventure provided lessons about why some therapists hold so firmly to certain psychological theories and disdain the critical research.</p>

<blockquote>
<p>Imagine that a Viennese prankster, to amuse his friends, invented the whole business of the id and Oedipus, and made up dreams he had never dreamed and little Hanses he had never met. And what happened? Millions of people were out there, all ready and waiting to become neurotic in earnest. And thousands more ready to make money treating them.</p>
<p>Umberto Eco, <cite>Foucault&rsquo;s Pendulum</cite></p>
</blockquote>

<p>In this magazine, Elizabeth Loftus and Melvin Guyer (2002a, b) <a href="/si/show/who_abused_jane_doe_the_hazards_of_the_single_case_history_part_1/">reviewed a single-case history report</a> that had been hailed as evidence of recovered memory. Psychiatrist David Corwin had captured on videotape the story of the abuse of a six-year-old girl and the recovery, at age seventeen, of her &ldquo;repressed memories.&rdquo; However, serious doubts were raised when Loftus reviewed the court records and interviewed the girl&rsquo;s mother. Here I review another single-case history on recovered memory that appeared in the psychological literature. Although my attempts to obtain the facts were less than successful, the adventure provided some lessons about professional credulity and the power of theories that are formed by personal experience.</p>

<h2>A Case History Report of Repressed Memory</h2>

<p>In 1997, Bertram Karon and Anmarie Widener published an article in <cite>Professional Psychology: Research and Practice</cite> entitled &ldquo;Repressed memories and World War II: Lest we forget!&rdquo; In their article, the authors claimed that there were &ldquo;literally hundreds of documented battlefield neuroses that involved the repression of traumatic combat experiences&rdquo; and that professionals who worked in the Veterans Administration hospitals (now Veterans Affairs hospitals) after WWII frequently saw such patients.</p>

<p>Karon and Widener then described what they identified as a typical combat hysterical neurosis. In their example, a psychoanalytic psychologist identified as Edward Karon<sup><a href="#1return">1</a></sup> treated a veteran with a hysterical paralysis for six months in twice-weekly sessions. At the end of this period, the patient brought his therapist a newspaper clipping that presumably dealt with an airplane crash in which he and the pilot had been injured. The patient reported that he had been a tail gunner in a two-man bomber, selected because he was small enough to fit into the cramped tail gunner&rsquo;s turret. The pilot, however, was over six feet tall and weighed over 200 pounds. Returning from a mission, the patient said that six of the planes in their squadron crashed during landing, raising the suspicion of sabotage.</p>

<p>Because the runway was littered with wreckage, the patient&rsquo;s plane was forced to land in a field. The tail gunner broke his arm, while the pilot broke both legs and was unconscious. Rescuers refused to approach the burning plane because its fuel was ready to explode. However, with his one good arm, the patient managed to drag the pilot, inch by inch, away from the plane. Although his broken arm subsequently healed, his other arm was thereafter paralyzed. Furthermore, he had no conscious memory of the crash or of saving his friend. He was reported to have repressed it.</p>

<p>After recovering his memory in an emotional therapy session, the patient regained partial movement of his paralyzed arm for the first time. Unfortunately, the secondary gains from this paralyzed arm were not sufficiently resolved for him to return to work until after another year of psychoanalytic psychotherapy.<sup><a href="#2return">2</a></sup> The authors concluded that current controversies concerning repressed memories &ldquo;are always discussed without reference to this well-documented body of data.&rdquo; They encouraged mental health professionals to &ldquo;remember their past in order to be effective in the real world.&rdquo; In ways they did not intend, this case history sparked many memories for me because I was well acquainted with stories like these and this style of therapy.</p>

<h2>A Search for More Information</h2>

<p>Events in war are sometimes stranger than fiction. I know, because in my twenty-five years as a Veterans Affairs psychologist I checked the records of nearly every patient who, like this tail gunner, asserted improbable and self-aggrandizing claims. Time and again the stories turned out to be bogus.<sup><a href="#3return">3</a></sup> Students and colleagues of mine quickly learned not to present a report like Karon and Widener&rsquo;s without first obtaining some verification.</p>

<p>The purpose of checking a veteran&rsquo;s story, of course, is not to catch lies but to identify and treat the proper problem. For example, was this man&rsquo;s arm paralyzed at the time of his discharge, and did he receive a Purple Heart? Was he receiving a service-connected disability pension for his symptom? Maybe the war story provided an explanation for his marital and occupational problems. These questions could be answered by consulting the patient&rsquo;s C-file (claim file) or his DD-214.<sup><a href="#4return">4</a></sup> Also, when and where was the newspaper article written? Whether the therapist is a psychoanalyst or a behaviorist, such critical details should always be checked against outside records. Nevertheless, these simple facts are almost never verified, a point I return to later.</p>

<p>I wondered as well what documents were available to Karon for his reconstruction of this case. I believed that the author understood that he would be obligated to provide such information, because the Ethical Principles of Psychologists and Code of Conduct (American Psychological Association 1992) state that &ldquo;After research results are published, psychologists do not withhold the data on which their conclusions are based from other competent professionals who seek to verify the substantive claims through reanalysis. . . .&rdquo;</p>

<p>Thus, in November 1998, I wrote to Dr. Patrick DeLeon, then editor of <cite>Professional Psychology</cite>, to ask for his assistance. My letter was directed to him because Pendergrast, in preparing a response (1998), had repeatedly made specific requests for documentation, which Karon ignored.<sup><a href="#5return">5</a></sup> My first mailing to DeLeon went unheeded, but he responded to my second request by saying 1) that he thought my first letter was merely an &ldquo;FYI&rdquo; needing no reply; 2) that I should write directly to Karon; and 3) that he believed that the ethical code about sharing data applied only to &ldquo;empirical data.&rdquo; I disagreed about the empirical data limitation on the grounds that the spirit of the code has always been to promote the science of psychology by allowing open examination of &ldquo;substantive claims,&rdquo; not merely to recheck t-tests.</p>

<p>Subsequently I wrote to Karon. After he failed to respond to my second request, I provided all my correspondence to the Ethics Office of the American Psychological Association for an opinion. Dr. Dolph M. Printz, the acting director of the Office of Ethics, responded by saying that Dr. Gary R. VandenBos was quite familiar with my concerns, and he had summarized his knowledge of the issues in an enclosed memorandum. Printz trusted that the careful review would assure me &ldquo;that no further action is indicated in this matter.&rdquo;</p>

<p>Surprisingly, the enclosed memorandum by VandenBos was merely a discussion of airplanes. This was clearly not my primary concern and was mentioned only parenthetically in the last paragraph of my letter.</p>

<p>The airplane issue had been raised by software engineer James Giglio in one of the four responses to the Karon article. Giglio (1998) claimed that no such airplane as the one described in the article was ever flown in the European theatre of war, namely a two-man bomber with a tail gunner in a separate tail turret. I wrote Giglio after I read his article, and he provided me with copies of his correspondence with Karon and Widener. Both kept insisting that he was wrong. Widener finally said that she was glad the veteran was no longer around to read Giglio&rsquo;s misguided comments that &ldquo;completely discounted his experience as a soldier and patriot of this country and of democracy.&rdquo; Karon had also suggested several planes, which Giglio showed did not meet the criteria. Karon finally insisted that the <cite>Rand McNally Encyclopedia of Military Aircraft</cite> (Angelucci 1981) contained bombers that qualified. Giglio then asked for specific page numbers because he found nothing that fit. When Karon responded, &ldquo;I do not have time to teach you how to read,&rdquo; their correspondence ended.<sup><a href="#6return">6</a></sup></p>

<p>The authors&rsquo; inability to name an aircraft that fit the patient&rsquo;s description seriously damaged the credibility of their story. Yet in Karon and Widener&rsquo;s (1998) response to the critiques of their article, they said that when they informed Giglio about qualifying planes &ldquo;he then tried to become technical.&rdquo; Even more damaging, they still failed to mention the name of any specific aircraft that they believed might qualify. And although they never acknowledged their article&rsquo;s factual deficiencies, they nonetheless vigorously defended the truth of their story.</p>

<p>Strangely, the VandenBos memorandum focused exclusively on the airplane issue. He said that he had formally sought input from the editor of a WWII aviation magazine, who provided several examples: the Mosquito A-20 and A-26, the Douglas SBD Dauntless, the Curtiss SB2C Helldiver, the British deHavilland Mosquito, the Douglas A-20 Havoc, the Douglas A-26B Invader, and the Bristol Beaufighter. However, Giglio had already pointed out why these specific planes failed to meet the criteria. The Mosquito A-20 and A-26 did not have a separate tail gunner; the Douglas SBD Dauntless and Curtiss SB2C Helldiver were carrier-based dive bombers deployed exclusively in the Pacific; the British deHavilland Mosquito, Douglas A-20 Havoc, and Douglas A-26B Invader each had no tail gunner or tail turret; and the Bristol Beaufighter was a night fighter, not a bomber, and the only models with separate rear-facing turrets (not in the tail) were non-operational prototypes.</p>

<p>VandenBos opined that any distortions of the patient&rsquo;s memory were a &ldquo;side detail&rdquo; and not the essential determinant of the accuracy and validity of the clinical discussion. But memory distortion was precisely the issue, and it was difficult for me to dismiss as &ldquo;side detail&rdquo; the obvious importance of investigating the patient&rsquo;s service record, clinical treatment notes, and any other data that could &ldquo;verify the substantive claims&rdquo; of the article. Then I discovered that VandenBos had co-authored a book with Karon. VandenBos was caught in a conflict of interest. Any hope of finding the facts behind this case was now blocked, and it was clear that many issues remained unresolved.</p>

<h2>Boiling Controversy</h2>

<p>About a year after the article appeared, <cite>Professional Psychology</cite> published four critical reviews and a response by Karon and Widener. The Giglio article has already been discussed. The review by Lilienfeld and Loftus (1998) was about twice as long as the original Karon article because the authors reviewed a broad spectrum of research concerning the evidence for repression, the role of hypnosis and sodium pentothal in the recovery of memories, problems with the specific case example, and the appropriate use of single-case history reports. Piper (1998) focused on the problem of definitions that confuse discussions of repression, and he also reviewed many of the papers cited by Karon and Widener that they believed supported the notion of repression and amnesia. Finally, the article by Pendergrast (1998) described many examples of recovered war traumas that were false.</p>

<p>The response by Karon and Widener (1998) reflects the bitter divide that infects the issue of repressed memories. They began their article with another case history--this one about a rape. &ldquo;Would any serious clinician tell her she is lying because there is no such thing as repression?&rdquo; These reviewers, they charged, were dismissing all WWII patients who suffered trauma and repression as malingerers.</p>

<p>The only point of their article, they insist, was to show that repression exists. &ldquo;Every psychodynamic therapist sees it. The only way he or she could not see it is by assuming that what the patient says are lies.&rdquo; Although they put up a brave fight over the research, the bottom line for Karon and Widener was that clinicians know repression exists, and &ldquo;psychologists who dispute the conclusive existence of repression do not do therapy.&rdquo; They implied that those who deny repression are academics who make money by testifying for the defense in court cases, and they agreed with famous attorney Alan Dershowitz when he stated, &ldquo;The defense has no obligation to tell the truth.&rdquo;</p>

<p>The only hint of a concession in the Karon and Widener article was an acknowledgment that hypnosis and pentothal procedures can be leading and suggestive. Further, &ldquo;Remembered events may or may not be literally true,&rdquo; but then, &ldquo;People in or out of therapy have memories of events that never occur as well as memories of events that did occur, but this fact has nothing to do with our article.&rdquo; This admission, it seems to me, suggests the possibility of a mistaken story by a tail gunner. I can think of several options other than lying and malingering to explain the onset of hysterical symptoms and recovered memories. They were, after all, the ones who brought up the patient&rsquo;s secondary gain--a mark of malingering. Why does a skeptical attitude about repression evoke such distress in some therapists?</p>

<h2>Remembering the Lessons</h2>

<p>I agree with Karon that the lessons of WWII seem to have been forgotten but &ldquo;need to be remembered in order for therapists to be effective in the real world.&rdquo; He was also correct in stating that few living clinical psychologists were working in the VA in the 1940s. However, my generation was trained by them. For example, I interacted several times with Jack Watkins, who was at the Portland Veterans Administration before moving to the University of Montana, where he continued his work in hypnosis and in the multiple personality disorder movement. Further, in 1974, I was president of the Portland Academy of Hypnosis, where month after month speakers shared dramatic case histories that demonstrated the &ldquo;truth&rdquo; of their particular theories.</p>

<p>These therapists promoted a vast array of explanations for the development of symptoms. They focused on childhood events, anniversary reactions, blocked emotions, sexual issues, double binds, internal conflicts, hidden trauma, and, of course, repressed memories. We applauded each theory, knowing that next month our fickle devotion would be overwhelmed by a new series of fascinating case histories. Why did each therapist have a different explanation for the cause of symptoms?</p>

<p>In 1784, the French commission investigating mesmerism found that subjects appeared to know when and where they should have a convulsion only if the mesmerist was present to provide the cues. From the very beginning, patients unwittingly confirmed the theories of their therapists. For example, Zerffi (1871) illustrated the extent of this problem when he said, &ldquo;Hundreds of trustworthy witnesses have asserted facts which we cannot understand&rdquo; (p. 67), namely that somnambulists exhibit clairvoyant powers. Likewise, Grimes (1850) noted that a phrenologist could ask a mesmerized subject to identify the part of her brain where she kept secrets, and she would place her finger exactly on the organ of Secretiveness. Similarly, she could identify other regions of emotions without any understanding of phrenological science. Then, Grimes discovered that phrenologists with different cranial maps obtained information from subjects that confirmed their own individual theories. He concluded: &ldquo;When the subject, the operator, and all concerned, believe in any peculiar notion, the experiments will not contradict that notion, but will confirm it, however absurd it may be&rdquo; (p. 209).</p>

<p>The French neurologist Jean-Martin Charcot confirmed his own theories in a similar manner when he studied hysteria using hypnosis, a process described as &ldquo;one of the most significant misunderstandings in the entire history of medicine&rdquo; (Webster 1995, p. 72). Charcot was Freud&rsquo;s most significant mentor, and this problematic methodology was passed on to the generation of psychiatrists who were convinced that the conversion disorders of WWI servicemen were caused by repressed battle trauma. Like Karon&rsquo;s patient, they were often treated with hypnotic abreaction, in which the patient was expected to re-live the moment of trauma with unrestrained emotions. They believed that memories revealed during abreaction were completely true to the original experience; those who wondered assumed that, even if they were not, the process itself was probably necessary for healing.</p>

<p>For example, Hadfield (1940) believed that most of the soldiers with traumatic neuroses had repressed experiences of being buried or blasted by an explosion. He used hypno-analysis to recover these memories, although sometimes &ldquo;considerable patience and persistence are required to recover the experience&rdquo; (p. 142). In such cases, he recommended telling the patient that he will not leave the room until he has recovered the experience. &ldquo;Such persistence nearly always succeeds.&rdquo;</p>

<p>But from WWII on, the number of psychotherapeutic strategies exploded. This was also true for hypnotic interventions, and many of those innovators traveled through the informal speaking circuit of hypnosis societies that I mentioned above. Martin Orne (1959) provided some insight into why this proliferation was happening. Through a series of diabolically clever experiments, he showed that the hypnotic interaction is such a powerful experience for therapist and subject that both remain unaware of how certain implicit cues guide their process. The subject integrates the expectations of the hypnotist in an attempt to be cooperative, while modifying his own story to fit that expectation. Of course, in some situations the patient&rsquo;s story might be true. However, confabulated reports can be &ldquo;extremely deceiving, as they represent a subjectively real situation, and, therefore, are produced with complete sincerity&rdquo; (Orne 1951, 221).</p>

<p>Unaware of how much they are influencing each other, both therapist and subject become convinced that <em>their</em> theory is true, with the result that they will likely come to view research as contrived or irrelevant to their dynamic experience. Checking the facts seems irrelevant, even confrontational or counter-therapeutic. This powerful subjective experience can lead both parties into false beliefs (Pankratz 2002).</p>

<p>During the Vietnam War, conversion disorders were seldom encountered, as were repressed memories, and abreactive treatments became a quaint historical artifact. The effects of trauma were now expressed as symptoms of avoidance and intrusion, with flashbacks as a marker.<sup><a href="#7return">7</a></sup> Because this war was unpopular, some suggested that most who participated would have symptoms independent of any constitutional vulnerability--if not now, then delayed. Posttraumatic stress disorder (PTSD) entered the diagnostic manual as a natural adaptation to extraordinary adverse situations (Yehuda and McFarlane 1995).</p>

<p>In 1983, Landy Sparr and I were the first to show how easily this new disorder could be feigned. Nevertheless, PTSD became a wildly popular research enterprise, and in their enthusiasm most researchers failed to check their subjects&rsquo; claims or to consider more mundane explanations for their symptoms.<sup><a href="#8return">8</a></sup> Like patients who told their therapists what they wanted to hear, research subjects validated experimenters&rsquo; hypotheses (Orne 1962).</p>

<p>During the twenty years that I refereed papers submitted to the <cite>American Journal of Psychiatry</cite>, I discovered that many authors merely gathered evidence for what they believed was true about symptoms and the underlying trauma. Fortunately, editors usually understood my skepticism, but it was of great help when Southwick and colleagues (1997) showed that the memories of veterans of Operation Desert Storm were highly inconsistent when questioned one month after combat and then again two years later. Most disturbing was the amplification of recall of traumatic events: subjects changed their reports to say that they had seen others killed or wounded, that their unit had been ambushed, or that they had encountered booby traps or mines. The authors concluded that &ldquo;If memories of combat are inconsistent, then the relationship between PTSD and combat exposure would be a tenuous one.&rdquo; An accompanying editorial frankly admitted that no one now knows what posttraumatic stress disorder really is (Hales and Zatzick 1997).</p>

<p>But careful research testing competing explanations has shown us how far we have drifted off course. The vast majority of people exposed to toxic events do not subsequently experience any long-term disorder, and delayed responses are extremely rare. Both children and adults, it turns out, are amazingly resilient in the long run to trauma and unfavorable environments (Bowman 1997; Masten 2001). Pre-existing personal vulnerabilities are more predictive of outcome than the event itself, just as the DSM-I suggested (Yehuda and McFarlane 1995). Finally, B.G. Burkett and Glenna Whitley (1998) provided compelling evidence that Vietnam veterans are better educated, have a lower suicide rate, have a higher employment rate, are underrepresented in prison populations, and have a lower homelessness rate than those who did not serve. They suggested that the VA is not treating posttraumatic stress disorder; it is teaching it.</p>

<h2>Conclusions</h2>

<p>In 1781, Mesmer fled Paris in disappointment and fury because the commission appointed to investigate him was not interested in the personal experiences of his patients but in whether there was evidence for his underlying assumption of animal magnetism. In the 1880s, Charcot ordered doubters out of his hospital when they questioned the value of his Tuesday lectures. In the twentieth century, psychiatrists disdained the idea of checking the reality of abreactions and self-reported trauma. As a result, posttraumatic stress disorder disability pensions may now cost taxpayers $2 billion a year, and we must face the possibility that two decades of posttraumatic stress disorder research, all based on dubious self-reports, may be useless.</p>

<p>In the single-case history report investigated by Loftus, small inconsistencies were ignored by professionals who were overwhelmingly convinced by the emotional response of the subject. When Loftus looked for all the facts, she became the object of some serious harassment (Tavris 2002). James Giglio was accused of being unpatriotic when he asked for information, and the American Psychological Association would rather talk to aviation experts than acknowledge whether or not any documents support a repressed memory report.<sup><a href="#9return">9</a></sup></p>

<p>From these generations of neglected critical questioning emerged an eagerness to treat recovered memories, multiple personality disorders, and traumas of every sort. The disheartening news is that we have yet to discover an effective treatment for those who really suffer from chronic posttraumatic stress (Shalev et al. 1996) or from the acute effects of trauma. Litz and colleagues (2002) reviewed six studies of early interventions for acute trauma that they judged to have sound methodology. In all instances, psychological debriefing failed to promote change to a greater degree than no intervention at all, and in two studies the symptoms of treated victims became worse over time. While society demands that mental health professionals help, sufferers are likely to be better off relying on their own natural support systems.</p>

<p>I believe psychologists have a responsibility to provide safe and effective treatments to those who use our services. Karon and I agree on one thing: Mental health professionals need to remember their past in order to be effective in the real world.</p>



<h2>Notes</h2>
<ol>
  <li><a name="1return"></a>Bertram Karon told Beth Loftus that Edward was his brother who had died about twenty years previously.</li>
  <li><a name="2return"></a> I published a single-case report describing two sessions of hypnosis to treat a similar hysterical paralysis (Pankratz 1979). My point was that a face-saving strategy can avoid a struggle over the etiology of symptoms, and it is not necessary that the paradigm fit the facts to be effective.</li>
  <li><a name="3return"></a> See, for example, Pankratz 1990, 1998; Pankratz, Hickam, and Toth 1989; Pankratz and Jackson 1994; Pankratz and Kofoed 1988; Pankratz and Lipkin 1978; and Pankratz and McCarthy 1986.</li>
  <li><a name="4return"></a> The DD-214 is the veteran&rsquo;s discharge document that provides a general review of the individual&rsquo;s military history. The DD-214 is now so commonly forged, however, that it should no longer be considered a reliable document.</li>
  <li><a name="5return"></a> Interested readers can obtain a copy of this correspondence from Mr. Pendergrast at markp@nasw.org.</li>
  <li><a name="6return"></a> Interested readers can obtain a copy of this correspondence from Mr. Giglio at jgiglio@nova.umuc.edu.</li>
  <li><a name="7return"></a> Jones et al. (in press) examined symptoms of UK servicemen from 1854 to the present. They concluded that symptoms of stress have changed dramatically over time and that PTSD (as described in the diagnostic manual) is a culture-bound syndrome.</li>
  <li><a name="8return"></a> My favorite example is from the National Vietnam Veteran Readjustment Study (NVVRS), research that consumed four years and $9 million (Kulka et al. 1988). Six women in the study claimed that their stress was caused by being a prisoner of war. Not one of the many researchers involved in the study apparently realized that no American military woman ever became a POW in Vietnam.</li>
  <li><a name="9return"></a> The American Psychological Association recently was accused of backing away from some controversial scientific findings. To its credit, it devoted an issue of the American Psychologist to the whole affair (see Lilienfeld 2002).</li>
</ol>

<h2>References</h2>
<ul>
  <li>American Psychological Association. 1992. Ethical principles of psychologists and code of conduct. American Psychologist 47: 1597-1611.</li>
  <li>Angelucci, E. 1981. The Rand McNally Encyclopedia of Military Aircraft, 1914-1980. Chicago: Rand McNally.</li>
  <li>Bowman, M. 1997. Individual differences in posttraumatic response. Mahwah N.J.: Erlbaum.</li>
  <li>Burkett, B.G., and G. Whitley. 1998. Stolen Valor. Dallas: Verity Press.</li>
  <li>Giglio, J.C. 1998. A comment on World War II repression. Professional Psychology: Research and Practice 29: 470.</li>
  <li>Grimes, J.S. 1850. Etherology, and the phreno-philosophy of mesmerism and magic eloquence. Boston/London: James Munroe/Edward T. Whitfield.</li>
  <li>Hadfield, J.A. 1940. Treatment by suggestion and hypno-analysis. Chapter in E. Miller, The Neuroses in War. London: Macmillan.</li>
  <li>Hales, R.E., and D.F. Zatzick. 1997. What is PTSD? American Journal of Psychiatry 154: 143-144.</li>
  <li>Jones, E. et al., in press. Flashbacks and post-traumatic stress disorder: The genesis of a twentieth-century disorder. British Journal of Psychiatry.</li>
  <li>Karon, B., and A. Widener. 1997. Repressed memories and World War II: Lest we forget! Professional Psychology: Research and Practice 28: 338-340.</li>
  <li>&mdash;. 1998. Repressed memories: The real story. Professional Psychology: Research and Practice 29: 482-487.</li>
  <li>Kulka, R.A., et al. 1988. Trauma and the Vietnam War Generation. New York: Brunner/Mazel.</li>
  <li>Lilienfeld, S.O. 2002. When worlds collide. American Psychologist 57: 176-188.</li>
  <li>Lilienfeld, S.O., and E.F. Loftus. 1998. Repressed memories and World War II: Some cautionary notes. Professional Psychology: Research and Practice 29: 471-475.</li>
  <li>Litz, B.T., et al. 2002. Early intervention for trauma: current status and future directions. Clinical Psychology: Science and Practice 9: 112-34.</li>
  <li>Loftus, E.F., and M.J. Guyer. 2002a. <a href="/si/show/who_abused_jane_doe_the_hazards_of_the_single_case_history_part_1/">Who abused Jane Doe? The hazards of the single case history Part 1</a>. Skeptical Inquirer 26(3): 24-32.</li>
  <li>&mdash; 2002b. <a href="/si/show/who_abused_jane_doe_the_hazards_of_the_single_case_history_part_2/">Who abused Jane Doe? The hazards of the single case history Part 2</a>. Skeptical Inquirer 26(4): 37-40.</li>
  <li>Masten, A. 2001. Ordinary magic: Resilience processes in development. American Psychologist 56: 227-38.</li>
  <li>Orne, M.T. 1951. The mechanism of hypnotic age regression: An experimental study. Journal of Abnormal and Social Psychology 46: 213-225.</li>
  <li>&mdash;. 1959. The nature of hypnosis: Artifact and essence. Journal of Abnormal and Social Psychology 58: 277-298.</li>
  <li>&mdash;. 1962. On the social psychology of the psychological experiment: With particular reference to demand characteristics and their implications. American Psychologist 17: 776-783.</li>
  <li>Pankratz, L. 1979. Procedures for the assessment and treatment of functional sensory deficits. Journal of Consulting and Clinical Psychology 47: 409-410.</li>
  <li>&mdash;. 1990. Continued appearance of factitious posttraumatic stress disorder. American Journal of Psychiatry 147: 811-812.</li>
  <li>&mdash;. 1998. Patients Who Deceive. Springfield, Illinois: Charles C. Thomas.</li>
  <li>&mdash;. 2002. Demand characteristics and the development of dual, false belief systems. Prevention and Treatment 5: Article 39.</li>
  <li>Pankratz, L., D. Hickam, and S. Toth. 1989. The identification and management of drug-seeking behavior in a medical center. Drug and Alcohol Dependence 24: 115-118.</li>
  <li>Pankratz, L., and J. Jackson. 1994. Habitually wandering patients. New England Journal of Medicine 331: 1752-1755.</li>
  <li>Pankratz, L., and L. Kofoed. 1988. The assessment and treatment of geezers. Journal of the American Medical Association 259: 1228-1229.</li>
  <li>Pankratz, L., and J. Lipkin. 1978. The transient patient in a psychiatric ward: Summering in Oregon. Journal of Operational Psychiatry 9: 42-47.</li>
  <li>Pankratz, L., and G. McCarthy. 1986. The ten least wanted patients. Southern Medical Journal 79: 613-620.</li>
  <li>Pendergrast, M. 1998. Response to Karon and Widener (1997). Professional Psychology: Research and Practice 29: 479-481.</li>
  <li>Piper, A. 1998. Repressed memories from World War II: Nothing to forget. Examining Karon and Widener&rsquo;s (1997) claim to have discovered evidence for repression. Professional Psychology: Research and Practice 29: 476-478.</li>
  <li>Shalev, A.Y., O. Bonne, and S. Eth. 1996. Treatment of posttraumatic stress disorder: A review. Psychosomatic Medicine 58: 165-182.</li>
  <li>Southwick, S.M., et al. 1997. Consistency of memory for combat-related traumatic events in veterans of Operation Desert Storm. American Journal of Psychiatry 154: 173-177.</li>
  <li>Sparr, L., and L. Pankratz. 1983. Factitious posttraumatic stress disorder. American Journal of Psychiatry 140: 1016-1019.</li>
  <li>Tavris, C. 2002. <a href="/si/show/high_cost_of_skepticism/">The high cost of skepticism</a>. Skeptical Inquirer 26(4): 41-44.</li>
  <li>Webster, R. 1995. Why Freud Was Wrong. New York: Basic Books.</li>
  <li>Yehuda, R., and A.C. McFarlane. 1995. Conflict between current knowledge about posttraumatic stress disorder and its original conceptual basis. American Journal of Psychiatry 152: 1705-1713.</li>
  <li>Zerffi, G.G. 1871. Spiritualism and Animal Magnetism. London: Robert Hardwicke.</li>
</ul>




      
      ]]></description>
    </item>

    <item>
      <title>How Not To Review Mediumship Research</title>
      <pubDate>Thu, 01 May 2003 13:22:00 EDT</pubDate>
	<author>info@csicop.org (<![CDATA[Gary E. Schwartz]]>)</author>
      <link>http://www.csicop.org/si/show/how_not_to_review_mediumship_research</link>
      <guid>http://www.csicop.org/si/show/how_not_to_review_mediumship_research</guid>
      <description><![CDATA[
        



			<p>Most rational scientists agree that the credibility and integrity of a review of a body of research depend on its including all the important information, not just the reviewer&rsquo;s favored information. Ray Hyman&rsquo;s review &ldquo;<a href="/si/show/how_not_to_test_mediums_critiquing_the_afterlife_experiments/">How Not To Test Mediums</a>&rdquo; (January/February 2003) is a textbook example of the selective ignoring or dismissing of historical, procedural, and empirical facts to fit one&rsquo;s preferred interpretation. The result is an inaccurate, mistaken, and biased set of conclusions about the current data.</p>
<p>Hyman is a distinguished professor emeritus from the Department of Psychology at the University of Oregon, who has had a longstanding career as a skeptic focused on uncovering potential flaws in parapsychology research. Hyman is well skilled in carefully going through the conventional checklist of potential sources of experimental errors and limitations in research designs.</p>
<p>Hyman&rsquo;s overall appraisal of the research conducted to date is implied by his conclusion: &ldquo;Probably no other extended program in psychical research deviates so much from accepted norms of scientific methodology as does this one.&rdquo;</p>
<p>Is Hyman&rsquo;s summary conclusion based upon a thorough review of the total body of research? Or does it reflect the systematic ignoring of important historical, procedural, and empirical facts--a cognitive bias used by the reviewer in order to maintain his belief that the phenomenon in question is impossible? As I document below, Hyman resorts to (consciously and/or unconsciously) selectively ignoring important information that is inconsistent with his personal beliefs.</p>
<p>Selective ignoring of facts is not acceptable in science. It reflects a bias that obviates the purpose of research and disallows new discoveries. I have made the statement that the survival of consciousness hypothesis does account for the totality of the research data to date. Of course, this does not make the survival hypothesis the only or correct hypothesis--my statement reflects the status of the evidence to date, not necessarily the truth about the underlying process. This is why more research is needed.</p>
<p>Note that I do not use the word &ldquo;believe&rdquo; in relationship to the statement. This is not a belief. It is an empirical observation derived from experiments.</p>
<p>It is correct that some of the single-blind and double-blind studies have weaknesses--we discuss the experimental limitations at some length in our published papers as well as in <cite>The Afterlife Experiments</cite>. However, these weaknesses do not justify dismissing the totality of the data as mistaken or meaningless. Quite the contrary, an honest and accurate analysis reveals that the data, in total, deserve serious consideration.</p>
<p>Our research presents all the findings--the hits and the misses, the creative aspects of the designs and their limitations--so that the reader can make an accurate and informed decision. What we strive for is seeking the truth as reflected in Harvard&rsquo;s motto &ldquo;Veritas.&rdquo;</p>
<p>I appreciate Hyman&rsquo;s effort to outline some of the possible errors and limitations in the mediumship experiments discussed in <cite>The Afterlife Experiments</cite>. However, as Hyman emphasizes in his review, I do &ldquo;strongly disagree&rdquo; with him about his interpretations. The two fundamental disagreements I have with Hyman&rsquo;s arguments are:</p>
<ol>
<li>Hyman has chosen to ignore numerous historical, procedural, and empirical facts that are inconsistent with his interpretive descriptions of our experiments; and</li>
<li>Hyman has chosen not to acknowledge the totality of the findings following Occam&rsquo;s heuristic principle as a means of integrating the total set of findings collected to date.</li>
</ol>
<p>Space precludes my providing a detailed and thorough commentary here illustrating how pervasively Hyman ignores and omits important information. (An extensive commentary has been published on various Web sites, including <a href="http://www.openmindsciences.com/">www.openmindsciences.com</a>.) Four samples of important ignored facts are provided below.</p>
<h2>Selective Ignoring of Historical, Procedural, and Empirical Facts</h2>
<p><strong>Veritas 1:</strong> In his review, Hyman failed to mention the important historical fact that our mediumship research actually began with double-blind experimental designs. For example, the published experiment referred to in <cite>The Afterlife Experiments</cite> as &ldquo;From Here To There and Back Again&rdquo; with Susy Smith and Laurie Campbell was completed almost a year before we conducted the more naturalistic multi-medium/multi-sitter experiments involving John Edward, Suzanne Northrop, George Anderson, Anne Gehman, and Laurie Campbell. The early Smith-Campbell double-blind studies did not suffer from possible subtle visual or auditory sensory leakage or rater bias--and strong positive findings were obtained.</p>
<p>Our decision to subsequently conduct more naturalistic designs (which are inherently less controlled) was made partly for practical reasons (e.g., developing professional trust with highly visible mediums) and partly for scientific ones (e.g., we wished to examine under laboratory conditions how mediumship is often conducted in the field).</p>
<p>Conclusion: Hyman makes a factually erroneous criticism when he reports that double-blind experiments were initiated only late in our research program, and therefore makes a serious interpretative mistake when he decides that all the early data can be dismissed because they were not conducted double-blind.</p>
<p><strong>Veritas 2:</strong> In an exploratory double-blind long distance mediumship experiment where George Dalzell (GD) was one of six sitters and Laurie Campbell (LC) was the medium, Hyman states &ldquo;because nothing significant was found, the results do not warrant claiming a successful replication of previous findings.&rdquo;</p>
<p>However, Hyman minimizes the fact that the number of subjects in this exploratory experiment was small (<em>n</em>=6). More importantly, Hyman fails to cite an important conclusion that we reached in the discussion: &ldquo;If the binary 66 percent figure approximates (1) LC&rsquo;s actual ability to conduct double-blind readings, coupled with (2) the six sitters&rsquo; ability, on the average, to score transcripts double-blind, the 66 percent figure would require only an <em>n</em> of 25 sitters to reach statistical significance (e.g., <em>p</em>
&lt; .01).&rdquo;</p>
<p>Hyman fails to mention that NIH, for example, requires that investigators who apply for research grants calculate statistical power and sample size to determine what <em>n</em> is required to obtain a statistically significant result. This is accepted scientific practice and is required for obtaining NIH funding.</p>
<p>Conclusion: Hyman would rather dismiss the fact that the highly accurate ratings obtained in the single-blind published study for GD were indeed replicated in the double-blind published study than admit the possibility that individual differences in sitter characteristics are an important and genuine factor in mediumship research.</p>
<p><strong>Veritas 3:</strong> It is curious that among the many examples of readings provided in <cite>The Afterlife Experiments</cite>, one early subset (cluster/pattern) of facts happened to fit Hyman nicely. It is true that mention of the &ldquo;Big H,&rdquo; a &ldquo;father-like figure,&rdquo; and an &ldquo;HN sound&rdquo; would fit Hyman&rsquo;s father as it did the sitter&rsquo;s husband mentioned in the book.</p>
<p>Hyman chose not to report the fact that many other pieces of specific information also reported for the &ldquo;Big H&rdquo; did <em>not</em> fit Hyman but did fit the sitter precisely. Moreover, Hyman consistently failed to report scores of examples from readings reported verbatim in the book that were highly unusual and unique to individual sitters (e.g., John Edward seeing a deceased grandmother having two large poodles, a black one and a white one, and the white one &ldquo;tore up the house&rdquo;).</p>
<p>Conclusion: The reason Hyman failed to mention these numerous examples is that they contradict the conclusion Hyman chose to accept--that the information, by chance, could fit multiple sitters--an erroneous conclusion that can be reached only if we do what Hyman did and accept the information selectively.</p>
<p><strong>Veritas 4:</strong> Hyman&rsquo;s conclusion that experienced cold readers can readily replicate the kinds of specific information obtained under the conditions of our experiments is mistaken at best and deceptive at worst.</p>
<p>Under experimental conditions where (a) professional cold readers do not know the identity of the sitters (i.e., cheating is ruled out), and (b) cold readers are not allowed to see or speak with the sitters (i.e., cueing and feedback is ruled out), it is (c) impossible for cold readers to use whatever pre-obtained sitter specific information they might have obtained, and (d) impossible for cold readers to use their feedback tricks to help them get information from the sitters.</p>
<p>At the two-day meeting I convened in Los Angeles of seven highly experienced professional mentalist magicians and cold readers, they all agreed that they could not apply their conventional mentalist tricks under these strict experimental conditions. However, a vocal subset (Hyman was one of the three) made the unsubstantiated claim that if they had a year or two to practice, they might be able to figure out a way to fake what the mediums were doing.</p>
<p>My response to this vocal subset was simple: &ldquo;Show me.&rdquo; Just as I don&rsquo;t take the claims of the mediums on faith, I don&rsquo;t take the claims of the magicians on faith either. I am a researcher. Mentalist magicians who make these claims will have to &ldquo;sit in the research chair&rdquo; and show us that they can do what they claim they can do.</p>
<p>Thus far, the few cold readers who have made these claims have refused to be experimentally tested. They have been unwilling to demonstrate in the laboratory that they can do what the mediums do under these experimental conditions; and they have been unwilling to demonstrate at a later date that their performance can improve substantially with practice.</p>
<p>Conclusion: The claim that cold reading can account for the research findings is not supported when the experimental procedures are honestly taken into account.</p>
<h2>Failure to Integrate Information and Appreciate the Process of Discovery</h2>
<p>In most areas of science, no single experiment is perfect or complete. Different experiments address different conditions and different alternative explanations to different degrees. The challenge is to connect the dots of the available data and integrate the complex set of findings using the fewest number of explanations (i.e., Occam&rsquo;s razor).</p>
<p>Hyman reveals in his review that he learned as a teenager that it was easy for him to fool many people with palm reading. It is also quite easy to fool many people with fake mediumship, as anyone trained in cold reading will tell you. I have studied a number of books on cold reading and have taken some classes on cold reading myself. However, just because it is possible sometimes to be fooled (especially by the masters of magic) doesn&rsquo;t mean that everyone is fooling you.</p>
<p>Hyman reluctantly agrees that it is improbable that the totality of our findings can be explained by fraud. As a result, his preference is to propose that the set of findings collected to date must involve a complex set of subtle cues providing information in some studies, cold reading techniques being used in some studies, rater bias providing inflated scores in some studies, and chance findings in some studies. The idea that mediums might be obtaining anomalous information that can most simply and parsimoniously be explained in terms of the continuance of consciousness is presumed categorically to be false by Hyman until proven otherwise.</p>
<p>I make no such categorical assumptions, one way or the other. To me the question of whether or not mediums are obtaining anomalous information is a purely scientific one, to be revealed through a program of systematic research. Such research must be conducted by multiple laboratories. The reason for publishing findings, as they emerge, is to encourage other investigators to conduct their own experiments, and then integrate the totality of the findings.</p>
<p>However, the truth is, it is impossible to integrate the totality of the findings in any area of science if one selectively (consciously or unconsciously) ignores those specific findings that do not fit one&rsquo;s preferences or biases.</p>
<h2>Scientific Integrity and Changing One&rsquo;s Beliefs</h2>
<p>I admit, quite adamantly, that I do have one fundamental bias--my bias is to use the scientific method to discover the truth, whatever it is. Discovering the truth cannot be achieved through selective reporting of history, procedures, and data.</p>
<p>So what is the truth at the present time, based upon the available data? When the totality of the history, procedures, and findings to date are examined honestly and comprehensively--not selectively sampled to fit one&rsquo;s particular theoretical bias--something anomalous appears to be occurring in the mediumship research, at least with a select group of evidence-based mediums.</p>
<p>Over and over, from experiment to experiment, findings have been observed that deserve the term extraordinary. In our latest double-blind, multi-center experiments, stable individual differences in sitters have been observed that replicate across laboratories and experiments. The observations are not going away--even with multi-center, double-blind testing.</p>
<p>Hyman once told me, &ldquo;I have no control over my beliefs.&rdquo; When I asked him what he would conclude if a perfect large-sample, multi-center, double-blind experiment were conducted, his response was, &ldquo;I would want to see your major multi-center, double-blind experiment replicated a few times by other centers before drawing any conclusions.&rdquo;</p>
<p>This conversation is revealing psychologically. Until multiple perfect experiments are performed and published, Hyman would rather believe that the totality of the findings must be due to some combination of fraud, cold reading, rater bias, experimenter error, or chance--even if this requires that he selectively ignores important aspects of the history, designs, and findings in order to hold on to his belief that he (or we) are being &ldquo;fooled.&rdquo;</p>
<p>Why spend the time and money conducting multiple multi-center, double-blind experiments unless there are sufficient theoretical, experimental, and social reasons for doing so?</p>
<p>The critical question is, &ldquo;Is it possible that, consistent with the actual totality of the data collected to date--viewed historically (e.g., the observations of William James) as well as across disciplines (e.g., from anthropology to astrophysics)--future research may lead us to the conclusion that consciousness is intimately related to energy and information, and that consciousness, as an expression of dynamically patterned energy and information, persists in space like the light from distant stars?&rdquo;</p>
<p>This is ultimately an empirical question; it will be answered by data, one way or the other. If positive data are obtained--and I emphasize if--accepting the data will require that we be able to change our beliefs as a function of what the data reveal. <cite>The Afterlife Experiments</cite> was written to encourage people to keep an open mind about what the future research may reveal.</p>
<h2>Epilogue: What is a Magazine&rsquo;s Responsibility?</h2>
<p>If the <cite>Skeptical Inquirer</cite> wishes to be viewed as a credible publication, more like the Philadelphia Inquirer than the National Enquirer, it should take responsibility for fact checking its articles and correcting mistakes caused by simple errors and/or the selective ignoring of important information.</p>
<p>For example, Hyman&rsquo;s review begins by stating that I was a professor at Yale University for twenty-eight years--the fact is, I was at Yale for twelve years. If the <cite>Skeptical Inquirer</cite> had not chosen to keep Hyman&rsquo;s review secret, and had asked me to fact check Hyman&rsquo;s review, I would gladly have done so, thereby enabling both the magazine and the reviewer to correct at least the obvious errors of fact. Clearly, little mistakes, compounded by big mistakes, do not make for a credible publication or review.</p>
<p>I am taking a strong position about accuracy of reporting here not because of the ultimate validity of the survival hypothesis (i.e., whether it is true or not, since that is an experimental question) but because of the nature of the scientific reviewing process itself.</p>
<p>The selective ignoring and omission of important information cannot be condoned in either reviewing or publishing. It must be exposed and understood, regardless of the specific research area that is being reviewed or the specific person doing the reviewing.</p>
<p>Note that my argument is not with Hyman as a person, nor with the <cite>Skeptical Inquirer</cite> as a publication. My concern is about the process by which Hyman has written his review, and the responsibility of <cite>Skeptical Inquirer</cite> to decrease the likelihood that this kind of mistaken review will be published in the future. There is a bigger lesson here. It is worth considering, and correcting.</p>
<h2>Acknowledgments</h2>
<p>I thank a number of my colleagues who have graciously taken the time to provide me with useful feedback about this commentary. They include Peter Hayes, Ph.D., Katherine Creath, Ph.D., Stephen Grenard, Ph.D., Donald Watson, M.D., Emily Kelly, Ph.D., Lonnie Nelson, M.A., and Montague Keen. The comments provided here are those of the author, not necessarily those of my colleagues.</p> 




      
      ]]></description>
    </item>

    <item>
      <title>Hyman&amp;rsquo;s Reply to Schwartz</title>
      <pubDate>Thu, 01 May 2003 13:22:00 EDT</pubDate>
	<author>info@csicop.org (<![CDATA[Ray Hyman]]>)</author>
      <link>http://www.csicop.org/si/show/hymans_reply_to_schwartz</link>
      <guid>http://www.csicop.org/si/show/hymans_reply_to_schwartz</guid>
      <description><![CDATA[
        



			<p>I cannot, of course, respond in detail within the allotted space to each of <a href="/si/show/how_not_to_review_mediumship_research/">Schwartz&rsquo;s arguments</a>. Instead, I will comment on his major points and conclude with a general reaction to his rebuttal.</p>
<blockquote><p>1. &quot;Hyman resorts to . . . selectively ignoring important information that is inconsistent with his personal beliefs.&quot;</p></blockquote>
<p>In preparing my critique of his research program, I not only read The Afterlife Experiments carefully, I also scrutinized in detail every report of his research that was available. It was not possible to discuss each separate piece of information in my critique. I took each item into account, however, in making my assessment of the research. I chose to focus my discussions on those items that Schwartz and his colleagues had emphasized as the strongest outcomes amongst their findings. I have refereed and reviewed research reports for more than fifty years for many of the major scientific publications and for major granting agencies. I applied the same standards to my evaluation of the afterlife experiments that I have used in my other assessments.</p>
<blockquote><p>2. &quot;. . . Hyman failed to mention the important historical fact that our mediumship research actually began with double-blind experimental designs.&quot;</p></blockquote>
<p>As his example he refers to his experiment with the mediums Susy Smith and Laurie Campbell that &quot;was completed almost a year before we conducted the more naturalistic multi-medium/multi-sitter experiments involving John Edward, Suzanne Northrop, George Anderson, Anne Gehman, and Laurie Campbell. The early Smith-Campbell double-blind studies did not suffer from possible subtle visual or auditory sensory leakage or rater bias &mdash; and strong positive findings were obtained.&quot;</p>
<p>This is a peculiar example to use as a model of a controlled, double-blind experiment. The experiment involved having Susy Smith, designated as Medium One, apparently contact four deceased persons: her own mother, William James, Linda Russek&rsquo;s father, and Schwartz&rsquo;s father. Smith made a drawing for each of these departed individuals, supposedly with their input. She also made a &quot;control&quot; drawing. Laurie Campbell, designated as Medium Two, was then requested to independently attempt to contact these departed individuals and, using the information obtained from them, to try to match each drawing to the associated departed individual. Campbell attempted to contact the departed entities during two sessions in the presence of three experimenters. Campbell is described as being &quot;blind&quot; to the personalities of the four departed individuals. However, Schwartz, who was not blind to the personalities of these entities, was not only present during these sessions but actively trying to convey this information (through &quot;telepathy&quot;) to Campbell. This unnecessary blunder compromises whatever blinding would have existed between Medium Two and the personalities of the departed individuals. No psychic investigator would be surprised if Laurie Campbell came up with some correct information, such as the gender and other descriptors of the departed individuals, under these conditions.</p>
<p>Another defect of this phase of the experiment is that no provisions were made to use a systematic and objective method for assessing the accuracy of Medium Two&rsquo;s descriptions. The evaluation of the information for this stage of the experiment was subjective.</p>
<p>During the sittings with Medium Two, all the experimenters were blind as to which drawing was associated with which departed individual. (Although it is plausible that one might be able to make some reasonable guesses, given the characters of each of the departed individuals, as to which type of drawing would go with each one.) Unfortunately, the experimenters then make another serious, and completely unnecessary, blunder when it came time to see if Medium Two could accurately match the drawings with the appropriate individual. The experimenters brought Medium Two and Medium One together. Medium One then displayed the drawings she had made to represent each individual. Medium Two then attempted to match the drawings to the appropriate sources in the presence of Medium One. Ironically, the experimenters openly admit that this could allow clues about the correct matching through the &quot;Clever Hans&quot; phenomenon. They dismiss this as a possibility because Campbell was able to correctly match only one of the five drawings to its appropriate source.</p>
<p>At this point in the experiment the report becomes especially murky. Presumably, the experiment has failed. However, the experimenters inexplicably have Medium Two try again to match the drawings to their appropriate source. This second attempt is made after she is shown an explicit summary of her comments about the pictures and the departed individuals. Campbell correctly matches the five drawings (including the control) in this second attempt. No reason is given for giving the medium two tries at matching the drawings, nor do the experimenters tell us how they justify asking the medium to redo her matching. Probably these and other questionable aspects of the procedure are moot given that the possibility of blinding was compromised.</p>
<p>Schwartz and his colleagues, in their published paper, describe this as an &quot;exploratory study.&quot; The proceedings seem to have been improvised at each stage. Certainly, no competent investigator would plan to unnecessarily compromise experimental blinding at the two most critical points of the data collection. Nor does it make sense to design an experiment wherein the medium is given two chances at getting the matching correct. I simply was applying the principle of charity in not discussing this botched experiment.</p>
<blockquote><p>3. &quot;In an exploratory double-blind long-distance mediumship experiment . . . Hyman states 'because nothing significant was found, the results do not warrant claiming a successful replication of previous findings.&rsquo; However, Hyman minimizes the fact that the number of subjects in this exploratory experiment was small (n=6). More importantly, Hyman fails to cite a(n) important conclusion that we reached in the discussion: If the binary 66 percent figure approximates (1) LC&rsquo;s actual ability to conduct double-blind readings, coupled with (2) the six sitters&rsquo; ability, on the average, to score transcripts double-blind, the 66 percent figure would require only an n of 25 sitters to reach statistical significance (e.g. &lt; .01).&quot;</p></blockquote>
<p>This part of Schwartz&rsquo;s rebuttal, like all the other parts, strikes me as both bizarre and off the mark. First, we need to clear up some mistakes and/or misunderstandings. Schwartz confuses the sample statistic with the population (or hypothesized true value). Given twenty-five sitters and a sample outcome of seventeen correct identifications (success rate of 68 percent) of their actual readings (which, given the discrete nature of the binomial distribution is the closest we can get to 66 percent correct) the one-tailed probability would be .054 and not less than .01 as Schwartz claims. Regardless of the correct probability value here, this has little to do with power. Schwartz is hypothesizing that the true (population) proportion of correct binary choices in this situation is close to the 67 percent (4 out of 6) that he observed in his sample. If, indeed, this value is correct, then, given his use of a one-tailed test and a significance level of .01, the probability of getting a significant outcome with twenty-five sitters would be slightly more than 0.54. To have a reasonable power (say close to 90 percent) one would need over 100 sitters.</p>
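<p>Hyman&rsquo;s .054 figure is an exact one-tailed binomial tail probability, and the arithmetic can be checked directly. The following is a minimal illustrative sketch (Python, not part of either author&rsquo;s text; the function name is mine):</p>

```python
from math import comb

def binom_tail(n: int, k: int, p: float) -> float:
    """Exact one-tailed probability P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 17 correct identifications out of 25 sitters (the closest discrete
# outcome to a 66 percent rate), tested against the 50 percent chance rate:
print(round(binom_tail(25, 17, 0.5), 3))  # -> 0.054, not below the .01 level

# The same function shows how insensitive the original n = 6 sample is:
# 4 of 6 correct choices is entirely consistent with chance.
print(round(binom_tail(6, 4, 0.5), 3))    # -> 0.344
```

<p>The second computation underlines Hyman&rsquo;s point: with only six binary trials, even a 67 percent observed hit rate cannot be distinguished from coin flipping.</p>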
<p>Schwartz appears to be begging the question here. He begins by observing that four out of six sitters correctly identified which of two readings was meant for them. Because of the small sample, this outcome is consistent with a number of possibilities including the chance value of 50 percent. If he had obtained the same proportion of correct hits with a larger sample, then it would have been significant. However, since we cannot tell what the true proportion is from a sample outcome based on only six cases, we have no basis for predicting the outcome for a larger sample. His argument reduces to the trivial one: If the true proportion is 67 percent then we will be able to get a significant outcome with a larger sample. From his actual outcome, we can just as well say: If the true proportion is 50 percent (and this, too, is consistent with his data), then he will very likely not get a significant outcome with a larger sample.</p>
<p>I find it difficult to understand why Schwartz considers this point worthy of mention. Of course a binary outcome with only six trials has very low sensitivity. However, he did not rely on this outcome. He used two other measures, the number of dazzle shots and the hits and misses, which are clearly much more sensitive. These also failed to provide overall significance. For these measures (as well as for the actual choice of the relevant reading), the overall sensitivity would have been greatly enhanced if each sitter actually rated all six readings. In addition to greatly enhanced sensitivity, this would have avoided the unfortunate situation where each sitter was rating his or her own reading against a foil that differed for each rater. Another plus would have been the opportunity to determine which readings had more general appeal independent of any specific information peculiar to a given sitter.</p>
<p>In his longer rebuttal to my critique, which he posted on the Web (see the reference in his rebuttal), Schwartz claims he actually predicted that GD would successfully differentiate his own reading from the accompanying foil reading. The claim that this particular outcome was predicted does not square with the opening sentence of the report, wherein the experimenters state, &quot;This paper reports an unanticipated replication and extension. . . .&quot;</p>
<p>I have already pointed out in my critique how Schwartz has an unusually liberal interpretation of &quot;replication.&quot; Not only is the statistical and experimental evidence suspect, but the qualitative analysis of the actual reading for GD in the second experiment does not overlap in any important respect with the reading in the earlier experiment. In particular, none of the apparently striking examples of names, events, and places that are reported for the first reading are in the second reading. I agree with Schwartz that the outcome of this &quot;double blind&quot; experiment is consistent with &quot;individual differences in sitter characteristics.&quot; However, borrowing from Schwartz&rsquo;s propensity to resort to Occam&rsquo;s Razor, I believe it is prudent to suggest a much more mundane explanation. We need only make two very plausible and non-extraordinary assumptions to account for the results: 1) Luck: GD had a 50-50 chance of choosing the correct reading; 2) Rater bias: given that he has chosen the correct reading, he would show a strong response bias to give high marks to the chosen reading and low marks to the rejected one. Note that this is consistent with the qualitative evidence that I provided in my critique. However, note that the burden of proof is not upon the critic to show that this explanation is correct. Rather, the burden of proof should be on Schwartz to show, as the claimant, that he has ruled out this and other possible mundane explanations. This is what good experimental methodology, which is so far lacking in the afterlife experiments, is intended to accomplish.</p>
<p>Unfortunately, I do not have space to respond to other specifics of Schwartz&rsquo;s rebuttal. In his rebuttal he attributes motives, preferences, and biases to me. These are based on assumptions unsupported by facts. For example, he characterizes me as &quot;reluctantly&quot; agreeing that fraud is unlikely. In fact, I have no reluctance at all in making such an assertion. He attributes certain preferences to me that are, in some cases, simply not true. He is also factually incorrect on some matters. He says that I was one of a group of cold readers who declared that I could, with training, duplicate what his mediums had accomplished in his laboratory. This is wrong. I deliberately refrained from such a commitment. My major point during the meeting with him on cold reading was that determining whether his mediums are using cold reading is a separate matter from the question of whether they are conveying any information of a paranormal nature. If he wanted to study the role of cold reading in the readings given by his mediums, that was an experimental goal separate from determining whether his mediums are providing evidence for the survival of consciousness.</p>
<p>Nor did I conclude, contrary to Schwartz&rsquo;s implication, that his mediums were using cold reading. I did observe &mdash; and I specifically emphasized that this was a subjective opinion &mdash; that I could see little difference between the utterances of his mediums and those of the typical psychic reader. I want to emphasize again that it is not for me, or other critics, to show that his mediums are using cold reading or some other ploy. The burden of proof is on Schwartz to show that he has convincingly eliminated such possibilities.</p>
<p>So far as I can tell, Schwartz has not really answered my criticisms. A close reading reveals that he does not deny the various failings I have identified in his research. Instead, he defends the departures from proper experimental methodology on a number of grounds: 1) he and his colleagues were aware of these defects and actually admitted so in their reports (but such admissions do not somehow neutralize the defects); 2) there were practical reasons, such as wanting to provide a more naturalistic context (but this does not excuse using inappropriate control comparisons, failing to correct for rater bias, using inappropriate probability and statistical computations, etc.); 3) some of the &quot;defects&quot; were deliberately included to check on certain questions (but this does not justify drawing strong conclusions); and 4) taken in their totality, the experiments somehow provide powerful evidence for anomalous communication even if the individual experiments are flawed (in fact, repeatedly making similar mistakes from experiment to experiment compounds rather than compensates for the errors).</p>
<p>Despite the deficiencies in his experiments, Schwartz seems convinced that his mediums have provided, in some cases, specific and unique information &mdash; names, places, etc. &mdash; that the critics cannot explain away. For one thing, these apparently specific items are much fuzzier than he believes. His examples are selected precisely because they appeared to contain such specifics. This raises the difficult question of how to assess how much of this is mere coincidence. Furthermore, even the most specific and concrete match is problematic, because practically no constraints are placed upon the sitter in finding a suitable match (e.g., it can be a dead or a living person; it can be someone close to the sitter or a mere acquaintance; etc.). No actual check is made of how close the match really is. My point here is that Schwartz has provided us with nothing to explain. We do not know if he has produced anything worth taking seriously until he can convincingly demonstrate that he has obtained his data under methodologically appropriate conditions. Science demands this in the conventional fields of inquiry. We should demand no less from Schwartz.</p>
      ]]></description>
    </item>

    
    </channel>
</rss>