
Research on the Feeling of Being Stared At

Follow-up

Rupert Sheldrake

Volume 25.2, March / April 2001

See also: Robert Baker’s reply

Two recent articles in the Skeptical Inquirer have claimed that the feeling of being stared at is an illusion. Both have attempted to refute my own experimental research on the subject, which indicates that many people do indeed have an unexplained ability to detect stares.

A variety of surveys have shown that most people believe they can feel unseen stares (Sheldrake 1994). In his article “Can we tell when someone is staring at us?” (March/April 2000 SI) Robert A. Baker, a CSICOP Fellow, dismissed this belief as false. “Skeptics . . . believe that it is nothing more than a superstition and/or a response to subtle signals from the environment” (Baker 2000, p. 40). He claimed to provide empirical evidence to support his presuppositions.

David Marks (also a CSICOP Fellow) and John Colwell in their article “The Psychic Staring Effect: An Artifact of Pseudo Randomization” (September/October 2000 SI) claimed that my own results were an artifact arising from one of the randomization procedures I have followed: “When random sequences are used people can detect staring at no better than chance rates,” they asserted. In this article I show that this claim is not true. Both papers are seriously flawed, and neither stands up to skeptical scrutiny.

Baker’s “Demonstrations”

For his first demonstration Baker selected people who were engrossed in eating or drinking, watching TV, working at computer terminals or reading in the University of Kentucky library. He unobtrusively positioned himself behind them and stared at them. He then introduced himself and asked them to fill in a response sheet.

Baker’s prediction was that people engrossed in an activity would “never” attend to a sensation of being stared at. Thirty-five out of forty people checked the expected response: “During the last 5 minutes I was totally unaware that anyone was looking at me.” But two people reported that they had been aware that they were “being observed and stared at” and three reported they felt something was “wrong.” Baker noted that while he was staring at these very subjects, “All three stood up, looked around, shifted their position several times and appeared to be momentarily distracted on a number of occasions.”

The answers of these five people went against Baker’s prediction, so he retrospectively introduced another criterion. He ruled that subjects should be able to say where he had been sitting when he was looking at them. None could. He regarded their inability to do so as a “good reason to believe that they were . . . not aware that they were being viewed” (Baker 2000, p. 40). But this begs the question. A sensitivity to being stared at does not necessarily imply an awareness of the position of the starer.

To complete his analysis, Baker “discarded” the results from the two people who said they knew they had been stared at. He regarded them as “suspect” because one claimed she was constantly being spied on, and the other claimed he had extrasensory ability. But if the sense of being stared at really exists, people with paranoid tendencies might be more sensitive than most (Sheldrake 1994), and so might people who claim to have extrasensory abilities.

In Baker’s second demonstration subjects were looked at from behind by Baker himself, together with a student, at random intervals, and asked to say when they thought they were being looked at. They were told that they would be stared at for five one-minute periods during a twenty-minute trial. In accordance with his expectations, he found that their guesses were no better than chance.

Why were these results so different from the consistently positive and statistically significant effects obtained by myself and others, even when subjects were blindfolded and separated from starers by closed windows (Sheldrake 2000)? There are several relevant differences in procedure.

In my own experimental design, each series of 20 trials contained roughly equal numbers of control and looking trials, whereas Baker’s sessions contained 15 control and only 5 looking one-minute periods. This lopsided design precluded a straightforward statistical analysis of the results. Each subject was allowed only five guesses as to when they were being looked at. If guesses were entirely random, misses would be three times more probable than hits.
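This chance baseline is easy to make explicit. The following sketch is my own illustration, not an analysis appearing in either paper; it computes the hit probability and expected score for a subject guessing entirely at random under Baker’s design:

```python
from math import comb

# Baker's design: 20 one-minute periods, 5 of them staring periods,
# and each subject makes exactly 5 guesses.
periods, staring, guesses = 20, 5, 5

p_hit = staring / periods        # 0.25: a single random guess lands on a staring minute
# so a miss (probability 0.75) is three times more probable than a hit
expected_hits = guesses * p_hit  # 1.25 correct guesses expected out of 5

# If the 5 guessed periods are distinct, the number of hits follows a
# hypergeometric distribution; p_exact(k) is the chance of exactly k hits.
def p_exact(k):
    return comb(staring, k) * comb(periods - staring, guesses - k) / comb(periods, guesses)
```

With so few looking periods, even a genuinely sensitive subject has little room to score far above this baseline in a single session, which is part of what makes the design hard to analyze.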

In my experiments each trial lasted only about 10 seconds, but Baker used 60-second trial periods. In preliminary tests, I found that subjects gave the highest percentage of correct guesses when they were asked to guess quickly, without spending much time thinking about their response.

Baker also introduced three different sources of distraction for his subjects:

  1. Beside each time on the specimen score sheet shown in Baker’s paper there was a pair of unexplained numbers, for example: 0801 1&2; 0802 2&3 (Baker 2000, p. 38). I wrote to Baker to ask for a clarification, but his reply confused matters further. He said that the times shown on his specimen time-sheet “were not on the subject’s time-sheet at all, since they, of course, would differ from subject to subject. The 1&2 indicates the first minute, the numbers 2&3 indicates the second minute of the time-period, etc.”

    If I had been one of Baker’s subjects, I would have been at a loss to understand his instructions. If I thought I was being stared at, to start with I would have had to calculate from the clock in which minute this happened. Then I would have had to decide where to write my response. Say I felt I was being stared at in the seventh minute. Would I write my response on the line labeled 6&7 or on the line labeled 7&8?

  2. The instructions published by Baker are self-contradictory. He says that the subjects were told that there would be five one-minute staring periods. Yet the specimen instruction-sheet states that subjects would be stared at “five times for two minutes each.” Baker now concedes that this was an error (Baker, personal communication, May 27, 2000). To confuse matters further, in his article the one-minute staring periods are also described as “five-minute periods” (Baker 2000, p. 38).
  3. Not only did Baker instruct his subjects to guess when they were being stared at, but he also asked them to compare each guess with their responses in other periods, so that they could change their previous guesses if they wanted to. This instruction might well have distracted subjects still further from their immediate feelings.

Like Baker, I predict that those who follow his experimental methods (including his ambiguous instructions) are likely to replicate his negative results. But I also predict that my own positive results should be replicable by those who use similar methods to my own (Sheldrake 1998, 1999, 2000).

Marks and Colwell’s Claims

In January 2000 the British Journal of Psychology published a paper entitled “The ability to detect unseen staring: A literature review and empirical tests” by John Colwell, Sadi Schröder and David Sladen. In their principal experiment, they used methods based on my own procedures, and followed my own randomized sequences of trials. They obtained strikingly significant (p<0.001) positive results that closely resembled my own findings (Sheldrake 1998, 1999). However, they argued that their participants’ positive scores did not support the idea that people really can feel stares; instead, they were an artifact that arose from “the detection and response to structure” present in my randomized sequences. This is the paper on which Marks and Colwell based their SI article.

The Background to this Controversy

In my book Seven Experiments That Could Change the World (1994) I described how the feeling of being stared at could be investigated empirically both simply and inexpensively. As well as carrying out many experiments of my own, I published detailed instructions on my Web site (www.sheldrake.org) and more than 20,000 trials have now been carried out, many of them in schools and colleges. These experiments have given positive, repeatable, and highly significant results, implying that there is indeed a widespread sensitivity to being stared at from behind (Sheldrake 1998, 1999, 2000).

The results showed a characteristic and highly repeatable pattern, with highly significant positive scores in the looking trials and scores close to the chance level of 50% in the not-looking trials (figure 1a).

figure 1a

A. Combined results from experiments carried out with adults and in schools. (Data from Sheldrake 1999, Table 5. Total number of trials: 13,900)

This pattern is consistent with an ability to detect unseen staring (Sheldrake 1998, 1999). If the sense of being stared at is real, it would be expected to work when people are indeed being stared at. In the not-looking trials the subjects were being asked to detect the absence of a feeling of being looked at, a situation with no parallel in real-life experience; and under these conditions their guesses were no better than chance. Hence an asymmetry between the two kinds of trials would be expected if there really were an ability to detect unseen staring. By contrast, if subjects were cheating or responding to subtle sensory clues, scores should be elevated symmetrically in both looking and the not-looking trials.

Experiment One

In their first experiment Colwell et al. (2000) followed my own procedures in most respects, but instead of testing a large number of subjects in just one or two sessions, as in my own experiments, they tested twelve subjects in twelve successive sessions. And instead of the participants working in pairs, taking turns as starers and subjects, one of the authors, Sadi Schröder, was the sole starer in all sessions. In the first three sessions the subjects received no feedback; in the following nine they received immediate feedback as to whether their guesses were correct or not.

In the sessions with feedback, in the looking trials 59.6 percent of the guesses were correct. By contrast, in the not-looking trials the results were exactly at chance levels, with 50 percent correct (figure 1b). The overall accuracy of the subjects’ guesses was significant at the p<0.001 level. These findings were in remarkable agreement with my own and those of other investigators. But Marks and Colwell (2000) tried to dismiss them as an artifact.

figure 1b

B. Data from the trials with feedback in Colwell et al.’s Experiment One. (Data from Colwell et al. 2000, Table 1. Total number of trials: 2,160)

The first point in Marks and Colwell’s argument was that the positive results were obtained when subjects were given feedback. I too have found that subjects perform better with feedback (Sheldrake 1994, 1999). We also agree that feedback can enable the participants to improve their performance with practice. Colwell et al. (2000) provided clear evidence for a learning effect, with a significant (p<0.003) linear trend of improvement in accuracy over nine sessions.

Marks and Colwell then postulated that the subjects’ success when they were given feedback was due to an implicit learning of structures hidden in my randomized sequences. They showed by means of several tests that my sequences deviated from “structureless” randomness. Ironically, this was because I adopted a recommendation by Wiseman and Smith (1994) to use counterbalanced sequences containing equal numbers of looking and not-looking trials. Like Marks and Colwell, Wiseman and Smith (1994) obtained an unexpectedly positive result in a staring experiment and then tried to explain it as an artifact of the randomization procedure, but in their case they attributed it to a lack of counterbalancing.

The crux of Marks and Colwell’s argument was that because of the deviations from “structureless” randomness in my sequences, participants given feedback could have learned implicitly to detect patterns, for example that there was a relatively high probability of an alternation after “two of a kind.” But they offered no evidence that their participants in fact learned to follow such rules. They also failed to mention a fundamental flaw in their hypothesis, perhaps hoping that readers would not spot it. Implicit learning should in principle enable participants to improve equally in looking and not-looking trials. But this is not what happened; significant improvements occurred only in the looking trials (figure 1b).
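The structural bias Marks and Colwell point to can be quantified. As an illustrative brute-force check (my own, not theirs, and a simplified model that treats a counterbalanced series as exactly 10 looking trials in 20), enumerating every such sequence shows how often an alternation follows “two of a kind”:

```python
from itertools import combinations

# Counterbalanced design (simplified): 20 trials, exactly 10 looking ('L')
# and 10 not-looking ('N'). Enumerate all such sequences and count how often
# the trial after "two of a kind" alternates.
N, K = 20, 10
same = alternations = 0
for looks in combinations(range(N), K):   # every counterbalanced sequence
    lookset = set(looks)
    s = ['L' if i in lookset else 'N' for i in range(N)]
    for i in range(1, N - 1):
        if s[i - 1] == s[i]:              # "two of a kind" just occurred
            same += 1
            if s[i + 1] != s[i]:
                alternations += 1
print(alternations / same)  # 10/18 ≈ 0.556, not the 0.5 of a fair coin
```

The elevation above 50 percent is real but modest, and, as argued above, it gives a subject no way to improve selectively in looking trials while staying at chance in not-looking trials.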

Unlike Marks and Colwell (2000), Colwell et al. (2000) explicitly acknowledged this problem, but could only suggest that participants may have “focused more on the detection of staring than non-staring episodes.” This begs the question. The subjects must have selectively detected when staring trials were happening, otherwise their scores would not have been above chance levels and shown such an improvement in successive sessions. This might have occurred because they could indeed detect when they were being stared at.

Experiment Two

Colwell et al.’s second experiment was designed to test their pattern-detection hypothesis by using “structureless” random sequences. Sure enough, this time there was no significant overall positive score, although in two of the three sessions there was a highly significant excess of correct guesses in the looking trials.

At first sight, the overall non-significant result seems to confirm their hypothesis. But Marks and Colwell (2000) omitted to mention the crucial fact that in Experiment Two there was a different starer, David Sladen. Can we take it for granted that changing the starer made no difference?

Experimenter effects are already known to occur in staring experiments. In a joint study, Richard Wiseman, a skeptic, and Marilyn Schlitz, a parapsychologist, ran staring trials with the same procedure in the same laboratory: the participants Schlitz stared at scored significantly above chance, while those Wiseman stared at did not (Wiseman and Schlitz 1997). Such experimenter effects are not symmetrical. The detection of Schlitz’s stares by the participants under conditions that excluded sensory cues implies the existence of an unexplained sensitivity to stares. By contrast, the failure to detect Wiseman’s stares implies only that Wiseman was an ineffective starer. Perhaps his negative expectations consciously or unconsciously influenced the way he looked at the subjects.

In Colwell et al.’s Experiment Two, the starer, Sladen, as one of the proponents of the pattern-detection hypothesis, was presumably expecting a nonsignificant result. His negative expectations could well have influenced the way in which he stared at the participants. It would be interesting to know if Sadi Schröder, the graduate student who acted as starer in Experiment One, was more open to the possibility that people really can detect when they are being stared at.

Other Relevant Experiments

Marks and Colwell claimed that their pattern-detection hypothesis invalidated the positive results of staring experiments carried out by myself and others. If these experiments had involved pseudo-random sequences and feedback, as required by their hypothesis, their criticism might have been relevant. But this is not how the tests were done, as they would have seen for themselves if they had read my published papers on the subject.

First, in more than 5,000 of my own trials, the randomization was indeed “structureless,” and was carried out by each starer before each trial by tossing a coin (Sheldrake 1999, Tables 1 and 2). The same was true of more than 3,000 trials in German and American schools (Sheldrake 1998). Thus the highly significant positive results in these experiments cannot be “an artifact of pseudo randomization.”
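Coin-toss randomization leaves no such structure to learn. A quick simulation, offered purely as an illustration, shows that after “two of a kind” the next coin-tossed trial still alternates about half the time:

```python
import random

random.seed(1)  # reproducible illustration
# "Structureless" randomization: an independent coin toss decides each trial,
# so earlier outcomes carry no information about later ones.
trials = [random.choice("LN") for _ in range(100_000)]

same = alternations = 0
for i in range(1, len(trials) - 1):
    if trials[i - 1] == trials[i]:   # "two of a kind"
        same += 1
        if trials[i + 1] != trials[i]:
            alternations += 1
print(round(alternations / same, 3))  # close to 0.5: no exploitable pattern
```

Under this scheme there is, by construction, nothing for feedback-driven implicit learning to exploit.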

Second, when I developed the counterbalanced sequences that Marks and Colwell describe as pseudo-random, I changed the experimental design so that feedback was no longer given to the subjects. Since the pattern-detection hypothesis depends on feedback, it cannot account for the fact that in more than 10,000 trials without feedback there were still highly significant positive results (Sheldrake 1999, Tables 3 and 4).

Conclusions

In spite of their prior assumption that an ability to detect unseen staring must be illusory, both Baker (2000) and Colwell et al. (2000) in their first experiments obtained unexpected positive results consistent with such an ability. They attempted to dismiss these findings with question-begging arguments. In their second experiments, which gave the non-significant results they expected, an investigator with negative expectations acted as the starer. This arrangement provided favorable conditions for experimenter effects, already known to occur in staring experiments (Wiseman and Schlitz 1997). Both Baker and Marks and Colwell also failed to mention a large body of published data that went against their conclusions. In short, their claims were misleading and ill-informed.

Acknowledgment

I am grateful to Brian Evans for helpful comments on a draft of this article.

References