Answering Skeptics and Debunking Cynics: PEARs, apples, oranges, bananas.

4 voices
21 replies
    #62829
    Cantata
    Participant

    PEAR stands for “Princeton Engineering Anomalies Research”. It is a group of people who work on the “Scientific Study of Consciousness-Related Physical Phenomena”. Their homepage is here.

    PEAR is famous in paranormal circles for its exhaustive research and massive tests. The group has been hailed as having found proof of psi through these tests.

    What is PEAR actually performing experiments on?

    Looking through the list of publications from their website, we find a broad range of psi-tests, usually concerning human influence on random event generators.

    These random event generators have usually been machines, but a major obstacle in comparing data from the various tests is that they are not all performed the same way. The huge tests are usually meta-experiments, where a lot of different data and protocols have been collated into a whole that looks impressive but is, to say the least, dubious in its conclusions.

    A good example is from the list of publications. When we look at the conclusions for Pub.11, we find that randomly instructed trials tended toward higher scores than volitional trials. This is in direct opposition to Pub.15, where random vs. volitional instruction had no significant influence.

    The problem is that both tests are taken as proof of what PEAR calls “anomalous” phenomena. How can two opposing findings prove the same phenomenon?

    We can also compare Pub.11 with Pub.22. In Pub.11, it is found that only 10 out of 42 participants show positive psi, indicating that only some have psi abilities. In Pub.22, it is found that the effects compound incrementally over a large number of experiments, rather than being dominated by a few outstanding efforts or a few exceptional participants. Two conflicting findings. Both are seen as evidence of psi.

    Pub.11 in itself is a study in the double-thinking performed by psi-researchers: Out of 42 people, 10 are found to have positive psi abilities. Unfortunately, the overall results are almost nullified by some of the rest of the group, 4 of whom pull the results in the other direction. Is this regarded as a normal statistical fluctuation? No, these people are said to have negative psi. It follows that everything is evidence of psi!
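
    To make the fluctuation point concrete, here is a minimal sketch of my own (Python, with invented but representative numbers: 42 subjects with no ability at all, each doing 1,000 coin-flip trials, flagged at the usual two-sided 5% level; these are not PEAR’s actual per-subject trial counts):

        # Rough sketch: how many of 42 subjects with NO psi ability get
        # flagged as "positive" or "negative" scorers by pure chance?
        # Trial counts and threshold are illustrative, not PEAR's.
        import random

        random.seed(1)
        N_SUBJECTS, N_TRIALS = 42, 1000
        Z_CRIT = 1.96  # two-sided 5% significance threshold

        positive = negative = 0
        for _ in range(N_SUBJECTS):
            hits = sum(random.random() < 0.5 for _ in range(N_TRIALS))
            z = (hits - N_TRIALS / 2) / (N_TRIALS / 4) ** 0.5
            if z > Z_CRIT:
                positive += 1   # would be labeled "positive psi"
            elif z < -Z_CRIT:
                negative += 1   # would be labeled "negative psi"

        print(positive, "apparent psi-hitters,", negative, "apparent psi-missers")

    Run it a few times: “hitters” and “missers” both appear from pure noise, and which label a given subject earns changes from run to run.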

    Comments?

    #75055
    sgrenard
    Participant

    You can micro analyze micro-PK and the individual reports of the Princeton Engineering Anomalies Research Lab (Department of Mechanical and Aerospace Engineering, Princeton University) if you wish but don’t neglect the summarizing paper by its Operations Coordinator, Roger D. Nelson, titled: “The PEAR REG Experiments Database Structure.” This paper analyzed all the data collected by the PEAR program over 12 years and 5.6 million trials involving 108 individuals using a random event generator (REG). Its conclusion is that PK or, if you prefer, micro PK is real and that mind can affect matter.

    The sentinel qualifying parameters of the PEAR body of work are:

    1. the sheer number of experiments;
    2. the rigorous controls devised to eliminate possible fraud and human, machine, and computer error;
    3. the willingness with which they shared information publicly, not sensationally in the public press but in serious journals;
    4. their willingness to have any skeptic examine the data.

    Jahn invited qualified, responsible and objective skeptics to conduct an independent exam of the equipment, protocols and results.

    A few other things need to be understood. There were 3 major stumbling blocks or problems that needed to be overcome:

    1. unsophisticated and archaic testing devices could bias results;

    2. limited numbers of experiments led skeptics to dismiss favorable results as flukes or statistical aberrations, and small numbers were also statistically insignificant;

    3. the inability of individuals to reproduce the PK effect on demand.

    Amazingly, after the results of some 5.6 million trials were in, the same and additional skeptics said there were so many that this biased the experiments as well, and some of these were the very skeptics who had said a small number of trials was useless. So it’s either too little or too much. No skeptic would say how many would be ideal.

    Anyway, to solve the first issue, Jahn replaced the primitive dice-throwing machines of early researchers like Rhine with electronic random event generators that could feed their output directly into a computer, automating data collection and statistical analysis. This eliminated human error and bias. Jahn also went to great lengths to shield the apparatus from environmental effects: heat, vibration/sound, electromagnetic waves, the equipment operator, and the test subject. It also eliminated the file-drawer effect.

    To solve the second issue, they collected over 5.6 million trials, the results of which, taken as a whole, conclusively prove the existence of PK. And many individuals were able to influence the REG machine, producing results that were beyond chance. Additional statistical tests confirmed the results were neither statistical aberrations nor flukes.

    The large number of trials also addressed the third issue: the inability of single individuals to reproduce the effect on demand. Of course this was and remains a favorite target of the skeptics. What soon became apparent is that none of the individuals tested were able to produce consistently positive results, but that over time they could produce, on average, PK effects that exceeded chance. This issue has been addressed by others who, as in my own limited experience, have attributed failures to fatigue, diversions, or an inability to concentrate, as well as to momentary lapses in concentrated thought patterns.

    The PEAR tests also confirmed the existence of a remote or distance PK effect, with results similar to proximity PK testing. Exerting a PK effect from miles away from the equipment completely eliminated any possibility of fraud.

    As you indicated, in various separate reports there were varying degrees of results. Overall the experiments returned a positive effect 3% over chance and in some cases 4% over chance. This may not seem like very much if you were tabulating how many one-eyed fish there were in a catch of ten thousand fish, but in PK this is highly significant. Even 1% beyond probability is considered significant given such a huge number of trials. As you are no doubt aware, it is axiomatic that but a single white crow is evidence that white crows exist. I believe William James uttered this truth a long time ago.
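
    To see why even a small excess becomes “significant” at this scale, here is a back-of-the-envelope calculation (a sketch of my own; reading “1% beyond probability” as a 50.5% hit rate is an assumption for illustration, not PEAR’s published figure):

        # Back-of-the-envelope: z-score for a small excess hit rate over
        # many binary trials.  The figures below are illustrative.
        from math import sqrt, erfc

        n = 5_600_000      # total binary trials
        p0 = 0.5           # chance hit rate
        p_hat = 0.505      # "1% beyond probability" read as a 50.5% hit rate

        z = (p_hat - p0) * sqrt(n) / sqrt(p0 * (1 - p0))
        p_value = erfc(z / sqrt(2))   # two-sided normal tail

        print(f"z = {z:.1f}, two-sided p ~ {p_value:.3g}")

    A shift far too small to notice in a handful of trials produces an enormous z-score at this sample size; whether the deviation is statistically detectable is a separate question from how big it is.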

    As for your specific objections to the PEAR group having confirmed the PK effect, you mention 10 out of 42, which is nearly 25% of a group of 42 individuals, and that some have negative psi. There is no arguing that in any group of 40 or 50 people there may be a few with negative effects, some at straight probability, and a few, in this case 25%, who operated above probability. This is not an indictment of this research, nor does it “follow” (facetiously or otherwise) that “everything” is evidence of psi. It is widely held, though not proved, that mind/body duality and mind over matter are pervasive and possible in a vast majority, if not everyone, save for the fact that our “minds” are too preoccupied with other matters to concentrate positively on it.

    #74661
    sgrenard
    Participant

    I hasten to add that the random versus volitional mode experiments were not concerned with REG and PK, but with earlier PEAR experiments which they called PRP trials, for “precognition remote perception.” These involved targets, remote viewers or percipients, and an agent, an individual known to the percipient, who would visit a site in the field. Qualitatively the results are almost the same (there are some differences in protocol) as those reported for the SRI/SAIC/AIR (e.g. CIA) program.

    In your post re PEAR you may’ve unintentionally merged the data from separate types of experiments. Of the 334 PRP trials published up to 1987, 125 were in the instructed mode and 209 in the volitional mode. The final odds against chance for the overall database were 100 billion to 1. For the instructed trials alone, the outcome was 1 billion to 1 and for the volitional trials, 100,000 to 1.
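
    For anyone who wants to relate “odds against chance” to the more familiar z-scores, here is a small conversion sketch (my own arithmetic, applying the standard normal approximation to the odds quoted above):

        # Convert "odds against chance of X to 1" into the implied p-value
        # and a one-sided z-score equivalent under a normal approximation.
        from statistics import NormalDist

        for label, odds in [("overall", 1e11), ("instructed", 1e9), ("volitional", 1e5)]:
            p = 1.0 / (odds + 1.0)             # p-value implied by the odds
            z = NormalDist().inv_cdf(1.0 - p)  # one-sided z equivalent
            print(f"{label:>10}: p = {p:.1e}, z ~ {z:.1f}")

    On this reading, 100 billion to 1 corresponds to roughly z = 6.7, and 100,000 to 1 to roughly z = 4.3.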

    Although you call into doubt the validity of meta-analysis in the overall PEAR body of work, clearly both meta-analysis and individual analysis still returned significant odds against chance.

    #74627
    Cantata
    Participant

    Forgive the abbreviated use of quotes.

    Originally posted by sgrenard
    You can micro…can affect matter.

    I can, and I will, micro-analyze every aspect of the research. You see, if your evidence is a long line of factors, each and every one of these factors must stand up to scrutiny. If one link in the chain breaks, your whole theory crumbles. You cannot prove anything if one part of your evidence is faulty.

    “Mind can affect matter.” I will let that stand for a moment.

    Originally posted by sgrenard
    The sentinel qualifying…protocols and results.

    Who was invited? Who accepted? What was the outcome? Where can we see that?

    As for the points:

    1: Very impressive. However, the experiments are not done using the same protocols, nor are they testing a positive hypothesis.

    2: Admittedly, PEAR has improved the protocols greatly. Big thumbs up! The results are much lacking, though.

    3: This is not true. Hyman complains that “the SAIC program was hampered by its secrecy and the multiple demands placed upon it. The secrecy kept the program from benefiting from the checks and balances that comes from doing research in a public forum.”

    4: Which skeptics actually looked these data over? Again, Hyman complains of the lack of openness.

    Originally posted by sgrenard
    A few other things…were ideal.

    First, I would like to see some references for your claim that the high number of trials could be a problem.

    Second, more than 5 million trials is impressive on the surface. But remember that these 5.6 million trials were not using the same protocols, nor the same methodology, nor did they test for the same thing. They are simply not testing for the same thing the same way. This, in itself, produces a heterogeneous body of data, which is a huge obstacle. A sketch of the standard check for this problem follows below.
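
    When studies with different protocols are pooled, the textbook check is Cochran’s Q for heterogeneity. A minimal sketch, with effect sizes and variances invented purely for illustration (nothing here is PEAR’s actual data):

        # Cochran's Q: do the pooled studies even agree on the effect they
        # are estimating?  All numbers below are invented for illustration.
        effects   = [0.020, 0.003, -0.010, 0.015, 0.001]  # per-study estimates
        variances = [1e-4, 2e-4, 5e-5, 1e-4, 3e-4]        # per-study variances

        weights = [1.0 / v for v in variances]
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        Q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
        df = len(effects) - 1

        print(f"pooled effect = {pooled:.4f}, Q = {Q:.1f} on {df} df")

    A Q value far above its degrees of freedom is evidence that the studies disagree about the effect they are measuring, which is exactly the apples-and-oranges objection.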

    None of these experiments was done from a positive hypothesis. Every one of them was done from a negative hypothesis: “If we can’t explain the findings, it has to be psi.”

    Can you point to one positive hypothesis for psi that has been scientifically tested and found valid?

    Originally posted by sgrenard
    ANyway to solve…the test subject.

    I can easily understand the problems arising from using dice: They can be slightly crooked, so a 6 would come up just a teensy bit more often than the 1 (shooting craps in the alleys of New York will teach you that much!). And shooting dice simply takes too much time! We can all clamor for more money and resources, but what is really needed these days is more time! How often do we say “I don’t have time for that”, “I can’t make it” or “I am too busy”? Time is a factor here, I agree completely.

    However, we move from the easily understandable physical behavior of dice being thrown (their every move can be computed) to the more complex “behavior” of the innards of a computer. This, in itself, must change the outcome of the experiments. If we accept that computers are used for generating random data, then all previous data must be scrapped. We stick to the same protocol and the same methodology, or our research is not comparable.

    I agree with Hyman (and I have worked professionally with computers for almost two decades): No program is without fault. It takes a lot of debugging to eliminate most of the errors, and I have yet to see a computer program that was completely without error possibilities (except the “Hello, World!” example). Therefore, we have to wait and see, and work to debug these programs. It will take time, effort and money.

    Originally posted by sgrenard
    To solve the…aberrations or flukes.

    Not one single positive hypothesis, not one single replicable experiment on demand.

    Originally posted by sgrenard
    The large number…concentrated thought patterns.

    Exceeded chance by how much? Why is this number so minuscule? It is often claimed that psi ability can be learned and improved over time. If this is true, then why can’t we find a “Michael Jordan” of psi, who scores consistently much higher than your average “psi-player”? (Forgive the basketball analogy, but this is the US! :) )

    If psi skills are not improvable, why does Hyman complain that PEAR uses “experienced” viewers? In other words, is psi learnable?

    The problem with ruling out these factors of fatigue, diversions or failure to concentrate is: How do we know when these factors kick in, and when they do not? Is a dataset “excused” because of fatigue, or because it shows no result?

    Originally posted by sgrenard
    The PEAR tests…of fraud.

    This could also be a result of the tests not being influenced by humans at all, but by flaws in the equipment. I see no reason for claiming confirmation of the existence of a remote PK effect.

    Originally posted by sgrenard
    As you indicated…long time ago.

    Very true. However, given the many trials, as well as the use of experienced “psi-ists”, we should see a significantly higher number. This low percentage strongly suggests that psi ability is not improvable. If we look at the “best performers” of psi, will we see consistently higher results? Who are these people? If psi is not improvable, why use experienced people at all?

    Originally posted by sgrenard
    As for your…concentrate on it.

    Do these 25% show up in Pub.22? Why not? If this phenomenon is a general one, we should find it everywhere. We do not.

    Please read your initial statement that “mind can affect matter”. Now you are changing your tune to it is “not proved”? Which is it? Can mind affect matter or can it not?

    Originally posted by sgrenard
    I hasten to …(e.g. CIA) program.

    Nothing in the PEAR publications indicates that the random and the volitional findings are to be regarded as separate. They are referred to simultaneously, not separately. In context, not out of context.

    Originally posted by sgrenard
    In your post…trials, 100,000 to 1.

    And yet, none of these experiments used the same protocols, the same methodology or operated from a positive hypothesis.

    Originally posted by sgrenard
    Although you call into doubt the validity of meta-analysis in the overall PEAR body of work, clearly both meta-analysis and individual analaysis still returned significant odds against chance.

    By how much? Are they of similar size? Do they use a positive hypothesis?

    I really hate not being able to quote you in full. Damn technology!!

    #72845
    sgrenard
    Participant

    Reply: There are plenty of examples of long series of experiments, with drugs, toothpaste, floor cleaners, just about anything involving a “claim,” where there are failures. These are not simply weak links; they are probabilities. This is why we calculate them. If 10 out of 40 people demonstrate psi, that is 25% who do and 75% who do not. This does not falsify psi, and it demonstrates that, as a percentage of this population, this group performs way beyond chance.
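
    Whether 25% really is “way beyond chance” depends entirely on how likely a single no-psi subject is to be flagged in the first place. A hedged back-of-the-envelope check, using the 10-of-42 figure from Pub.11 and assuming a two-sided 5% flagging criterion per subject (my assumption for illustration, not PEAR’s stated criterion):

        # Exact binomial tail: probability that 10 or more of 42 no-psi
        # subjects get flagged, if each is flagged with probability 0.05
        # by chance alone.  The 5% flagging rate is an assumption.
        from math import comb

        n, k, p = 42, 10, 0.05
        tail = sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
        print(f"P(>= {k} flagged out of {n}) = {tail:.1e}")

    The tail probability is tiny if the flagging criterion really is a fixed 5%; a looser criterion, or one chosen after seeing the data, inflates it dramatically. That is where the argument actually lives.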

    Reply: Hyman was the principal skeptic who responded and published on his examination of the PEAR data. For those who don’t know Ray Hyman, he is a member of the executive council of CSICOP and a senior member of that organization. He is a highly respected psychologist and skeptic with an excellent grasp of the matters tested. However, many remain perplexed at his willingness to accept the results of the experiments coupled with his failure to translate that into full or even partial acceptance. Some have gone so far as to say this is due to biases shaped by his worldview and his commitment to skepticism and secular humanism. I don’t know what is true, nor do I have any way of knowing.

    1: Very impressive. However, the experiments are not done using the same protocols, nor are they testing a positive hypothesis.

    Reply: Hypotheses, again, are erected to be falsified.

    2: Admittedly, PEAR has improved the protocols greatly. Big thumbs up! The results are much lacking, though.

    3: This is not true. Hyman complains that “the SAIC program was hampered by its secrecy and the multiple demands placed upon it. The secrecy kept the program from benefiting from the checks and balances that comes from doing research in a public forum.”

    Reply: The data which Hyman evaluated was substantially classified by blacking out names and in some cases targets, but not statistical results. This was during the Cold War, in a spy vs. spy atmosphere. Some of the operators were eventually outed when the NSF, on political grounds as much as on Hyman’s recommendations, recommended that the program be suspended. It may, in fact, still be in operation on an ad hoc basis, but if so, this too is secret. In our present environment this is more than justifiable. But subjects such as Swan, Dames, McMoneagle and others did come out with biographies detailing their involvement, following the declassification of Cold War era operations. Hyman’s remark that a program is “hampered” by secrecy may’ve been his viewpoint, but from the other point of view, secrecy also protected the program from enemy counter-measures. I am not a conspiracy theorist, but it is not unrealistic for some of the Stargate folks to be worried that one day they might’ve met an untimely accident (e.g. been run off the road) had their participation been publicized.

    4: Which skeptics actually looked these data over? Again, Hyman complains of the lack of openness.

    Reply: Hyman. Some looked at it but then decided not to publish.

    We have two DIA Documents on our website that have been mostly declassified that looked at programs of a similar nature in Czechoslovakia in particular and the Warsaw Pact nations in general. Yes, we have yet to receive similar analyses of the U.S. counterpart and this may not be any time soon if some sort of similar program is still operating.

    First, I would like to see some references for your claim that the high number of trials could be a problem.

    Reply: p. 54 in Radin, under “Apples and Oranges.” Seems like he uses the same terms you used for this thread.

    Second, more than 5 million trials is impressive on the surface. But remember that these 5.6 million trials were not using the same protocols, nor the same methodology, nor did they test for the same thing.

    Reply: Thousands of trials nested within these MORE than 5.6 million trials indeed used the same protocol. There are no rules governing this impressive number of studies which say they must ALL be identical. The body of data is heterogeneous as to method or protocol but tests an underlying concept that in the end is the same. Would you suggest experimenters not use or try different methods to test a concept? LOL. If every one of the 5.6 million trials were identical, I could hear the uproar from skeptics: why didn’t you try different tests to falsify?

    Reply: I have not read that quote; however, it is a logical conclusion based on the evidence provided by this enormous amount of data.

    Reply: Probably none that you would accept, but that is why we created http://www.survivalscience.org, together with its introduction, so that people can read this information and decide for themselves.

    If you are sick you go to the doctor; if you are sued you go to the lawyer; if your car won’t run you go to the mechanic. If you are in doubt about psi, this is one area you can decide on for yourself. There is no shortage of data, as well as narrative and rhetoric, to help people make up their own minds.

    #72786
    sgrenard
    Participant

    I sat down and spent an hour replying to you. Then the system refused to allow me to post it because I am limited to 5K characters. So I cut my reply and tried to paste three-quarters of it into separate posts, but the system wouldn’t let me do that either, and the balance of my post was lost. I do not have time tonight to do this again, so you will have to forgive me for not responding to the balance of your post under the circumstances.

    #72768
    sgrenard
    Participant

    I really have nothing to add to my responses. They speak for themselves, and people can make up their own minds. You seem bent on stipulating the need for a positive hypothesis, and this is fine. As I have said elsewhere and repeatedly, and as you no doubt know: any hypothesis can be turned negative or positive through wording.

    Skepticism is not science; it is philosophy. Bob Carroll, the man who started “skepdic,” admits this. When we get hung up on the semantic issues of philosophy, we get bogged down and fail to reach conclusions based on the evidence rather than the language.

    Thanks for your questions and opinions and for allowing me to respond here.

    Steve

    #75563
    Cantata
    Participant

    sgrenard,

    I’ll try to post without quoting – let’s see how that works! :)

    When you test a floor cleaner and one of the tests comes up with a negative or merely neutral result, you do not take this test as proof that the floor cleaner still works. But this is what happens in psi research. We saw it in the PEAR publications, and we see it everywhere else in psi research.

    If the group you gave as an example performs “way beyond” chance, we have to:

    1. define what “way beyond” chance means. Is this the usual minuscule effect psi research can show?
    2. perform the same test, using the same people and protocols, to see if we get similar results again.

    Can you define what “way beyond” means? Are these tests ever repeated (and can be repeated), for all to see?

    We have to ensure that a single test wasn’t merely a statistical “bump”. Chance does not dictate that every experiment falls smack on average. Sometimes we get a nicely even distribution of “heads” and “tails”; sometimes we get a series of 10 “heads”. A quick simulation of this appears below.
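
    As an illustration of how streaky pure chance looks, a throwaway sketch of my own (fair coin, no psi anywhere):

        # How long a run of heads should we expect from a perfectly fair coin?
        import random

        random.seed(7)
        flips = [random.random() < 0.5 for _ in range(10_000)]

        longest = run = 0
        for heads in flips:
            run = run + 1 if heads else 0
            longest = max(longest, run)

        print("longest run of heads in 10,000 fair flips:", longest)

    The longest run typically lands near log2(10,000), about 13, so a streak of 10 “heads” is unremarkable in a long sequence.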

    Regarding skeptics, you have only mentioned Hyman. Who else has looked over the data? Who decided not to publish, and why? If you know they didn’t publish, you must also know their names.

    You mention that Hyman could be biased. That will also mean that the psi researchers could be biased, couldn’t it?

    I am glad that we agree that there wasn’t full, open and complete access to the data. For whatever reason, the data were not publicly available. Scientifically, this is unacceptable, and it makes it impossible to claim any proof of psi. It makes it even more incredible that anyone would claim this, given the earth-shattering consequences: Would any intelligence agency allow such data to be made available to all?

    The 5.6 million trials did not use the same protocols. These experiments have been going on for decades, and PEAR makes a point out of their ability to improve constantly on their methods.

    Fearing an “uproar” from your adversaries should not deter you from performing the tests in a sound, scientific manner.

    It isn’t a question of making me happy by naming an example of a positive hypothesis that has been scientifically tested and found valid. It is a question of naming just one. You don’t do that, I’m afraid, but simply point to a website and say “go find out for yourself”. I understand the limited timeframe we are all under (or rather: slaves to!), but I would appreciate it if you could find time to form your own opinion, instead of posting a URL. This is, after all, not Yahoo, but a forum for discussion. :)

    Skepticism is indeed not science, but it certainly is not a mere exercise in semantics either. Whether a hypothesis is positive or negative is not a matter of simply twisting words, it can be shown logically to be either one or the other.

    #69747
    sgrenard
    Participant

    http://www.boundaryinstitute.org/articles/Circle_of_lights2.pdf

    This is a published example of some of the more recent efforts to replicate the phenomena documented in the PEAR trials.

    #69748
    sgrenard
    Participant

    According to published reports, Robert Jahn made an open invitation to any qualified persons to visit the lab and evaluate the data. In fact, Gary Schwartz made such an invitation to Randi to visit the HESL, but Randi declined when he would not agree to the conditions:

    1. He would be videotaped evaluating the data.

    2. Any conclusions he made would be videotaped and put in writing.

    3. He would not use this evaluation as the basis for a one-sided rhetorical condemnation of the project in the press or on his website, without giving or assuring fair and equal time to responses to any criticisms he might make.

    Obviously Randi could not operate under these circumstances, so he de facto declined. So while Randi makes conditions for others, he does not feel that conditions are necessary for him.

    If you peruse the pages of the SI and The Skeptic you will find plenty of skeptics who have spoken out on such data. If you are asking me how many evaluated that data objectively and were qualified to do so, sadly, besides Hyman, I can’t think of one. And I guess I am not surprised.

    #68941
    Cantata
    Participant

    Originally posted by sgrenard
    http://www.boundaryinstitute.org/articles/Circle_of_lights2.pdf

    This is a published example of some of the more recent efforts to replicate the phenomena documented in the PEAR trials.

    These efforts have not been met with unequivocal success. 8 of 27 experiments pointed to success, 12 were at chance, and the remaining 7 apparently showed the opposite.
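
    A quick sign-test sketch of those numbers (my own rough check, treating the direction of the 15 non-chance outcomes as coin flips; this is not the paper’s analysis):

        # Sign test on the directional outcomes: 8 "successes" vs 7 "opposites".
        # If direction were pure noise, each would be positive with p = 0.5.
        from math import comb

        pos, neg = 8, 7
        n = pos + neg
        p = sum(comb(n, i) for i in range(n + 1)
                if abs(i - n / 2) >= abs(pos - n / 2)) / 2**n
        print(f"{pos} vs {neg}: two-sided sign-test p = {p:.2f}")

    The p-value comes out at 1.00: an 8-to-7 split of directions is exactly what noise predicts.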

    The references are predominantly pro-psi. We are clearly reading a believer’s paper.

    This is in no way convincing. Quite au contraire.

    #68653
    Cantata
    Participant

    Originally posted by sgrenard
    …Obviously Randi could not operate under these circumstances so he de facto declined. So while Randi makes conditions for others,, he does not feel that conditions are necessary for him.

    The conditions of the Randi Challenge are not made for Randi, but for the claimant. Randi is not the one who has to prove anything. As always, the burden of proof is on the claimant. The Randi Challenge is not about Randi going out and waving the flag. It is simply about proving you can do what you claim. To me, these points seem like feeble excuses for not being able to do just that.

    Originally posted by sgrenard
    If you peruse the pages of the SI and The Skeptic you will find plenty of skeptics who have spoken out on such data. If you are asking me how many evaluated that data objectively and were qualified to do so, sadly, besides Hyman, I can’t think of one.
    And I guess I am not surprised.

    I am. You mentioned earlier that:

    Originally posted by sgrenard
    Some looked at it but then decided not to publish.

    Since you now cannot think of a single one, this can only mean that you made that statement up. You never knew of any other skeptic who had looked over the data.

    #68625
    sgrenard
    Participant

    I am afraid I do know other members of the CSICOP Board, for example, who looked at this data. I am not interested in pursuing an ad hominem attack on these people, so I will not name them, since I have already implied, and you are well aware, that they did not objectively look at that data. They looked at the studies and then proceeded to engage in the usual sophist rhetoric, trying to make their case on philosophical grounds.

    I will mention one name, since he has publicly decided to resign as a Physics Professor and become a Philosopher: Victor Stenger. If there was ever an example of what I have been arguing about science vs. philosophy, it might well be exemplified by this. In short, V.S., a physicist and founder of the Hawaiian skeptics, spent his entire career in physics attempting to debunk the obvious through scientific arguments. I frankly haven’t followed everything he has written, but for whatever reason he now writes no longer from the authority of physics but from philosophy. Some day I hope he shares his reasoning in depth. I am sure it would be interesting.

    #68623
    ForumModerator
    Participant

    The purpose of this forum is to close the gap of understanding between believers and non-believers, not to debate which is right, wrong, or scientifically provable.

    Please re-direct this conversation to that end.

    #68621
    sgrenard
    Participant

    Originally posted by ForumModerator
    The purpose of this forum is to close the gap of understanding between believers and non-believers, not to debate which is right, wrong, or scientifically provable.

    Please re-direct this conversation to that end.

    Trying to reduce this debate, from my point of view, to one central issue: it is whether one should look at the proof for psi, LAD, ADCs, OBEs, ESP, etc., from a philosophical or a scientific point of view.

    Closing the gap of understanding between believers or acceptors and non-believers or non-acceptors may hinge on this single point.

    You know where I stand. Whether it’s right or wrong, it is my personal belief that philosophy and science are apples and oranges, oil and water. I am not even sure they have anything to offer each other.


The topic ‘PEARs, apples, oranges, bananas.’ is closed to new replies.