Dawson Church, PhD
There are both problems and opportunities for energy medicine in the new US healthcare legislation, the Affordable Care Act, dubbed “Obamacare.” The legislation faces legal challenges, and it’s unclear how much of it will survive to implementation.
Some provisions, such as the restriction on insurance companies denying care based on pre-existing conditions, have already kicked in. Others seem likely to be modified by individual states, struck down by judicial decisions, or delayed due to legal challenges.
How will the Affordable Care Act affect the delivery of energy psychology?
There are a number of possible scenarios, some of which would increase delivery and others that would inhibit it. For the sake of convenience, I place energy psychology within the larger domain of energy medicine, which in turn I place within the domain of CAM (complementary and alternative medicine).
One of the requirements of the Affordable Care Act is that insurance companies can’t discriminate against integrative practitioners working within the scope of their practice as defined by their states.
If you’re a naturopath (ND) working in a state that has recognized naturopathy, that’s good news; but for fields like energy psychology which are too new to have state-sanctioned definitions, the benefits are uncertain, unless practitioners unite to form state-recognized accreditation bodies.
Another mandate of the law is the formation of community health teams.
These are required to include CAM practitioners. A new workforce healthcare commission specifically defines healthcare to include CAM practitioners both as members of the commission and members of the healthcare workforce. These definitions are crucial; they legitimize CAM by admitting its practitioners to the community of health providers officially recognized as members of the healthcare profession.
However, there are two central aspects of the way the new law is applied that, while matters of administrative agency action rather than legislation, might have a great deal more impact than the legislation itself. These two factors are billing codes and the definition of “evidence-based” treatments.
Billing codes seem like a no-brainer; a patient has a condition, a provider treats it and enters a code for that condition, to be reimbursed by an insurance company.
The truth is otherwise.
There are in fact two competing billing code systems, one that ignores CAM treatments and one that includes them. Which system the Affordable Care Act adopts is crucial to the health of CAM.
The first system was developed by the AMA (American Medical Association). It is owned by the AMA, which derives substantial income from the system. It covers every device, surgery, remedy, and supply item in the allopathic toolkit, but contains no codes for CAM items. It’s called CPT (Current Procedural Terminology).
In 1983, the US Department of Health and Human Services contracted with the AMA to make CPT the sole system for billing Medicare.
The second system is called the ABC codes. It contains 4,400 codes that describe both allopathic medicine and the practices of the 4.3 million practitioners who fall outside CPT’s scope, such as those working in nursing, midwifery, minority and ethnic health, spiritual care, behavioral health, and alternative medicine.
They comply with HIPAA, the 1996 Health Insurance Portability and Accountability Act, which supports electronic billing, the sharing of medical records, and the portability of health insurance between jobs.
The ABC codes are written on standard forms that can be used by both licensed and non-licensed practitioners, are compatible with the CPT codes, and comply with the requirements of state medical boards. For all these reasons, the energy medicine field is likely to strongly support the use of the ABC system.
However, the HHS exclusivity arrangement with the AMA for use of the CPT codes remains in effect despite the success of a two-year pilot trial of the ABC codes, and it is unknown whether this monopoly will be broken by the Affordable Care Act.
The second factor is the definition of “evidence-based” treatments.
The Affordable Care Act mandates a focus on evidence-based therapies. You might imagine that if a therapy is in widespread use by MDs, it must be evidence-based. You would be wrong. According to an article by the former editor of the New England Journal of Medicine, the vast majority of commonly used medical treatments have no established basis of proof.
A case in point is arthroscopic micro-surgery for arthritic knees. By the time it had been shown to be no better than placebo by two randomized controlled trials (RCTs), it had grown into a $9 billion industry. The elimination of such treatments that are not evidence-based should save money while simultaneously improving patient outcomes.
The number of studies published by CAM practitioners and proponents has increased exponentially in the past decade as therapeutic modalities adopt the conventional research model and conduct open trials and even RCTs. RCTs, the gold standard of evidence, are difficult to design, finance, and conduct, yet many CAM groups have undertaken them, spurred partly by the growing emphasis on “evidence-based” standards recently enshrined in the Affordable Care Act.
When “evidence-based treatments” are defined as treatments based on empirical evidence, this criterion supports CAM interventions that have established themselves by research.
If a CAM intervention shows a statistically significant treatment effect, it is considered “evidence-based” by this criterion. Statistical significance means that the probability of obtaining results at least as extreme by chance alone is less than one in 20, expressed as p < .05.
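A simple way to see what that p < .05 threshold means is a permutation test, which asks how often a treatment-versus-control difference as large as the observed one would appear if group labels were shuffled purely at random. The sketch below uses hypothetical scores and my own function name; it is an illustration of the statistical concept, not any particular published trial.

```python
import random
import statistics

def permutation_p_value(treated, control, n_permutations=10_000, seed=1):
    """Estimate how often a mean difference at least as large as the
    observed one would arise if group labels were shuffled at random."""
    rng = random.Random(seed)
    observed = statistics.mean(treated) - statistics.mean(control)
    pooled = list(treated) + list(control)
    n_extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        diff = (statistics.mean(pooled[:len(treated)])
                - statistics.mean(pooled[len(treated):]))
        if diff >= observed:
            n_extreme += 1
    return n_extreme / n_permutations

# Hypothetical symptom-improvement scores for two small groups
treated = [12, 15, 14, 16, 13, 17, 15, 14]
control = [10, 11, 9, 12, 10, 11, 13, 10]
p = permutation_p_value(treated, control)
print(f"p = {p:.4f}")  # well under .05: unlikely to be chance alone
```

Even with only eight subjects per group, a consistent effect can reach significance; this is why small, inexpensive pilot studies can still meet the p < .05 criterion.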
However, this is rarely the way the term has come to be applied. It is more often interpreted by medical review boards to mean the implementation of treatments with the largest evidence base.
Treatments are ranked by the number of clinical trials, the number of subjects in those trials, and the conformity of trials to criteria such as the Jadad scale, which scores trials from 0 to 5 based on factors such as the quality of randomization and blinding and the description of dropouts and withdrawals.
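As a rough illustration of how such a quality score works, here is a minimal sketch of Jadad-style counting, awarding one point per reported item and ignoring the full scale’s deduction points for inappropriately described methods. The function and parameter names are my own; real scoring is done by human reviewers reading the trial report.

```python
def jadad_score(randomized, randomization_method_described,
                double_blinded, blinding_method_described,
                withdrawals_described):
    """Simplified Jadad-style quality score: one point per reported
    item, omitting the real scale's deductions for inappropriate
    randomization or blinding methods."""
    items = [randomized, randomization_method_described,
             double_blinded, blinding_method_described,
             withdrawals_described]
    return sum(items)

# A fully reported, double-blind drug RCT scores the maximum
print(jadad_score(True, True, True, True, True))    # 5
# A well-run but unblindable hands-on therapy trial tops out at 3
print(jadad_score(True, True, False, False, True))  # 3
```

The second call shows the structural problem for CAM: a hands-on intervention can never be double-blinded, so its ceiling on this scale is lower no matter how carefully the trial is run.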
While standards such as CONSORT and the Jadad scale are appropriate for drug trials, they are inapplicable to most CAM studies. Yet review boards can, and have, used them to invalidate CAM trials, rather like a cake baker rejecting a loaf of bread because it has no icing.
This interpretation of “evidence-based” heavily favors long-established treatments and shuts out newer treatments. Older treatments have a larger evidence base simply because they’ve been around longer. They have more practitioners and more money available for research.
Young treatments have fewer practitioners able to do research and little or no money for research. Old treatments have institutional support for research. Psychology departments in universities are funded partly by conducting trials, and graduate students make themselves eligible for employment by doing research that moves knowledge forward by small increments, not giant leaps.
I asked the head of a psychology department at a large state university, who uses EFT (Emotional Freedom Techniques) for her personal issues, why she did not encourage her post-doctoral students to conduct studies on EFT.
Her answer was succinct: “Because I want them to be able to get jobs.”
Thomas Kuhn, in The Structure of Scientific Revolutions, observed that the transition from a fledgling to an established paradigm is marked by the last book on the subject that can be read and understood by an educated lay audience. Past that point, the language has become too jargon-filled for general comprehension.
It’s an arcane code spoken and understood only by the high priests of that paradigm. Past that point, research becomes an explication of known and accepted propositions. No new ground is broken. That’s the risk we run if the term “evidence-based” comes to mean a preponderance of well-funded trials of existing treatments exploring tiny fringes of incremental knowledge using large numbers of subjects.
There are several medical definitions of what constitutes an “evidence-based” treatment, one of the most useful of which is that adopted by the National Registry of Evidence-based Programs and Practices (NREPP) in the US.
It requires a standardized description of the method in the form of a manual and training materials, documentation that the treatment was delivered with fidelity to that method, the use of validated and reliable outcome measures, corrections for dropouts (such as an intent-to-treat analysis), appropriate statistical analysis, sample sizes sufficient to demonstrate statistical significance at p < .05 or better, and publication in a peer-reviewed professional journal.
The field of psychology also enjoys a precise definition. In the 1990s, a far-sighted group of academic psychologists in Division 12 (Clinical Psychology) of the American Psychological Association set out to define “empirically validated treatments.” They examined hundreds of studies and came up with a set of standards that is rational, objective, and reasonable.
While these criteria are not administered by any body that officially declares a new therapy to have met them, at least they exist. They form a published set of standards against which novel therapies can measure their research base. The Task Force defines an “empirically validated treatment” as one for which two controlled trials have been conducted by independent research teams and published in a peer-reviewed journal.
For a treatment to be designated “efficacious,” at least two studies by independent research teams must show it to be superior to a placebo or another treatment, or equivalent to an established efficacious treatment. A treatment is designated “probably efficacious” if it has outperformed a wait list in two or more studies, or if the efficacy criteria are met only in studies from a single research team.
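The counting rule behind these designations can be sketched as a simple decision function. This is my own simplification for illustration; the real Division 12 criteria also weigh treatment manuals, sample characteristics, and methodological quality.

```python
def division12_status(num_qualifying_trials, independent_teams,
                      strongest_comparator):
    """Simplified sketch of the Division 12 designations.
    strongest_comparator is one of 'wait_list', 'placebo', or
    'established_treatment'; real reviews also require manuals,
    adequate samples, and sound methodology."""
    if num_qualifying_trials < 2:
        return "experimental"
    if independent_teams and strongest_comparator in (
            "placebo", "established_treatment"):
        return "efficacious"
    return "probably efficacious"

print(division12_status(2, True, "placebo"))    # efficacious
print(division12_status(2, True, "wait_list"))  # probably efficacious
print(division12_status(2, False, "placebo"))   # probably efficacious
print(division12_status(1, True, "placebo"))    # experimental
```

The virtue of a rule like this is that it is mechanical: a new therapy with two independent, statistically significant trials qualifies, regardless of how old the therapy is or how large its total literature has grown.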
In the absence of a similar standard in medicine, new therapies are at the mercy of skeptics who impose arbitrary and ever-shifting standards on novel therapies.
Review boards tend to reject early clinical reports of the efficacy of a method because there are no clinical trials. When the first clinical trials are published, critics retort that they are not RCTs.
When the first RCTs are published, the critics reject them because they have not been replicated. When they’re replicated, critics fall back on the position that there are not enough of them. When several RCTs are finally published, critics move the goal posts yet again, citing arbitrary and unscientific objections.
These include objections such as stating that the number of subjects was too small, that the peer-reviewed journals in which they were published were insufficiently prestigious, that too few researchers were active in the field, that studies were performed by proponents of the method, or that a study was missing some trivial technical detail.
These objections are self-reinforcing.
A novel therapy cannot afford trials involving hundreds or thousands of subjects. Initially, it has only a small number of researchers, typically impecunious volunteers dedicated to the promotion of the method. No six-figure research scientist salaries are on offer till after a method crosses the paradigmatic threshold. Early pilot studies often contain unavoidable methodological flaws.
The key test of a study is statistical significance at the p < .05 level or better. Yet I’ve witnessed review boards invalidating a therapy that meets APA standards with statistically significant studies on grounds like those above. At no point in the process do critics offer their own evidence for the method’s lack of efficacy.
The skeptical position usually occupies what biologist Rupert Sheldrake calls an “evidence-free” zone. While proponents are expected to meet ever-higher standards of proof, critics exempt themselves from producing any proof whatsoever for their point of view. That’s the dilemma that slows the progress of healing breakthroughs and scientific advances, and prompted physicist Max Planck to declare that science advances only “one funeral at a time,” as the old guard dies off and is replaced with a more open-minded generation.
Andy Grove, the former CEO of Intel Corporation, has published several articles comparing medical research with computer research. Impeded by a skeptical old guard, medical advances typically take 30 years to get from laboratory to patient.
In contrast, computer companies compete vigorously to exploit any competitive improvement, resulting in a doubling of information processing ability every two years. If the computer field had followed the medical model, he says, human beings would still be doing arithmetic using the abacus.
If you’ve ever tried to get a paper on a novel therapy published in a peer-reviewed journal, you’ll know how difficult it is. Ann Baldwin, a research scientist at the University of Arizona, had 20 conventional papers published in a particular journal. She then submitted an outcome study of Reiki to the same journal. The paper was rejected on the grounds that the mechanisms of action of Reiki were unknown.
No matter that physicians happily used aspirin or quinine for a century before their mechanisms of action were known, or that outcome studies are not meant to concern themselves with mechanisms.
Even APA review boards have disregarded Division 12 criteria in order to invalidate promising new approaches. I decided to start the peer-reviewed journal Energy Psychology: Theory, Research, & Treatment after submitting a basketball study that met all reasonable criteria for quality to a top sport performance journal.
It was a randomized controlled trial and met the common standards for such trials. The editor enthusiastically accepted it for peer review. One reviewer liked it; another disliked it so much that he scrawled rejection “Xs” all across the top of the review form. The third reviewer made routine notes on the paper until reaching my description of athletes tapping on acupressure points, at which point he wrote, “Is a hoax being perpetrated on this journal?” and quit the review.
In the absence of a fair hearing from established professional media, new therapies usually have to start their own journals, as has happened in the case of energy psychology.
I wholeheartedly subscribe to the APA’s Division 12 criteria, since they represent a fair, published standard against which any therapy can be assessed. In the absence of a similar medical model, it’s worth quoting the APA criteria in any review. They are an antidote to the view that only therapies backed by huge numbers of subjects and perfect quality scores should be considered “evidence-based.”
If the Affordable Care Act falls into this unscientific trap, it will doom medicine to perpetuate established treatments while denying promising new therapies to suffering patients. If it perpetuates the CPT coding system, it will impose on CAM what anthropologist Margaret Mead called the “ultimate taboo”: making something nameless.
Yet, if it avoids these twin traps, providing objective and scientific standards for what constitutes an “evidence-based” treatment, and providing coding for CAM treatments, it will gradually shift the medical landscape, and with it, the health of the entire population.