Category: Scientific Method (page 1 of 2)

Making the pseudoscience of homeopathy immune from criticism does not serve the public weal

A physician friend alerted me the other day to a strange new official proclamation from the Government of India (GoI). With a long history of uncritical friendliness towards (as well as State sponsorship of) various alternative medicine modalities, the GoI, specifically its ministry in charge of altmed, the Ministry of AYUSH (Ayurveda, Yoga & Naturopathy, Unani, Siddha and Homeopathy), announced that a “high level committee” has been set up to “deal with issues” related to “false propaganda against homeopathy”.

Continue reading

Arnica Extract Changes Gene Expression in Extracellular Matrix? Probably. Does Homeopathic Arnica? Haha, No!

A paper published last month in PLOS ONE by a group of investigators from the University of Verona in Italy reports that “Arnica montana Stimulates Extracellular Matrix Gene Expression in a Macrophage Cell Line Differentiated to Wound-Healing Phenotype”. Given my abiding interest in pharmacognosy and ethnobotany, I was suitably intrigued, because the extract derived from Arnica montana, a European flowering plant of the sunflower family, is likely to be biologically active due to the presence of certain sesquiterpene lactones (the same class of substances present in the plant-derived anti-malarial Artemisinin), flavonoids (plant metabolites with some in vitro anti-oxidant and anti-inflammatory activity), and derivatives of thymol (a phenolic substance with antimicrobial action). Like many bioactive phytopharmaceutical substances, Helenalins, the sesquiterpene lactones and their fatty acyl esters in Arnica montana, are toxic at high concentrations, but have anti-inflammatory properties via their inhibition of the transcription factor NF-κB.

Continue reading

Homeopathy: Is It Really Effective In Upper Respiratory Tract Infections With Fever In Children? Not Quite

A recently published paper, reporting the outcomes of a collaborative European Randomized Controlled Trial (RCT) undertaken in Germany and Ukraine, is making waves amongst jubilant homeopaths as yet another piece of evidence supporting their long-held belief in the clinical effectiveness of homeopathy. Naturally, this 2016 paper by van Haselen et al. in the journal Global Pediatric Health piqued my curiosity, and I dove in to see what the hullabaloo was all about.

Continue reading

PLOS ONE Meta-analysis on Acupuncture in Pain Management Spins Out Undue Recommendations

Science communicators are no strangers to spin in the reporting of scientific studies, especially in press releases. This is a favorite tactic of aficionados and researchers alike in the so-called ‘complementary and alternative medicine’ (CAM), which includes acupuncture, a pre-scientific therapeutic modality originating in ancient China with roots in medical astrology and in ignorance of human anatomy and physiology. I have written several times before on an issue that I continue to find rather perplexing: when it comes to publishing studies on CAM research, the usually high publication standards of the premier open access journal PLOS ONE appear to be ignored, in the context of both primary research and systematic, quantitative, and analytical reviews.

Continue reading

Perception of Effectiveness of Homeopathy and Other Alternative Medicine Relies on Placebo Effect

The world of alternative medicine – nowadays more fashionably known as complementary and integrative medicine (CIM), replacing the erstwhile CAM (A = alternative) – encompasses a wide range of practices. Some of these practices involve physical motion of parts or the whole of the body, such as massage, Yoga, and Tai Chi; if one subtracts the dollops of mysticism, especially of Eastern origin, that have come to be associated with these practices, one finds that they perform much the same functions as any other regular exercise regimen, providing similar benefits. A few practices employ dietary supplements (vitamins, minerals, various salts, et cetera) and folk remedies based on herbal medicine (Traditional Chinese Medicine/TCM, Ayurveda, Siddha, Unani, Amchi, and so forth); some of these may, and do, contain biologically active substances, but the evidence for their being functional, safe, and effective therapeutic modalities in actual clinical situations is extremely scant, and the wide-ranging claims made by practitioners are rarely, if ever, backed up by solid, empirical scientific methods. (Further reading: 1. Veteran ScienceBlogger Orac explains how the multi-billion dollar supplements industry takes its adoring clients for a ride; 2. I argue how the recent accolades for work stemming from the use of herbal medicine as a resource are not a context-less validation that herbalism works.)

Continue reading

2015 Nobel to Traditional Chinese Medicine Expert is a Win for Evidence-based Pharmacognosy

Yesterday, on October 5, 2015, one half of the Nobel Prize in Physiology or Medicine was awarded to scientist and pharmaceutical chemist Tu Youyou (alternatively, Tu Yo Yo, 屠呦呦 in Chinese), for her discovery of the anti-malarial Artemisinin. (The other half went jointly to William C. Campbell and Satoshi Ōmura, for their discovery of a novel therapy for roundworm infection.)

Continue reading

Nuance is critical in science communication both ways

Over at Communication Breakdown, my Scilogs-brother and science communicator par excellence Matt Shipman has brought out an interesting post, highlighting the problems in health research coverage by reporters as well as public information officers writing news releases. Matt exhorts these communicators to pay attention to three important concepts: context, limitations, and next steps.

Continue reading

Homeopathy ‘research’: scienciness sans science – Part Deux (paper review)

While contemplating the scienciness of homeopathy research and the time, money and effort wasted by misguided homeopathy researchers, I recently came across a paper which represents one such effort; it was published in the Journal of Analytical Methods in Chemistry in 2012, written by two Indian authors, one from the prestigious Indian Institute of Technology in Kharagpur, West Bengal, and the other from a medical college associated with a local district hospital. Intrigued by the title claim of “Medicinally Active Ingredient in Ultradiluted Digitalis purpurea”, I decided to delve in.

Continue reading

Homeopathy ‘research’: scienciness sans science – Part Un (dilutions)

The “alternative medicine” modality called homeopathy is popular in some parts of the world, especially some European countries (including Germany, where it was invented in the late 18th century; France; the UK), and in India and its neighbors in the subcontinent. Many Indian homeopaths are well-known amongst the global homeopathy-aficionado community, and there were over 250,000 registered homeopaths in India in 2010 – which is not surprising, considering that homeopathy enjoys official government patronage in India and is recognized as a valid system of medicine in that nation.

Continue reading

Issue of Spin in the Communication of Scientific Research

Ada Ao, a cancer and stem cell biologist and aspiring science communicator writing for Nature Education’s Scitable blog, has put up an interesting post today. She cautions that it is a tirade (according to her, of course; pffft!) against a recently published PLoS Medicine article by Amélie Yavchitz and associates, titled “Misrepresentation of randomized controlled trials in press releases and news coverage: a cohort study” (Yavchitz et al., PLoS Med., 9(9):e1001308, 2012).

In explaining the motivation behind the study, the PLoS Medicine Editor’s Summary indicates:

Findings of randomized controlled trials (RCTs—studies that compare the outcomes of patients randomly assigned to receive alternative interventions), which are the best way to evaluate new treatments, are sometimes distorted in peer-reviewed journals by the use of “spin”…

… which the authors have defined in the PLoS Medicine paper as “specific reporting strategies (intentional or unintentional) emphasizing the beneficial effect of the experimental treatment“. The Editor’s Summary continues,

For example, a journal article may interpret nonstatistically significant differences as showing the equivalence of two treatments although such results actually indicate a lack of evidence for the superiority of either treatment.

“Spin” can distort the transposition of research into clinical practice and, when reproduced in the mass media, it can give patients unrealistic expectations about new treatments. It is important, therefore, to know where “spin” occurs and to understand the effects of that “spin”.

To this end, the researchers led by Yavchitz used two database indexes, EurekAlert and LexisNexis, to evaluate the presence of spin in 70 press releases and 41 corresponding news reports associated with two-arm, parallel-group RCTs over a four-month period. They sought to analyze whether the media coverage contained misinterpretation and/or misrepresentation of RCT results.

The article concluded that about 47–51% of press releases and media coverage of RCTs contained spin; the authors found that these occurrences of spin correlated positively with similar spin in the corresponding article abstracts.
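
Purely as an illustration of what such an association analysis looks like (a hypothetical sketch with made-up data, not Yavchitz et al.’s actual dataset, coding scheme, or code), one could imagine scoring each press release and its source abstract for the presence of spin and testing the association, for instance in Python:

    # Hypothetical example: test whether spin in press releases is associated
    # with spin in the corresponding article abstracts (illustrative data only).
    import pandas as pd
    from scipy.stats import chi2_contingency

    # 1 = spin present, 0 = spin absent; the values below are invented.
    data = pd.DataFrame({
        "spin_in_abstract":      [1, 1, 0, 1, 0, 0, 1, 0],
        "spin_in_press_release": [1, 1, 0, 1, 0, 1, 1, 0],
    })

    # Cross-tabulate the two codings and run a chi-square test of association.
    table = pd.crosstab(data["spin_in_abstract"], data["spin_in_press_release"])
    chi2, p_value, dof, expected = chi2_contingency(table)
    print(table)
    print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")

A positive association in such a table is what “spin in press releases correlated with spin in abstracts” amounts to; the actual study, of course, went further, with bivariate and multivariate analyses.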

Bummer? Does it cast a shadow over half the clinical trials the authors looked at? Does this indicate that these clinical trials are inherently unreliable? Not really, as Ada explains, expressing her indignation at the implications:

Managing the “spin” factor in scientific publishing requires a certain type of finesse. On the one hand, scientists are expected to present their data dispassionately and objectively; at the same time, they are also expected to make their research sound “sexy,” or at least relevant and orderly. Scientists must appear to know what they’re doing, even though research is a messy, disorganized affair as researchers grope around in the dark in uncharted territory. The unexpected always happens and Murphy’s law holds sway. Yet, scientists must appear to be in control and to have an agenda – to understand disease X, or explain phenomenon Y – all to justify public funding, get a paper published, or prop up an image of competence.

So, are there a lot of spin in the published literature? You bet. Does the spin cross from self-promotion to outright fraud? That’s a grey area, and like pornography, you know that line’s been crossed only when you see it.

Rather eloquently put, I thought, in the last paragraph; Ada seems to have captured the spirit of the matter very well in her ‘tirade’. In addition, as a professional scientist myself, I wanted to add a little clarification to it, specifically on two points.

First, Yavchitz’s study focused on spin in “press releases and news coverage”, not on spin in actual scientific papers. This is an important distinction. Yavchitz and her associates performed bivariate and multivariate analyses to identify the source of the spin in media coverage and implicated the article abstract (an author-written summary, required by publishers to be sent to indexing services such as PubMed). I submit that the severely abridged nature of the article abstract (often constrained to 250 words or fewer) precludes most mentions of the complexities of the research findings. The abstract therefore provides an essentially incomplete picture, and Yavchitz’s observations, if anything, highlight the inherent danger in trying to assess the merit of the information in a scientific paper from its abstract alone.

In addition, press releases and news coverage don’t necessarily have to serve the truth – though ideally, they should; they serve different masters (such as, say, commerce, or popularity, or the attention of funding agencies). In contrast, the only allegiance a scientific paper has (or should have) is to the empirical evidence. In this latter format, there is not much space left for spin.

Every paper tries to tell a story coherently; the introduction and discussion sections are used to lay out the available evidence and explain the observations. While it is true that conscious or unconscious bias on the part of the authors may creep into the interpretation of the observed data, the beauty of a scientific paper is that it still contains the results section with raw and/or derived data; with Open Access publishing, more and more publishers are enabling authors to also make supplementary data available to others. This allows independent scrutiny and evaluation of the observations.

Therefore, when we assume the role of scientists and read a paper, we must delve into the actual results and judge for ourselves the interpretations made by the authors. If we find a contradiction, or some point of an unsatisfactory nature, we must question the author(s) – another process made relatively easy by Open Access publishing.

All this is to say that the chances of spin influencing scientific papers are minimal, given the intense scrutiny they receive before and after publication. Ada brings out this fundamental point about peer-reviewed, published scientific papers when she writes:

There’s… an unspoken expectation for the readers to look at the data presented and draw their own conclusions. It’s like every paper comes with a presumption of guilt, and the reader’s job is to prosecute the hell out of it…

That means applying the same level of common sense and skepticism that we may apply to other aspects of our lives. A science paper isn’t meant to tell you what to think, it’s meant to be prosecuted vigorously based on the evidence presented.

It’s not just the sundry readers. As I have written elsewhere, the scientific process demands independent verification and/or replication of the results by, say, other groups. This recursive process distils out scientifically tenable propositions, which eventually no amount of spin can influence.

The readers of scientific papers (among which I don’t, of course, include press releases and news coverage) may be of two types: (a) scientists, or others with some expertise in science and the scientific process (such as veteran science journalists), and (b) the general public. There is an important distinction between the two. Ada understands this; she writes:

… The public was never told, point blank, to read between the lines and seriously critique a paper…

However, the reason for that – to her mind – which is…

… because that would contradict the dispassionate persona science has maintained in the public consciousness — science is supposed to be the distiller of truth.

… is probably not the right way to put it. To me, the reason the general public isn’t meant to seriously critique a scientific paper is essentially one of expertise and specialization, in the same way the general public isn’t expected to argue about the finer points of law, the intricacies of economics, or procedures in medicine. This is why the general public likely relies more on press releases and news coverage to become aware of scientific undertakings and facts.

And this, right there, adds to the responsibility of scientists, who must engage enthusiastically in the process of science communication and education, beyond what they normally do (i.e., the investigation of natural processes); scientists must be in a position to speak to the general public, explaining processes, interpreting scientific data, and correcting misrepresentations – in other words, generally counteracting spin. Public engagement and science education have never been more crucial. Scientists must take the lead.

Older posts