
31 Jul

The Scientific Journal of Failure

Written by Rick Henrikson. Posted in Commentary, Science

Science is riddled with failure

Really, it’s all over the place.  It’s built right into the scientific method.  You make a hypothesis, with a firm understanding that anything could happen to disprove your faulty notions.  Sometimes it works and you see what you expected, and sometimes it doesn’t.  And some of the most interesting discoveries of all time have come from these “accidents” where researchers stumbled onto something that didn’t work the way they expected.

However.  The majority of these scientific snafus result in absolutely nothing.  You just blew 16 hours and $3,000.  Gone.  But at least now you know that mixing A & B only produces C when you have the conditions very precisely set at D.  But what happens to all of your failed attempts?  Your (very expensive) failed attempts?

They disappear.  There is neither forum nor incentive for researchers to publish their failures.  This leads to an enormous amount of redundant effort, burning through millions of dollars every year (much of it taxpayer money).  The system sounds really inefficient, right?  So why do we do it?

People tend to focus on successes.  Sure, things didn’t work out 6 or 7 or 17 times…but then one time it all came together.  And then you go back and repeat every single obscure condition that led to your ephemeral success (even wearing the same underwear and eating the same breakfast – but maybe I’m just more scientifically rigorous than most).  And what do you do?  You manage to repeat the success and you publish it.  Maybe you publish a parametric analysis showing how you optimized your result.  But you have left a multitude of errors and false starts in your wake that will never see the light of day, leaving countless other researchers to stumble into the very same unfortunate pitfalls.  It’s a real waste.

But this protects you.  In the business world they call it “barriers to entry”.  You’ve managed to find yourself racing ahead in first place on something and the only thing you can do to protect yourself is drop banana peels to trip up your competitors (fingers crossed nobody gets a blue turtle shell).  On top of that, every time you publish something, you’re putting your reputation on the line.  A retraction can be devastating, particularly for an early career.  Why take such a risk just to publish something that doesn’t even seem very significant?  It doesn’t help you any.  But is this good for science?  Of course not.  You’re delaying progress.  You’re wasting money.  And, particularly in the medical sciences, you’re probably actually killing people.

So Let’s Document The Traps

Danger: Science in Progress

I propose a journal devoted entirely to failures.  Scientists can publish any negative results that are deemed “unworthy” of standard publication.  Heck, we can even delay publication by 6-12 months, to give the authors a little head start on their competitors.  And there would have to be some sort of attribution.  But nobody wants their name attached to The Journal of FAIL.  So let’s call it The Scientific Journal of “Progress”.  Maybe without the quotes.

I wasn’t sure if such a journal existed already (though if it does, I certainly haven’t heard of it – and I’d probably be one of the biggest contributors, right behind this guy).  The only close option that came up was the Journal of Failure Analysis and Prevention, but they seem to be focused on mechanical/chemical failures in industrial settings.  So the playing field is still wide open.

Imagine if every experimental loose end was captured, categorized, and tagged in an efficient manner so that any person following a similar path in the future could actually have a legitimate shot at learning something from the mistakes of others.  It would be kind of like a Nature Methods protocol, but actually including all of the ways things can go wrong.  Authors could even establish credibility by describing why certain conditions didn’t work (a very uncommon practice among scientists).  We might have to develop some technological tools that automatically pull and classify data as researchers collect it, making it a simple “tag and publish” task for the researcher.  And people could reference your findings in the future, providing you with even more valuable street cred for the entire body of work you have developed (not just the sparse successes).
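
To make the “tag and publish” idea a little more concrete, here is a minimal sketch of what such a tool might look like.  Everything in it is hypothetical: the directory layout, the tag vocabulary, and the idea of a shared failure archive are my assumptions, not an existing system.

```python
# Hypothetical "tag and publish" helper: bundle an experiment's raw data files
# with an outcome tag and a few notes into one record a researcher could push
# to a shared archive. The paths, tags, and archive itself are placeholders.
import json
import hashlib
from pathlib import Path
from datetime import datetime, timezone

def tag_experiment(data_dir: str, outcome: str, notes: str, tags: list[str]) -> dict:
    """Fingerprint every file in data_dir and attach outcome/notes/tags."""
    files = []
    for path in sorted(Path(data_dir).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            files.append({"name": path.name, "sha256": digest})
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "outcome": outcome,   # e.g. "failed", "partial", "worked"
        "notes": notes,       # why you think it went wrong
        "tags": tags,         # searchable keywords for future readers
        "files": files,       # fingerprints of the raw data
    }

if __name__ == "__main__":
    record = tag_experiment(
        data_dir="runs/2009-07-31",   # hypothetical path
        outcome="failed",
        notes="Only got C when conditions were set precisely at D; A + B alone did nothing.",
        tags=["negative-result", "protocol-pitfall"],
    )
    print(json.dumps(record, indent=2))  # in practice, push this to the shared archive
```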

Of course none of this will happen any time soon.  There simply isn’t enough of a motivating force to drive this kind of effort (besides good will).  Maybe one day science will get its collective head out of its collective ass and things will change.  And this will be one of those things.  In the meantime, I will do my best to record and post tips on avoiding my own scientific missteps.

BONUS WEB 2.0 TOOL: TinEye

[you’ve read this far, you deserve a reward]

I originally did a Google search for “Science Fail” and found the first image of this post on another blog.  I was curious about the actual origins of the image, so I used the TinEye image search engine to find the real source.  TinEye is just like any other search engine, except your query is an actual image.  You just upload (or link to) an image of interest, and TinEye scours the web for any similar images (actual pixels, not metadata).  I quickly found that the picture originally came from a “Science as Art” competition sponsored by the Materials Research Society (MRS).  TinEye searched over 1.1011 billion images in 0.859 seconds to find that result.  Pretty nifty, right?
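
For the curious, here is a toy illustration of what “matching actual pixels, not metadata” means: a tiny average-hash comparison that flags two images as near-duplicates when their shrunken grayscale fingerprints differ in only a few bits.  This is emphatically not TinEye’s actual algorithm (which isn’t public), and the file names are placeholders.

```python
# Toy pixel-based similarity check (an "average hash"), just to show the idea
# of comparing image content rather than file names or EXIF metadata.
from PIL import Image  # pip install pillow

def average_hash(path: str, size: int = 8) -> int:
    """Shrink to size x size grayscale, then set a bit for each above-average pixel."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > avg else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits; a small distance means visually similar images."""
    return bin(a ^ b).count("1")

# Example usage with placeholder files:
# d = hamming_distance(average_hash("query.jpg"), average_hash("candidate.jpg"))
# print("probably the same image" if d <= 10 else "probably different")
```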

The obvious applications are for controlling the distribution of copyrighted images, and there are some potentially frightening applications of such technology for facial recognition in the not-so-distant future (imagine someone sees you on the street, snaps a picture surreptitiously, and does a quick image search to find out who you are – and they’re likely to find out everything).  Scary.  But still kind of fun.

