On the whole, 2014 was at best unremarkable and at worst unpleasant (speaking personally – on the global scale, it was rather a shit year all around). But there was one most excellent thing that came out of it.
I’m very selective (on the verge of Luddite) in my embrace of online tools and technology – somewhat unfortunate, given my line of work, as the road to freelancer success can be more smoothly paved for those of us who hurl ourselves vigorously into the blogosphere or Twitterverse or whatever hackneyed ‘social media’ + ‘topographic delineation’ construct you may prefer. There are many reasons I fundamentally dislike Facebook and Twitter, but it’s often hard for me to express them lucidly without descending into visceral “you’re a big doody-head” level arguments.
I am, in fact, on Facebook. Furthermore, I go on every single morning, primarily because I’ve learned the hard way that it’s the only way to find out whether people I know are getting hired, fired, married, divorced, pregnant, exiled, executed or canonized. And it has that little birthday box on the right. But I would say that only a vanishingly small percentage of my friends know how to use Facebook the way I would want to use it – posting interesting, thoughtful articles and provoking stimulating discussion with people I wish I knew. Otherwise, it’s like a cross between the world’s longest no-cover open-mic night and the comments section of YouTube. Likewise with Twitter – I am not personally on Twitter, but I do use it to, for example, get ‘behind the scenes’ information from scientific meetings or product launch events. But then I have to roam through endless repetitive retweets and sad, sad attempts at “wit”. And god save us all from #hashtaghumor.
This is why the idea of ‘social search’ scares me – even though tech pundits seem to love the concept so. So few people are looking for exactly what I want, and almost none of my friends does exactly what I do, and everybody’s idea of what’s ‘important’ and ‘interesting’ is miles apart from everybody else’s, with the convergence of this messy Venn diagram unfortunately landing squarely on lolcats and the Harlem Shake. If this kind of ‘digital democracy’ determines what comes up on top of my list, it’s going to take me ten times as long to find anything I need – if I ever find it at all.
And this, in turn, is why I love, love, love Google Reader. Love it.
I’m a Google partisan anyway – Android phone, Chrome browser, Google as homepage – but there is no single page I spend more time on. In the last five years, I have put together a magnificently manicured collection of feeds for both business and pleasure. For my work as a journalist, it’s critical – I can’t even begin to count how many times I’ve started my day stumbling across a story or two that is directly relevant to an assignment I’m working on. My other option is to go to each journal homepage, science blog and science news website every day and try to figure out what’s been updated. Open about three to six dozen browser tabs, because otherwise I can’t keep track of what I haven’t read yet, and hope I remember to ever look at them again. Then, when it’s time to take a break and goof off, let’s just slap a few dozen more tabs up there, because I can’t be spending all day goofing off, but there’s so much new stuff on Cracked and AV Club… Google Reader is my salvation: an hour or two every morning, and maybe a half hour in the afternoon, and I can economically blast through information that would take me eight hours or more to find otherwise. And of course, I have it on my Android phone, so I can never be far from my precious, precious feeds.
But now that’s done. As the nerds among you – and who else is visiting my website, anyway? – know, Google has decided that I should instead find my information by stumbling across whatever articles the five people I know on Google+ are reading. Oh, or I suppose I could let Facebook and Twitter show me what to read. Probably something about cats. Cats are funny.
I think this headline from The Guardian puts it better than anybody else I’ve seen so far: “Killing Google Reader is like killing the bees.” This is a tool for primary productivity. It’s like when you hear all the wonderful shiny news about how the new media will kill off the dinosaur tree-killing old media. Yes, there’s lots and lots of room for complaint about newspapers and magazines, and there’s a lot to be said for the fast upload, fast update means of information dissemination… except that probably 75% of all of the news being dissected and re-analyzed online was first broken by reporters and editors working at ‘old media’ institutions. You know – the bees.
Maybe it’s a touch melodramatic – the RSS format isn’t dead, after all. But without the momentum of a unified Google Reader community behind it, who knows how long that will last. This will almost certainly be the last time I ever write these words, but perhaps Hitler put it best:
UPDATED: Oh, and if you’d also like to join in the quixotic battle against a massive global corporation dizzy with its own power, please sign this petition at Change.org: “Keep Google Reader Running.” It’s almost up to 150,000 signatures…
In spite of a solid grounding in experimental practice and animal models, gene therapy has had a difficult road in the clinic. According to the Journal of Gene Medicine, over 1,800 gene therapy-oriented clinical trials have been conducted worldwide since 1989, including 67 Phase III trials – typically the last stage in a drug’s journey to the market. However, only one gene therapy product has been approved for use in patients: Glybera, from Dutch company UniQure, which uses a virus to deliver a replacement for a damaged gene normally responsible for fatty acid metabolism. Importantly, Glybera is only available in the EU, although the company seems confident that it will win approval from the FDA in short order.
In other words, pharma and biotech companies are still feeling their way in this new land – but that’s not to say that there aren’t a lot of really cool and promising studies out there. A lot of neat stuff has been bubbling up around muscular dystrophy (MD), which encompasses a family of disorders arising from mutations of the gene encoding the dystrophin protein. Dystrophin is critical to the proper arrangement and stabilization of muscle fibers and, without it, muscle tissue wastes away until patients ultimately perish from failure of the muscles of the heart and/or respiratory system.
Now, something you should know about dystrophin. It’s big – I mean epically big. Burj Khalifa big. Blue whale big. Let me put this in perspective: the median length of a human protein is 375 amino acids, but dystrophin contains a whopping 3,677. The DNA encompassed by the gene spans 0.07% of the total human genome, and the process of transcribing the gene into a protein-coding messenger RNA molecule takes SIXTEEN HOURS. This is a damn long gene – the kind that would make Marcel Proust or David Foster Wallace proud.
It’s just slightly smaller than this picture, in other words.
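The numbers above check out with some quick arithmetic. A minimal sketch, using assumed round figures (not from this post) of roughly 2.4 million base pairs for the DMD gene, 3.2 billion for the whole genome, and an RNA polymerase II elongation rate of about 40 nucleotides per second:

```python
# Back-of-envelope check on dystrophin's epic scale.
# All three constants are assumed round numbers for illustration.
GENE_LENGTH_BP = 2.4e6          # approximate span of the DMD gene
GENOME_SIZE_BP = 3.2e9          # approximate human genome size
POL_II_SPEED_NT_PER_S = 40      # rough RNA Pol II elongation rate

genome_fraction = GENE_LENGTH_BP / GENOME_SIZE_BP * 100
transcription_hours = GENE_LENGTH_BP / POL_II_SPEED_NT_PER_S / 3600

print(f"Fraction of genome: {genome_fraction:.3f}%")        # ~0.075%, matching the ~0.07% figure
print(f"Transcription time: {transcription_hours:.1f} hours")  # ~16.7 hours
```

Which is to say: yes, a single gene really can occupy a polymerase for the better part of a day.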
But here’s the cool part – you don’t need the whole thing.
An interesting new article in PLoS ONE comes from an interesting guy whose work I’ve just now become acquainted with: Aaron Clauset, formerly of the Santa Fe Institute and now at the University of Colorado at Boulder, who uses computer modeling and data analysis to investigate the principles underlying complex systems ranging from the dynamics of social networks and terrorist cells (probably not as dissimilar as one might expect) to evolutionary biology.
Clauset goes solo on this article, a theoretical piece simply titled ‘How Large Should Whales Be?’ It’s essentially a follow-up to a study he published in Science in 2008, looking at the distribution of body sizes among terrestrial animals.
In that earlier work, he and co-author Douglas Erwin plotted more than 4,000 land-dwelling bird and mammal species (both living and extinct) from the past 2 million years based on their body mass, and arrived at a curve that was heavily skewed toward the left (low body mass) with a long tail tapering off toward larger and larger body masses:
This reveals a clear peak that appears to represent an optimal minimum size (approximately 40 grams) from an evolutionary perspective, below which remarkably few species seem to tip the scales. The great majority of species analyzed are considerably larger than this ‘peak minimum’, although true heavyweights are relatively rare. Clauset and Erwin arrive at a model that describes this long tail of bigger body sizes as the result of two competing forces. On the one hand, bigger animals are less likely to be eaten by predators and may be better capable of dealing with short-term changes in resource availability than their small-bodied colleagues. This tendency is described by an evolutionary principle known as Cope’s rule, which states that organisms in a given generation of a particular lineage will generally tend to be larger than their ancestors. On the other hand, excessively large animals become less energetically efficient, reproduce more slowly and are generally more prone to extinction events. So, to bust out the cliches, size DOES matter… but bigger isn’t always better.
And yet – as Captain Ahab learned the hard way – the rules change at sea. A 7,000 kg elephant may be king of the hill on terra firma, but that’s only twice as massive as the smallest whale species – such as the adorably puny pygmy right whale, which typically weighs in at a mere 3,000–3,500 kg.
Try not to step on it.
And as Clauset shows, many of the mammals that spend their entire lives at sea are notably larger than even the mightiest mammoths of yesteryear.
So what gives? The reason turns out to be a single factor – body temperature regulation.
For land mammals and birds, Clauset describes a relatively firm 2 g minimum, below which animals lose body heat into the air too rapidly to maintain their proper internal temperature. However, body heat is generally transferred into water much more rapidly than into air, pushing the minimum mammalian body size up to a whopping 7 kilograms. Using this as a starting point, he applied computational modeling to predict the likely size distributions for cetacean species based on the same parameters he previously applied to land-based species, and the results were astonishingly close to the real distributions for the world’s 77 living cetacean species.
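For the curious, the flavor of this kind of model can be captured in a few lines of code. This is a toy sketch only – the drift, noise and extinction parameters are illustrative guesses, not values from Clauset’s paper – but it shows the core machinery: each lineage’s log body mass takes a biased random walk (Cope’s-rule drift upward), bounces off a hard thermoregulatory minimum, and faces extinction risk that grows with size. Changing nothing but the minimum size shifts the entire distribution, land to sea:

```python
import math
import random

def simulate_masses(n_species=5000, x_min_g=2.0, steps=60,
                    drift=0.05, sigma=0.3, ext_slope=0.02):
    """Toy Clauset/Erwin-style cladogenetic diffusion on log10 body mass.
    x_min_g is the minimum viable mass in grams; all parameter values
    are illustrative guesses, not fitted to real data."""
    log_min = math.log10(x_min_g)
    masses = []
    for _ in range(n_species):
        x = log_min
        for _ in range(steps):
            # Biased random walk upward (Cope's rule), reflected at the minimum
            x = max(log_min, x + random.gauss(drift, sigma))
            # Larger bodies carry a higher extinction risk per step
            if random.random() < ext_slope * (x - log_min):
                break
        masses.append(10 ** x)
    return masses

land = simulate_masses(x_min_g=2.0)     # ~2 g terrestrial minimum
sea = simulate_masses(x_min_g=7000.0)   # ~7 kg aquatic minimum
```

Run it and you get two right-skewed distributions with long tails toward large sizes, the marine one shifted wholesale toward the heavyweights – qualitatively just what the real data show.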
Accordingly, this single factor seems to be sufficient to entirely explain the striking difference in range of sizes between land and sea mammals. These thermoregulatory limits would also have shaped the timing with which marine mammal species began to appear on the evolutionary timeline, with the earliest mammals far too small to survive a full-time seafaring life. This is in keeping with current estimates that suggest that the earliest whale ancestors took their first big dip around 50 million years ago, well over 150 million years after the first mammals came on the scene.
This study provides strong additional support for a surprisingly simple and elegant general model for how the distribution of animal body sizes has shifted over time. Or, as the author modestly notes, “Rarely in biological systems are the predictions of mathematical models so unambiguous and rarely are they upheld so clearly when compared to empirical data.” What remains now, he concludes, is to determine whether our cold-blooded contemporaries – the fish, reptiles and amphibians – have played by the same evolutionary rules.
One of my two articles in the November issue of Nature Biotechnology looks at some of the cool stuff that’s going on right now with wearable wireless medical sensors that can track and transmit data about cardiac health, blood sugar and even brain activity to a smart phone for the benefit of patients, their family members and their caregivers.
Among the most commonly cited challenges in bringing these tools forward is providing adequate power both to drive the device and to enable routine data transmission, ensuring that the wearer is getting a real-time ‘story’ of physiological changes. We all know that even while technology just gets more powerful and versatile, batteries have more or less continued to suck, and the folks who have been trying to create fully functional medical devices that are essentially small enough to be taped onto your body with a Band-Aid have really been grappling with this. Julian Penders at Belgium’s IMEC research institute is deeply immersed in the world of so-called ‘body area networks’, the design of wireless electronics for physiological monitoring, and much of his team’s effort is focused on this problem – in particular, on the idea of moving away from conventional batteries altogether. He told me:
“There has been quite a lot of work in terms of energy harvesting… the dream is to have wearable devices that can be used for an entire lifetime without having to charge them. We’ve been looking into thermal harvesting, radiofrequency harvesting and harvesting of energy from vibrations. We’re not at the point where we can have a fully autonomous system, but I would say that the gap between consumption and generation and harvesting of power is really shrinking. We are on the order of 1 milliwatt for a wearable electrocardiograph, and what we can generate in terms of harvesting is typically on the order of around 100 microwatts – we still have a factor of ten to bridge, but this gap has become much easier to cross.”
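One way to read that factor-of-ten gap: until harvesting catches up, a device can bridge it by duty cycling – sleeping most of the time and waking only to sample and transmit. A minimal back-of-envelope sketch, using Penders’ round numbers plus an assumed sleep-mode draw (the 5 µW figure is my own illustrative guess, not from the interview):

```python
# Rough duty-cycle arithmetic for the wearable-ECG power gap.
P_ACTIVE_W = 1e-3     # ~1 mW draw while actively sampling/transmitting (per the quote)
P_HARVEST_W = 100e-6  # ~100 uW continuously harvested (per the quote)
P_SLEEP_W = 5e-6      # assumed 5 uW sleep-mode draw (illustrative guess)

# Largest duty cycle the harvest budget can sustain:
#   duty * P_active + (1 - duty) * P_sleep <= P_harvest
duty_max = (P_HARVEST_W - P_SLEEP_W) / (P_ACTIVE_W - P_SLEEP_W)
print(f"Max sustainable duty cycle: {duty_max:.1%}")  # prints 9.5%
```

In other words, with today’s numbers a harvested-power ECG could afford to be awake only about a tenth of the time – which is exactly why closing that last factor of ten matters.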
Obviously, this is really cool stuff – the equivalent of a body-powered ‘perpetual motion machine’ (or at least as perpetual as things get for us mere mortals). Plus, it’s just one less thing to go wrong when your doctor needs to make sure your heart is working properly 24/7.
An early iteration of IMEC’s wearable ECG patch.
If only I’d been able to wait another month! Just in the past week, a pair of research teams have presented two completely different mechanisms for continuously harvesting power to drive an on-body medical device. The first of these appeared in the exact same issue of Nature Biotech as my article.
MIT’s Anantha Chandrakasan and Konstantina Stankovic of the Massachusetts Eye & Ear Infirmary took advantage of a source of natural electrical power within the inner ear. The cochlea is a snail shell-shaped structure where vibrations received at the eardrum get translated into neural impulses to the auditory cortex of the brain. It contains two types of fluid with different ionic concentrations, separated by a membrane, resulting in a voltage potential of 70-100 millivolts. This endocochlear potential (EP) is a critical component for the generation of electrical signals in response to sounds and therefore must be stably maintained, making it a promising biological battery for ear implants.
Of course, this requires accessing the cochlea and connecting electrodes to the biological ‘anode’ and ‘cathode’ without interfering with normal ear function – no mean feat. But in a series of experiments with guinea pigs (as in actual research guinea pigs, not unwary humans), Stankovic and Chandrakasan pulled it off.
(from Nature Biotechnology)
As Penders remarked, making electronics that can get by with what the body provides is a singular challenge, and the researchers here devised a custom semiconductor chip that can work with the approximately 1.1-6.3 nanowatts generated by the guinea pig EP – including a remarkable wireless radio transmitter that requires only 46 picowatts of standby power, and is estimated to be in ‘active mode’ only 0.0001% of the time. The initial test was only a proof of concept, showing that the chip could run and transmit a wireless signal for at least five hours continuously. Importantly, the authors observed some impairment of hearing, and note that:
“Our data imply that major improvements in low-impedance, small-diameter electrode design would be required to allow long-term energy extraction from the EP without causing long-term cellular trauma during electrode insertion.”
Nevertheless, it’s a very exciting demonstration both of sensor miniaturization and a promising potential mechanism for the long-term operation of cochlear implants and other inner-ear-based sensors in the future. Especially since most of the immediate challenges appear to relate to electrode miniaturization, an area where device manufacturers are making remarkably steady progress.
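To appreciate why that 46 pW standby figure is the headline number, consider the average power draw. Only the standby power and the 0.0001% duty cycle come from the study as described above; the active-mode transmit power below is an assumption I’ve added for illustration:

```python
# Average power of a radio that is almost always asleep.
P_STANDBY_W = 46e-12   # 46 pW standby, per the study
DUTY = 1e-6            # active 0.0001% of the time, per the study
P_ACTIVE_W = 1e-6      # assumed 1 uW during a transmit burst (illustrative)

p_avg = (1 - DUTY) * P_STANDBY_W + DUTY * P_ACTIVE_W
print(f"Average radio power: {p_avg * 1e12:.0f} pW")  # prints 47 pW

EP_BUDGET_W = 1.1e-9   # low end of the 1.1-6.3 nW harvested from the EP
print(f"Fraction of EP budget: {p_avg / EP_BUDGET_W:.1%}")
```

At that duty cycle, even a comparatively power-hungry transmit burst barely moves the average – the radio consumes only a few percent of what the endocochlear potential can supply, leaving the rest for the sensor electronics.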
The second study, although more preliminary, is exciting because it’s targeted for use in cardiac pacemakers, which are the most widely-used medical implants currently on the market: according to a recent article in the American Heart Association’s journal Circulation, there are roughly three million people with implanted pacemakers worldwide. However, with a battery life of five to ten years, many patients can anticipate having to undergo replacement surgery at least once in their lifetime. As an alternative, M. Amin Karami of the University of Michigan is looking into piezoelectric harvesting – drawing power generated from vibrations. Last winter, Karami and his UMich colleague Daniel Inman published an article in Applied Physics Letters, describing how energy obtained from heartbeats might be transformed into a stable reservoir of long-term power for implanted pacemakers. More recently, they presented findings at the American Heart Association showing that their piezoelectric harvester, which is half the size of existing pacemaker batteries, could obtain ten times the power required to operate a pacemaker from the range of vibrations generated by a typical heart. Although these devices have not yet been put to the test in animal studies, this demonstration suggests a promising road forward for pacemakers as well as implanted defibrillators.
It’s definitely nice to think that there may be at least one aspect of our lives where bitching about lousy battery life might soon become a thing of the past – especially if it’s for technology designed to help keep us alive.
Just over a week ago, I lost my very dear friend Kern to a truly evil case of liver cancer.
Given the (astonishingly low) frequency with which I update my website, and the (astonishingly high) proportion of dog-oriented content that those updates comprise, I want to take a moment to eulogize this sweet, odd creature who had been my buddy, my henchman and, in some ways, my alter ego for almost 12 years.
For those of you who don’t feel like reading on as I wax sentimental, the rest is below the jump – as an alternative, perhaps you’d prefer Matt Inman’s very sweet ruminations on dog ownership over at The Oatmeal? The rest of you – follow me.
Like most other Americans, I always plan to kick off each new year with a vigorous storm of self-improvement – grabbing every day by the lapels, smacking it in the face a few times and then striding down the street to celebrate with a cold beer. But more realistically, my year tends to begin like this:
"2012, huh? Wake me when it's 2013."
So I’ll spare you my roster of ‘resolutions’, which will in all likelihood be as far from binding as anything posted on the internet can be. But I will say that even though things look exactly the same in January 2012 as they did in December 2011, I’m hopeful that this is just the launching point for some fun and exciting projects and adventures. I’ve got a mountain of pictures to post, story ideas to develop and – most importantly – trips to take.
So happy new year to all, and let’s see if this can’t all be the start of something good!
Apparently, I’ve been a bad dog-parent. My pup, now 11, still loves to chase his tail, and after 11 years, I still find this endlessly amusing and giggle-inducing. Yes, I like the simple things. But apparently I’ve been reinforcing his clueless behavior even while I mocked it, at least according to a new article from PLoS ONE:
“I gathered data on the first large (n = 400), non-clinical tail-chasing population, made possible through a vast, free, online video repository, YouTube™. The demographics of this online population are described and discussed. Approximately one third of tail-chasing dogs showed clinical signs, including habitual (daily or ‘all the time’) or perseverative (difficult to distract) performance of the behaviour. These signs were observed across diverse breeds. Clinical signs appeared virtually unrecognised by the video owners and commenting viewers; laughter was recorded in 55% of videos, encouragement in 43%, and the commonest viewer descriptors were that the behaviour was ‘funny’ (46%) or ‘cute’ (42%). Habitual tail-chasers had 6.5 ± 2.3 times the odds of being described as ‘Stupid’ than other dogs, and perseverative dogs were 6.8 ± 2.1 times more frequently described as ‘Funny’ than distractible ones were… These findings highlight that tail-chasing is sometimes pathological, but can remain untreated, or even be encouraged, because of an assumption that it is ‘normal’ dog behaviour. The enormous viewing figures that YouTube™ attracts (mean ± s.e. = 863 ± 197 viewings per tail-chasing video) suggest that this perception will be further reinforced, without effective intervention.”
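If, like me, you see a figure like “6.5 times the odds” and have to squint for a second: an odds ratio just compares the odds of a label between two groups. A quick sketch with a made-up 2×2 table (the study’s actual counts aren’t given in the quoted abstract; these numbers are invented to land on the same ratio):

```python
# Hypothetical counts: (described as 'Stupid', not described as 'Stupid')
habitual = (26, 14)       # invented counts for habitual tail-chasers
non_habitual = (40, 140)  # invented counts for other dogs

odds_habitual = habitual[0] / habitual[1]         # 26/14
odds_other = non_habitual[0] / non_habitual[1]    # 40/140
odds_ratio = odds_habitual / odds_other
print(f"Odds ratio: {odds_ratio:.1f}")  # prints 6.5
```

So habitual tail-chasers aren’t 6.5 times more likely to be called stupid, exactly – their *odds* of it are 6.5 times higher. The internet remains rigorous even in its mockery.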
I’m nothing but an enabler! Plus, you can apparently now do science while watching silly animal videos on YouTube. What an age we live in!
And if you ARE a scientist or science writer, and you’ve actually seen figures and graphical abstracts like these before, well there’s just so much more to love. I remember the receptor complex slide I used to use in grad school that everybody but me thought looked like an angry robot, and my friend’s slide from his TB talk that showed a snapshot of a man sneezing where the dispersing vapor looked just like a giant breast… but these really take the cake. Just what were these editors thinking?
In a world where young’uns hurl around the phrase ROFL with excess and reckless abandon, I must say I came pretty close to literally ROFL for some of these. Hat tip to ‘In The Pipeline‘ for bringing this to my attention. As Derek Lowe rightly points out: “I’ve no idea who this is, but they’re helping, in their way, to make the world a better place.”
Here’s another one that ‘got away’ – an article on various stem cell therapy-based approaches for multiple sclerosis that I wrote this past August, originally slated for a special collection that fell through at more or less the last minute. Since it doesn’t seem like that project is going to be resuscitated at the moment or any point in the near future, here is the final draft of that article…
* * * * * *
The remarkable potential of stem cells to develop into healthy adult tissue has led many people to view them as a biomedical Wizard of Oz, ready to grant them a healthy new heart or brain on demand—a perception fuelled by fevered media coverage extolling their vast therapeutic potential. But as with the Great and Powerful Oz, misconceptions abound regarding the present capabilities of stem cell-based therapies, and some patients with serious degenerative disorders such as multiple sclerosis (MS) are finding themselves disappointed once they actually peer behind the curtain.
“Most of the patients that come to us ask me to give them stem cells because they want to walk again,” says Antonio Uccelli, a neuroimmunologist performing clinical stem cell research at Italy’s University of Genoa. “Patients are mesmerized by the hope that stem cell treatment is a treatment for regenerating tissue, and it’s difficult to convince them otherwise.”