When did scientific research become a paying gig?

Hey folks! Long time no see! I’m back from ‘new baby’ land and looking forward to getting some new and interesting posts up here. As some of you may know, I’ve transitioned from scientific research to freelance science writing. That means that most of my writing time goes to trying to earn my living. But I love this blog and getting to share thoughts and information on biology and evolution with all of you too much to want to let it go. So what I’m going to try is a more informal approach. Until now, I’ve written each blog post as though it were an entire article… longish, with a full story and some references, following a set format. Doing that takes a lot of time, and I haven’t been able to make it work with the circus that is life as a freelancer with small children. From now on, I’ll be trying out some shorter, more conversational posts with less of a rigid format. And I’ll be discussing a wider range of topics around science, its history, and its place in the world, rather than strictly stories about biology. I always welcome comments, so please let me know what you think of the new format and/or what you’d like to see in this space. Thanks for reading!

***

I’m currently working on a personal project related to Victorian-era natural history (botany & zoology), so it’s my plan to share what I’m learning and reading here over the next little while. As an armchair enthusiast of Victorian history, I’m always taken aback by just how much of what we’ve come to consider the norm in today’s culture has its roots in the 19th century. One fascinating example of this is our modern concept of the scientist. In the early and even mid-19th century, scientific research wasn’t a ‘profession’, per se. That is, it wasn’t a job you went into expecting to earn a living. At that time, it was largely undertaken by men with independent fortunes, who used their personal money to finance their work. These ‘gentlemen scientists’, as we now refer to them (even the term ‘scientist’ wasn’t in use then), were the primary driving force in research. Even those with teaching positions in universities typically received only a small honorarium for their work. As a matter of fact, it was considered less than respectable at the time to do science for money, which barred anyone without independent wealth from making a career of it. This didn’t change until the second half of the century, when these social taboos began to relax and government-funded science (which is the norm even today) became more common. During the transition period between these two social views and systems of conducting research, men who wished to go into science but needed to earn an income were anxious about the potential damage to their reputations of being seen to study science for money, and were careful to cultivate their image as gentlemen so as not to be tainted by their earnings. It’s ironic that in recent years, many university teaching positions have become so poorly paid that those who hold them need an independent source of income to survive. A return to the early Victorian mode of self-funded science isn’t something we should be aspiring to today.

What’s in a Name?

Part Two: How’s Your Latin?

The awesomely named Obamadon gracilis.  Image: Reuters

What do Barack Obama, Marco Polo, and the band Green Day have in common? They all have at least one organism named after them. Obama has several, including a bird called Nystalus obamai and an extinct reptile named Obamadon gracilis. Green Day’s honorary organism is the plant Macrocarpaea dies-viridis, “dies-viridis” being Latin for “green day.” Many scientists also have species named after them, usually as recognition for their contributions to a field. My own PhD advisor, Dr. Anne Bruneau, has a genus of legumes, Annea, named after her for her work in legume systematics.

“Pear-leaved Pear”   Photo via Wikimedia Commons

Scientific names, colloquially called Latin names (though they often draw from Greek as well), consist of two parts: the genus and the specific epithet. Together, the two parts form the species name. Though many well-known scientists, celebrities, and other notables do have species named after them, most specific epithets describe some element of the organism or its life cycle. Many of these are useful descriptions, such as that of the (not so bald) bald eagle, whose scientific name is the more accurate Haliaeetus leucocephalus, which translates to “white-headed sea eagle.” (See here for some more interesting examples.) A few are just botanists being hilariously lazy with names, as in the case of Pyrus pyrifolia, the Asian pear, whose name translates as “pear-leaved pear.” So we know that this pear tree has leaves like those of pear trees. Great.

In contrast to common names, discussed in our last post, Latin names are much less changeable over time, and do not have local variants. Soybeans are known to scientists as Glycine max all over the world, and this provides a common understanding for researchers who do not speak the same language. Latin is a good base language for scientific description because it’s a dead language, so its usage and meanings don’t shift over time the way those of living languages do. Until recently, all new plant species had to be officially described in Latin in order to be recognized. Increasingly, though, descriptions written only in English are being accepted. Whether this is a good idea remains to be seen, since English usage may shift enough over the years to make today’s descriptions inaccurate in a few centuries’ time.

This isn’t to say that scientific names don’t change at all. Because scientific names are based on organisms’ evolutionary relationships to one another (with very closely related species sharing a genus, for example), if our understanding of those relationships changes, the name must change, too. Sometimes, this causes controversy. The most contentious such case in the botanical world has been the recent splitting of the genus Acacia.

The tree formerly known as Acacia. Via: Swahili Modern

Acacia is/was a large genus of legumes found primarily in Africa and Australia (discussed previously on this blog for their cool symbiosis with ants). In Africa, where the genus was first created and described, the tree is iconic. The image of the short, flat-topped tree against a savanna sunset, perhaps accompanied by the silhouette of a giraffe or elephant, is a visual shorthand for southern Africa in the popular imagination, and has been used in many tourism campaigns. The vast majority of species in the genus, however, are found in Australia, where they are known as wattles. When it became apparent that these sub-groups needed to be split into two different genera, one or the other was going to have to give up the name. A motion was put forth at the International Botanical Congress (IBC) in Vienna in 2005 to have the Australian species retain the name Acacia, because fewer total species would have to be renamed that way. Many African botanists and those with a stake in the acacias of Africa objected. After all, African acacias were the original acacias. The motion was passed, however, then challenged and upheld again at the next IBC in Melbourne in 2011. (As a PhD student in legume biology at the time, I recall people having firm and passionate opinions on this subject, which was a regular topic of debate at conferences.) It is possible it will come up again at this year’s IBC in China. Failing a major turnaround, though, the 80 or so African acacias are now known as Vachellia, while the over one thousand species of Australian acacias continue to be known as Acacia.

The point of this story is that, though Latin names may seem unchanging and of little importance beyond cataloguing species, they are sometimes both a topic of lively debate and an adaptable reflection of our scientific understanding of the world.

Do you have a favourite weird or interesting Latin species name? Make a comment and let me know!

What’s in a Name?

Part One: Common vs. Scientific Names


When I was a kid growing up on a farm in southwestern Ontario, sumac seemed to be everywhere, with its long, spindly stems, big, spreading compound leaves, and fuzzy red berries. I always found the plant beautiful, and had heard that First Nations people used the berries in a refreshing drink that tastes like lemonade (which is true… here’s a simple recipe). But often, we kids were warned by adults that this was “poison sumac,” not to be touched because it would give us itchy, burning rashes, like poison ivy did. In fact, plenty of people would cut down any nascent stands to prevent this menace from spreading. We were taught to fear the stuff.

 

THIS is the stuff you need to look out for. Via The Digital Atlas of the Virginia Flora

It wasn’t until many years later that I learned that the red-berried sumacs I grew up with were not only harmless, but also not closely related to the poisonous plant in question, which, as it turns out, has white berries and quite different leaves. Scientifically speaking, our innocent shrub is Rhus typhina, the staghorn sumac, while the rash-inducing plant is called Toxicodendron vernix. Not even in the same genus. Cautious parents were simply confused by the similarity of the common names.

 

This story illustrates one of the ironies of common names for plants (and animals). Though they’re the way nearly everyone thinks of and discusses species, they’re without a doubt the names most likely to confuse. Unlike scientific (Latin) names, which each describe a single species and are, for the most part, unchanging, a single common name can describe more than one species, can fall in and out of use over time, and may only be used locally. Also important to note is that Latin names are based on the taxonomy, or relatedness, of species, while common names are usually based on appearance, usage, or history.

 

This isn’t to say that common names aren’t valuable. Because common names describe what a plant looks like or how it is used, they can convey pertinent information. The common names of plants are also sometimes an important link to the culture that originally discovered and used the species, as in North America, where native plants have names in the languages of local First Nations peoples. It seems to me, although I have no hard evidence to back it up, that these original names are now more often being used to form the Latin names of newly described species, giving a nod to the people who named the species first, or from whose territory it came.

 

One high profile case of this in the animal world is Tiktaalik roseae, an extinct creature which is thought to be a transitional form (“missing link”) between fish and tetrapods. The fossil was discovered on Ellesmere Island in the Canadian territory of Nunavut, and the local Inuktitut word “tiktaalik”, which refers to a type of fish, was chosen to honour its origin.

 

But back to plants… Unlike staghorn sumac and poison sumac, which are at least in the same family of plants (albeit not closely related within that family), sometimes very distinct species of plants can end up with the same common name through various quirks of history. Take black pepper and bell or chili peppers. Black pepper comes from the genus Piper, and is native to India, while hot and sweet peppers are part of the genus Capsicum. Botanically, the two are quite distantly related. So why do they have the same name? Black pepper, which bore the name first, has been in use since ancient times and was once very highly valued. The confusion came about, it would seem, when Columbus visited the New World and, finding a fruit which could be dried, crushed, and added to food to give it a sharp spiciness, referred to it as “pepper” as well.

A black peppercorn. Easy to confuse with a chili pepper, I guess? Via: Wikimedia Commons

 

Another interesting, historically-based case is that of corn and maize. In English-speaking North America, corn refers to a single plant, Zea mays. In Britain and some other parts of the Commonwealth, however, “corn” is used to indicate whatever grain is primarily eaten in a given locale. Thus, Zea mays was referred to as “Indian corn” because it was consumed by native North Americans. Over time, this got shortened to just “corn”, and became synonymous with only one species. Outside of Canada and the United States, the plant is referred to as maize, which is based on the original indigenous word for the plant. In fact, in scientific circles, the plant tends to be called maize even here in North America, to be more exact and avoid confusion.

 

Not Spanish, not a moss. Via: Wikimedia Commons

And finally, for complete misinformation caused by a common name, you can’t beat Spanish moss. That wonderful gothic stuff you see draped over trees in the American South? It is neither Spanish nor a moss. It is Tillandsia usneoides, a member of the Bromeliaceae, or pineapple family, and it is native only to the New World.

 

And that wraps up my very brief roundup of confusing common names and why they should be approached with caution. In part two, I’ll discuss Latin names, how they work, and why they aren’t always stable and unchanging, either.

 

There are SO many more interesting and baffling common names out there. If you know of a good one, let me know in the comments!

 

*Header image via the University of Guelph Arboretum

Forever Young

How Evolution Made Baby-faced Humans & Adorable Dogs


Who among us hasn’t looked at the big round eyes of a child or a puppy gazing up at us and wished that they’d always stay young and cute like that? You might be surprised to know that this wish has already been partially granted. Both you as an adult and your full-grown dog are examples of what’s referred to in developmental biology as paedomorphosis (“pee-doh-mor-fo-sis”), or the retention of juvenile traits into adulthood. Compared to closely related and ancestral species, both humans and dogs look a bit like overgrown children. There are a number of interesting reasons this can happen. Let’s start with dogs.

When dogs were domesticated, humans began to breed them with an eye to minimizing the aggression that naturally exists in wolves. Dogs that retained the puppy-like quality of being unaggressive and playful were preferentially bred. This caused certain other traits associated with juvenile wolves to appear, including shorter snouts, wider heads, bigger eyes, floppy ears, and tail wagging. (For anyone who’s interested in a technical explanation of how traits can be linked like this, here’s a primer on linkage disequilibrium from Discover. It’s a slightly tricky, but very interesting concept.) All of these are seen in young wolves, but disappear as the animal matures. Domesticated dogs, however, will retain these characteristics throughout their lives. What began as a mere by-product of wanting non-aggressive dogs has now been reinforced for its own sake, however. We love dogs that look cute and puppy-like, and are now breeding for that very trait, which can cause it to be carried to extremes, as in breeds such as the Cavalier King Charles spaniel, leading to breed-wide health problems.
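(If you’d like the gist without clicking through, here’s my one-line paraphrase, not Discover’s: for two gene variants A and B, linkage disequilibrium is usually quantified as

D = p_AB − p_A p_B,

the difference between how often the two variants actually occur together on the same stretch of DNA (p_AB) and how often they would co-occur if they were shuffled independently (p_A times p_B). Whenever D isn’t zero, selecting for one variant tends to drag the other along with it, which is how breeding for tameness can pull floppy ears and wagging tails along for the ride.)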

An undeniably cute Cavalier King Charles spaniel, bred for your enjoyment. (Via Wikimedia Commons)

Foxes, another member of the dog family, have been experimentally domesticated by scientists interested in the genetics of domestication. Here, too, as the foxes are bred over numerous generations to be friendlier and less aggressive, individuals with floppy ears and wagging tails – traits not usually seen in adult foxes – are beginning to appear.

But I mentioned this happening in humans, too, didn’t I? Well, similarly to how dogs resemble juvenile versions of their closest wild relative, humans bear a certain resemblance to juvenile chimpanzees. Like young apes, we possess flat faces with small jaws, sparse body hair, and relatively short arms. Scientists aren’t entirely sure what caused paedomorphosis in humans, but there are a couple of interesting theories. One is that, because our brains are best able to learn new skills prior to maturity (you can’t teach an old ape new tricks, I guess), delayed maturity, and the suite of traits that come with it, allowed greater learning and was therefore favoured by evolution. Another possibility has to do with the fact that juvenile traits – the same ones that make babies seem so cute and cuddly – have been shown to elicit more helping behaviour from others. So the more subtly “baby-like” a person looks, the more help and altruistic behaviour they’re likely to get from those around them. Since this kind of help can contribute to survival, it became selected for.

You and your dog, essentially. (Via The Chive)

Of course, dogs and humans aren’t the only animals to exhibit paedomorphosis. In nature, the phenomenon is usually linked to the availability of food or other resources. Interestingly, both abundance and scarcity can be the cause. Aphids, for example, are small insects that suck sap out of plants as a food source. Under competitive conditions in which food is scarce, the insects possess wings and are able to travel in search of new food sources. When food is abundant, however, travel is unnecessary, and wingless young are produced which grow into adulthood still resembling juveniles. Paedomorphosis is here induced by abundant food. Conversely, in some salamanders, it is brought on by a lack of food. Northwestern salamanders are typically aquatic as juveniles and terrestrial as adults, having lost their gills. At high elevations, where the climate is cooler and a meal is harder to come by, many of these salamanders remain aquatic, keeping their gills throughout their lives because the water offers a better chance of survival. In one salamander species, the axolotl (which we’ve discussed on this blog before), metamorphosis has been lost completely, leaving these animals fully aquatic and looking more like weird leggy fish than true salamanders.

An axolotl living the young life. (Via Wikimedia Commons)

So paedomorphosis, this strange phenomenon of retaining juvenile traits into adulthood, can be induced by a variety of factors, but it’s a nice demonstration of the plasticity of developmental programs in living creatures. Maturation isn’t always a simple trip from point A to point B in a set amount of time. There are many, many genes at play, and if nature can tweak some of them for a better outcome, evolution will ensure that the change sticks around.


*Header image by: Ephert – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=39752841

Redesigning Life


This post originally appeared on Science Borealis

“Imagine if living things were as easy to modify as a computer Word file.” So begins John Parrington’s journey through the recent history and present-day pursuits of genetic modification in Redesigning Life. Beginning with its roots in conventional breeding and working right up to the cutting edge fields of optogenetics, gene editing, and synthetic biology, the book is accessible to those with some undergraduate-level genetics, or secondary school biology and a keen interest in the subject. This audience will be well served by a book whose stated goal is to educate the public so that a proper debate can take place over the acceptable uses of genome editing.

 

Parrington doesn’t shy away from the various ethical concerns inherent in this field. While he points out, for example, that many fears surrounding transgenic foods are the result of sensational media coverage, he also discusses the very real concerns relating to issues such as multinational companies asserting intellectual property rights over living organisms, and the potential problems of antibiotic resistance genes used in genetically modified organisms (GMOs). Conversely, he discusses the lives that stand to be improved by inventions such as vitamin A-enriched “golden rice”, which has the potential to save many children from blindness and death due to vitamin deficiencies, and dairy cattle that have been engineered to lack horns, so they can be spared the excruciating process of having their horn buds burned off with a hot iron as calves. These are compelling examples of genetic modification doing good in the world.

 

This is Parrington’s approach throughout the book: both the positive and negative potential consequences of emerging technologies are discussed. Particular attention is paid to the pain and suffering of the many genetically modified animals used as test subjects and models for disease. This cost is weighed against the fact that life-saving research could not go ahead without these sacrifices. No conclusions are drawn, and Parrington’s sprawling final chapter, devoted solely to ethics, is meandering and unfocussed, perhaps reflecting the myriad quagmires to be negotiated.

 

Weaving in entertaining and surprising stories of the scientists involved, Parrington frequently brings the story back to a human level and avoids getting too bogged down in technical details. We learn that Gregor Mendel, of pea-breeding fame, originally worked with mice, until a bishop chastised him for not only encouraging rodent sex but watching it. Mendel later commented that it was lucky that the bishop “did not understand that plants also had sex!” We’re told that Antonie van Leeuwenhoek, known as the father of microscopy, was fond of using himself as a test subject. At one point, he tied a piece of stocking containing one male and two female lice to his leg and left it for 25 days to measure their reproductive capacity. Somewhat horrifyingly, he determined that two breeding females could produce 10,000 young in the space of eight weeks.

 

The applications of the fast moving, emerging technologies covered in Redesigning Life will astound even those with some familiarity with modern genetics. The new field of optogenetics, for example, uses light-sensitive proteins such as opsins to trigger changes in genetically modified neurons in the brain when light is shone upon them. In a useful, yet nevertheless disturbing proof-of-concept experiment, scientists created mind-controlled mice, which, at the flick of a switch, can be made to “run in circles, like a remote-controlled toy.” More recently, sound waves and magnetic fields have been used to trigger these reactions less invasively. This technique shows potential for the treatment of depression and epilepsy.

 

The book goes into some detail about CRISPR/Cas9 gene editing, a process that has the potential to transform genetic modification practices. This system is efficient, precise, broadly applicable to a range of cell types and organisms, and shortens the research timeline considerably compared to traditional methods of creating GMOs. It underpins most of the other technologies discussed in the book, and its applications seem to be expanding daily. In the words of one of its developers, Jennifer Doudna, “Most of the public does not appreciate what is coming.” These words could be applied to almost any technology discussed in this book. Already within reach are so-called “gene drive” technologies, which could render populations of malaria-bearing mosquitos – or any other troublesome species – sterile, potentially driving them to extinction, albeit with unknown ancillary consequences. Researchers have also developed a synthetic genetic code known as XNA, which sports two new nucleotides and can code for up to 172 amino acids, as opposed to the usual 20. Modifying organisms to contain XNA opens up the possibility of creating proteins with entirely novel functions, as well as the tantalizing prospect of plants and animals that are entirely immune to all current viruses, due to the viruses’ inability to hijack a foreign genetic code for their own uses.

 

While the book touches on agriculture, its main preoccupation is medical research. Despite many of the therapies covered being far from ready for use in humans, one can’t help but feel that a revolution in the treatment of diseases, both infectious and genetic, is at hand. Only a year ago, gene editing was used to cure a baby girl of leukemia by engineering her immune system to recognize and attack her own cancerous cells. In the lab, the health of mice with single gene disorders such as Huntington’s disease and Duchenne muscular dystrophy is being significantly improved. Writing in 1962 in his book The Genetic Code, Isaac Asimov speculated that someday “the precise points of deficiency in various inherited diseases and in the disorders of the cell’s chemical machinery may be spotted along the chromosome.” Some 54 years later, we have the technology not only to spot these points but to fix them as precisely as a typo in a manuscript.

An Inconvenient Hagfish


We think of scientific progress as working like building blocks constantly being added to a growing structure, but sometimes a scientific discovery can actually lead us to realize that we know less than we thought we did. Take vision, for instance. Vertebrates (animals with backbones) have complex, highly-developed “camera” eyes, which include a lens and an image-forming retina, while our invertebrate evolutionary ancestors had only eye spots, which are comparatively very simple and can only sense changes in light level.

At some point between vertebrates and their invertebrate ancestors, primitive patches of light sensitive cells which served only to alert their owners to day/night cycles and perhaps the passing of dangerous shadows, evolved into an incredibly intricate organ capable of forming clear, sharp images; distinguishing minute movements; and detecting minor shifts in light intensity.

Schematic of how the vertebrate eye is hypothesized to have evolved, by Matticus78

In order for evolutionary biologists to fully understand when and how this massive leap in complexity was made, we need an intermediate stage. Intermediates usually come in the form of transitional fossils; that is, remains of organisms that are early examples of a new lineage, and don’t yet possess all of the features that would later evolve in that group. An intriguing and relatively recent example is Tiktaalik, a creature discovered on Ellesmere Island (Canada) in 2004, which appears to lie close to the ancestry of all terrestrial vertebrates, and which possesses characteristics intermediate between fish and tetrapods (animals with four limbs, the earliest of which still lived in the water), such as wrist joints and primitive lungs. The discovery of this fossil has enabled biologists to see what key innovations allowed vertebrates to move onto land, and to better estimate when that happened.

There are also species which are referred to as “living fossils”, organisms which bear a striking resemblance to their ancient ancestors, and which are believed to have physically changed little since that time. (We’ve actually covered a number of interesting living fossils on this blog, including lungfish, Welwitschia, aardvarks, the platypus, and horseshoe crabs.) In the absence of the right fossil, or in the case of soft body parts that aren’t usually well-preserved in fossils, these species can sometimes answer important questions. While we can’t be certain that an ancient ancestor was similar in every respect to a living fossil, assuming so can be a good starting point until better (and possibly contradictory) evidence comes along.

So where does that leave us with the evolution of eyes? Well, eyes being made of soft tissue, they are rarely well preserved in the fossil record, so this was one case in which looking at a living fossil was both possible and made sense.

Hagfish, which look like a cross between a snake and an eel, sit at the base of the vertebrate family tree (although they are not quite vertebrates themselves), a sort of “proto-vertebrate.” Hagfish are considered to be a living fossil of their ancient, jawless fish ancestors, appearing remarkably similar to those examined from fossils. They also have primitive eyes. Assuming that contemporary hagfishes were representative of their ancient progenitors, this indicated that the first proto-vertebrates did not yet have complex eyes, and gave scientists an earliest possible date for the development of this feature. If proto-vertebrates didn’t have them, but all later, true vertebrates did, then complex eyes were no more than 530 million years old, corresponding to the time of the common ancestor of hagfish and vertebrates. Or so we believed.

The hagfish (ancestors) in question.  Taken from: Gabbott et al. (2016) Proc. R. Soc. B. 283: 20161151

This past summer, a new piece of research was published which upended our assumptions. A detailed electron microscope and spectral analysis of fossilized Mayomyzon (an ancient relative of today’s hagfish and lampreys) has indicated the presence of pigment-bearing organelles called melanosomes, which are themselves indicative of a retina. Previously, these melanosomes, which appear in the fossil as dark spots, had been interpreted as either microbes or a decay-resistant material such as cartilage.

This new finding suggests that the simple eyes of living hagfish are not a trait passed down unchanged through the ages, but the result of degeneration over time, perhaps due to their no longer being needed for survival (much like the sense of smell in primates). What’s more, science has now lost its anchor point for the beginning of vertebrate-type eyes. If an organism with pigmented cells and a retina existed 530 million years ago, then these structures must have begun to develop significantly earlier, although until a fossil is discovered that shows an intermediate stage between Mayomyzon and primitive invertebrate eyes, we can only speculate as to how much earlier.

This discovery is intriguing because it shows how new evidence can sometimes remove some of those already-placed building blocks of knowledge, and how something as apparently minor as tiny dark spots on a fossil can cause us to have to reevaluate long-held assumptions.

Sources

  • Gabbott et al. (2016) Proc. R. Soc. B. 283: 20161151
  • Lamb et al. (2007) Nature Rev. Neuroscience 8: 960-975

*The image at the top of the page is of Pacific hagfish at 150 m depth, California, Cordell Bank National Marine Sanctuary, taken and placed in the public domain by Linda Snook.

Sex & the Reign of the Red Queen


“Now, here, you see, it takes all the running you can do to keep in the same place.”

From a simple reproductive perspective, males are not a good investment. With apologies to my Y chromosome-bearing readers, let me explain. Consider for a moment a population of clones. Let’s go with lizards, since this actually occurs in lizards. So we have our population of lizard clones. They are all female, and all able to reproduce, giving them twice the potential for producing new individuals compared with a sexually reproducing species, in which only 50% of the members can bear young. Males require all the same resources to survive to maturity, but cannot directly produce young. From this viewpoint alone, the population of clones should out-compete a bunch of sexually reproducing lizards every time. Greater growth potential. What’s more, the clonal lizards can better exploit a well-adapted set of genes (a “genotype”); if one of them is well-suited to survive in its environment, they all are.
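To make that “twice the potential” concrete, here’s a back-of-the-envelope example with made-up numbers (mine, not from the sources below): suppose every female, clonal or sexual, leaves 4 offspring. A clonal population of 100 lizards is 100 females, so the next generation numbers 400, all of them females. A sexual population of 100 lizards contains only 50 females, so its next generation numbers just 200, half of them males. Both groups started at 100; one generation on, the clones are twice as numerous, and the gap doubles again every generation. This is the classic “twofold cost” of producing males.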

Now consider a parasite that preys upon our hypothetical lizards. The parasites themselves have different genotypes, and a given parasite genotype can attack certain host (i.e. lizard) genotypes, like keys that fit certain locks. Over time, they will evolve to be able to attack the most common host genotype, because that results in their best chance of survival. If there’s an abundance of host type A, but not much B or C, then more A-type parasites will succeed in reproducing, and over time, there will be more A-type parasites overall. This is called a selection pressure, in favour of A-type parasites. In a population of clones, however, there is only one genotype, and once the parasites have evolved to specialise in attacking it, the clones have met their match. They are all equally vulnerable.

The sexual species, however, presents a moving target. This is where males become absolutely worth the resources it takes to create and maintain their existence (See? No hard feelings). Each time a sexual species mates, its genes are shuffled and recombined in novel ways. There are both common and rare genotypes in a sexual population. The parasite population will evolve to be able to attack the most common genotype, as they do with the clones, but in this case, it will be a far smaller portion of the total host population. And as soon as that particular genotype starts to die off and become less common, a new genotype, once rare (and now highly successful due to its current resistance to parasites), will fill the vacuum and become the new ‘most common’ genotype. And so on, over generations and generations.
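If you’d like to watch this moving-target dynamic play out, here’s a minimal simulation sketch in Python. Everything in it is an invented toy scenario (three host genotypes, a 60% infection cost, parasites that always specialise on whichever genotype is currently most common) rather than data from any real lizard or parasite, but running it shows the cycling described above: the common genotype gets hammered, a once-rare one takes over, and the parasites switch targets.

```python
import random
from collections import Counter

GENOTYPES = ["A", "B", "C"]   # three host genotypes (purely illustrative)
POP_SIZE = 1000
GENERATIONS = 30
INFECTION_COST = 0.6          # chance a targeted host fails to reproduce

def next_generation(hosts, parasite_target):
    """Hosts matching the parasites' current target reproduce less often."""
    survivors = [g for g in hosts
                 if not (g == parasite_target and random.random() < INFECTION_COST)]
    # Survivors repopulate to a fixed size (sampling with replacement).
    return [random.choice(survivors) for _ in range(POP_SIZE)]

hosts = [random.choice(GENOTYPES) for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    counts = Counter(hosts)
    # Parasites specialise on whichever host genotype is most common right now.
    parasite_target = counts.most_common(1)[0][0]
    print(f"gen {gen:2d}  hosts {dict(counts)}  parasites target {parasite_target}")
    hosts = next_generation(hosts, parasite_target)
```

Swap in more genotypes or a steeper infection cost and the oscillations just get wilder; the point is that rarity itself is the defence.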

Both species, parasite and host, must constantly evolve simply to maintain the status quo. This is where the Red Queen hypothesis gets its name: in Through the Looking-Glass, the Red Queen tells Alice, “here, you see, it takes all the running you can do to keep in the same place.” For many years, evolution was thought of as a journey with an endpoint: species would evolve until they were optimally adapted to their environment, and then stay that way until the environment changed in some fashion. If this were the case, however, we would expect a given species to be less likely to go extinct the longer it had existed, because it would be better and better adapted over time. And yet, the evidence didn’t seem to bear this prediction out. The probability of extinction seemed to stay the same regardless of a species’ age. We now know that this is because the primary driver of evolution isn’t the environment, but competition between species. And that’s a game you can lose at any time.

Passionflower. Photo by Yone Moreno on Wikimedia Commons.

Now, the parasite attacking the lizards was just a (very plausible) hypothetical scenario, but there are many interesting cases of the Red Queen at work in nature. And it’s not all subtly shifting genotypes, either; sometimes it’s a full-on arms race. Behold the passionflower. In the time of the dinosaurs, passionflowers developed a mutually beneficial pollinator relationship with longwing butterflies. The flowers got pollinated, the butterflies got nectar. But then, over time, the butterflies began to lay their eggs on the vines’ leaves. Once the eggs hatched, the young would devour the leaves, leaving the plant much the worse for wear. In response, the passionflowers evolved to produce cyanide in their leaves, poisoning the butterfly larvae. The butterflies then turned the situation to their advantage by evolving the ability not only to eat the poisonous leaves, but to sequester the cyanide in their bodies and use it to make themselves poisonous to their own predators, such as birds. The plants’ next strategy was to mimic the butterflies’ eggs. Longwing butterflies will not lay their eggs on a leaf which is already holding eggs, so the passionflowers evolved nectar glands of the same size and shape as a butterfly egg. After aeons of this back and forth, the butterflies are currently laying their eggs on the tendrils of the passionflower vines rather than the leaves, and we might expect that passionflowers will next develop tendrils which appear to have butterfly eggs on them. These sorts of endless, millennia-spanning arms races are common in nature. Check out my article on cuckoos for a much more murderous example.

Egg-like glands at the base of the passionflower leaf (the white dots on my index finger).

Had the passionflowers in this example been a clonal species, they wouldn’t likely have stood a chance. Innovations upon which defences can be built, such as higher-than-average levels of cyanide or slightly more bulbous nectar glands, come from uncommon genotypes. Uncommon genotypes produced by the shuffling of genes that occurs in every generation in sexual species.

And that, kids, is why sex is such a fantastic innovation. (Right?) Every time an illness goes through your workplace, and everybody seems to get it but you, you’ve probably got the Red Queen (and your uncommon genotype) to thank.

 

Sources

  • Brockhurst et al. (2014) Proc. R. Soc. B 281: 20141382.
  • Lively (2010) Journal of Heredity 101 (suppl.): S13-S20 [See this paper for a very interesting full explanation of the links between the Red Queen hypothesis and the story by Lewis Carroll.]
  • Vanderplank, John. “Passion Flowers, 2nd Ed.” Cambridge: MIT Press, 1996.

*The illustration at the top of the page is by Sir John Tenniel for Lewis Carroll’s “Through the Looking Glass,” and is now in the public domain.

Floral Invasion

Throughout evolution, there have been, time and time again, key biological innovations that have utterly changed history thereafter. Perhaps the most obvious is the one you’re using to read this: the human brain. The development of the anatomically modern human brain has profoundly changed the face of the planet and allowed humans to colonize nearly every part of the globe. But an equally revolutionary innovation from an earlier time stares us in the face each day and goes largely unremarked upon. Flowers. (Stay with me here, guys… ) We think of them as mere window dressing in our lives. Decorations for the kitchen table. But the advent of the flowering plants, or “angiosperms”, has changed the world profoundly, including allowing those magnificent human brains to evolve in the first place.

 

Angiosperm percentage
From: Crepet & Niklas (2009) Am. J. Bot. 96(1):366-381

Having arisen sometime in the Jurassic (roughly 150-190 million years ago), angiosperms come in every form from delicate little herbs to vines and shrubs, to towering rainforest canopy trees. They exist on every continent, including Antarctica, where even humans have failed to establish permanent homes, and in every type of climate and habitat. They exploded from obscurity to become the dominant form of plant life on Earth so fast that Darwin himself called their evolution an “abominable mystery”, and biologists to this day are unable to nail down exactly why they’ve been so incredibly successful. Nearly 90% of all terrestrial plant species alive today are angiosperms. If we measure success by the number of species in a given group, there are two routes by which it can be improved: by increasing the rate at which new species arise (“speciation”), or by decreasing the rate at which existing species go extinct. Let’s take a look at a couple of the features of flowers that have likely made the biggest difference to those metrics.
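To put those two routes in symbols (a standard way of framing it, not something taken from the papers listed at the end of this post): if λ is the rate at which new species arise and μ is the rate at which existing species go extinct, then a group’s net diversification rate is

r = λ − μ,

and its expected number of species grows roughly like N(t) = N_0 e^(rt). Anything that nudges λ up or μ down compounds over tens of millions of years, which is the lens to keep in mind for the flower features below.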

Picture a world without flowers. The early forests are a sea of green, dominated by ferns, seed ferns, and especially gymnosperms (that is, conifers and other related groups). Before the angiosperms, reproduction in plants was a game of chance. Accomplished almost exclusively by wind or water, fertilization was haphazard and required large energy inputs to produce huge numbers of spores or pollen grains so that a relative few might reach their destination. It was both slow and inefficient.

The world before flowers. By Gerhard Boeggemann on Wikimedia Commons

The appearance of flowers drew animals into the plant reproduction game as carriers for pollen – not for the first time, as a small number of gymnosperms are known to be insect pollinated – but at a level of control and specificity never before seen. Angiosperms have recruited ants, bees, wasps, butterflies, moths, flies, beetles, birds, and even small mammals such as bats and lemurs to do their business for them. The stunning variety of shapes, sizes, colours, and odours of flowers in the world today has arisen to seduce and retain this range of pollinators. Some plant species are generalists, while others have evolved to attract a single pollinator species, as in the case of bee orchids, or plants using buzz pollination, in which a bumblebee must vibrate the pollen loose with its flight muscles. In return, of course, the pollinators are rewarded with nectar or nutritious excess pollen. Or are at least tricked into thinking they will be. Angiosperms are paying animals to do their reproductive work for them, and thanks to that incentive, the animals are doing so with gusto. Having a corps of workers whose survival is linked to successful pollination has allowed the flowering plants to breed and expand their populations and territory quickly, like the invading force they are, and has lowered extinction rates in this group well below those of their competitors. But what happens when you expand into new territory and find that your pollinators don’t exist there? Or when members of your own species are simply too few and far between for effective breeding?

On the left, a typical outbreeding flower. On the right, a selfing flower of a closely related species. From: Sicard & Lenhard (2011) Annals of Botany 107:1433-1443

Another unique feature that came with flowers is the ability to self-fertilise. “Selfing”, as it’s called, is a boon to the survival of plants in areas where pollinators can be hard to come by, such as very high latitudes or elevations; pollen simply fertilises its own flower or another flower on the same plant. Selfing can also aid sparse populations of plants that are moving into new territories, since another member of the species doesn’t need to be nearby for reproductive success. It even saves on energy, since the flower doesn’t have to produce pleasant odours or nectar rewards to attract pollinators. Around half of all angiosperms can self-fertilise, although only 10-15% do so as their primary means of reproduction. Why, you may ask, since it’s such an effective strategy? Well, it’s an effective short-term strategy. Because the same genetic material keeps getting reused, essentially, in each successive generation (it is inbreeding, after all), over time the diversity in a population goes down, and harmful mutations creep in that can’t be purged via the genetic mix-and-match that goes on in normal sexual reproduction. Selfing as a sole means of procreation is a slow ticket to extinction, which is why most plants that do it use a dual strategy of outbreeding when possible and inbreeding when necessary. As a short-term strategy, however, it can allow a group of new colonists to survive in an area long enough to build up a breeding population and, in cases where that population stays isolated from the original group, eventually develop into a new species of its own. This is how angiosperms got to be practically everywhere… they move into new areas and use special means to survive there until they can turn into something new. I’m greatly simplifying here, of course, and there are additional mechanisms at play, but this starts to give an idea of what an unstoppable force our pretty dinner-table centrepieces really are.
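To see why diversity drains away so quickly under pure selfing, here’s the textbook arithmetic (a general population-genetics result, not something specific to the sources below): with complete self-fertilization and no other forces at work, the expected heterozygosity H, the fraction of individuals carrying two different versions of a gene, is halved every generation:

H_t = H_0 (1/2)^t

After ten generations of strict selfing, a lineage retains less than a thousandth of its starting heterozygosity, which is the arithmetic behind that “slow ticket to extinction”, and why most selfers keep outbreeding on the side.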

Angiosperms are, above all, adaptable. Their history of utilising all possible avenues to ensure reproductive success is unparalleled. As I mentioned, we have the humble flower to thank for our own existence. Angiosperms are the foundation of the human diet, and of most mammals’ diets. Both humans and their livestock are nourished primarily on grasses (wheat, rice, corn, etc.), one of the latest-evolving groups of angiosperms (with tiny, plain flowers that you barely notice and which, just to complicate the point I’m trying to make here, are wind-pollinated). Not to mention that every fruit, and nearly every other type of plant matter you’ve ever eaten, also comes from angiosperms. They are everywhere. So the next time you buy flowers for that special someone, spare a moment to appreciate this world-changing sexual revolution in the palm of your hand.

Sources

  • Armbruster (2014) AoB Plants 6: plu003
  • Chanderbali et al. (2016) Genetics 202: 1255-1265
  • Crepet & Niklas (2009) American Journal of Botany 96(1): 366-381
  • Endress (2011) Annals of Botany 107: 1465-1489
  • Sicard & Lenhard (2011) Annals of Botany 107: 1433-1443
  • Wright et al. (2013) Proc. Biol. Sci. 280(1760): 20130133

**Top image by Madhutvin on Wikimedia Commons**

Bee orchid (Ophrys apifera). Photo by Bernard Dupont on Wikipedia

The Cost of Colour

Try to imagine a colour you’ve never seen. Or a scent you’ve never smelled. Try to picture the mental image produced when a bat uses echolocation, or a dolphin uses electrolocation. It’s nearly impossible to do without referring to a previous experience, or one of our other senses. We tend to tacitly assume that what we perceive of the world is more or less all there is to perceive. It would be closer to the truth to say that what we perceive is what we need to perceive. Humans don’t require the extraordinary sense of smell that wild dogs do in order to get by in the world. But it wasn’t always this way.

Scent molecules are picked up and recognized in our noses by olfactory receptors. Each type of receptor recognizes a few related types of molecules, and each type of receptor is written into our DNA as an olfactory receptor (OR) gene. In mammals, OR genes make up the largest gene family in our genome. There are over a thousand of them. Sadly for us, over 60% of these genes have deteriorated to the point of being nonfunctional. Why? In what must be a hard piece of news for X-Men fans, extra evolutionary features tend not to hang around unless they’re actively helping us to survive longer and breed more. If a gene can develop a fault that makes it useless without causing its host a major competitive disadvantage, it’ll eventually do so, and an incredible number of these broken genes – called “pseudogenes” – have built up and continue to sit in our genome. This isn’t specific to humans; cows, dogs, rats, and mice all have about 20% of their OR genes nonfunctional. But that still works out to a difference of hundreds of different types of scents that we can’t detect. Even compared to our closest relatives, the apes and old world monkeys, we have twice as many OR pseudogenes, and are accumulating random mutations (the cause of pseudogenes) at a rate four times faster than they are. This is all quite logical, of course; humans have evolved in such a way that being able to smell prey or potential mates from a distance just isn’t key to our survival.
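To put rough numbers on that “difference of hundreds” (ballpark figures only, rounded from the estimates above): with a repertoire of roughly 1,000 OR genes, a 60% loss leaves humans with about 1,000 × 0.40 = 400 working receptor types, while a 20% loss leaves a mouse or dog with about 1,000 × 0.80 = 800. Since each receptor type recognizes its own small set of related molecules, those missing ~400 receptor types translate into hundreds of kinds of scents we simply never register.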

Primate phylogenetic tree. From: Gilad et al. (2004) PLoS Biology 2(1): 0120

What’s more interesting is that when scientists looked at the OR genes of apes and old world monkeys (OWMs), they found elevated rates of deterioration there, too… about 32%, compared to only 17% in our next closest group of relatives, the new world monkeys (NWMs). So what happened between the divergence of one group of primates and the next that made an acute sense of smell so much less crucial? The answer came with the one exception among the NWMs. The howler monkey, unlike the rest of its cohort, had a degree of OR gene deterioration similar to the apes and OWMs. The two groups had one other thing in common: full trichromatic vision. Nearly all other placental mammals, including the NWMs, are dichromats, or in common parlance, are colourblind. Using molecular methods that look at rates of change in genes over time to determine when a particular shift happened, scientists determined that in both instances of full colour vision evolving, the OR genes began to deteriorate at about the same time. It was an evolutionary trade-off; once our vision improved, our sense of smell lost its crucial role in survival and slowly faded away. In apes and monkeys, this deterioration process seems to have come to a halt – at a certain point, what remains is still necessary for survival – but in humans, it is ongoing. We know this because of the high number of OR genes for which some individuals carry functional copies, and some carry broken copies. This variability in a population, called polymorphism, amounts to a snapshot of genes in the process of decay, since the broken copies are not, presumably, causing premature death or an inability to breed amongst their carriers. So as we continue to pay the evolutionary price for the dazzling array of colours we are able to perceive in the world, our distant descendants may live in an even poorer scentscape than our current, relatively impoverished one. There may be scents we enjoy today that will be as unimaginable to them as the feel of a magnetic field is to us.
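For the curious, the textbook version of that “rates of change over time” reasoning looks like this (a generic sketch, not the specific analysis in Gilad et al.): if two lineages differ at a fraction K of the positions in a gene, and changes accumulate at a steady rate r per position per million years in each lineage, then the time since the two lineages (or, here, a working gene and its decaying copy) began changing independently is roughly

T ≈ K / (2r),

the 2 being there because both lineages have been accumulating differences at the same time.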

As a quick final point, it turns out humans aren’t the only animal group to have undergone a widescale loss of OR genes. Just as full colour vision made those genes unnecessary for us, so moving into the ocean made them unnecessary for marine mammals. In an even more severe deterioration than that seen in humans, some whales and porpoises have nearly 80% OR pseudogenes. As you may already know, whales, dolphins, and other marine mammals evolved from land-dwelling, or terrestrial mammals (want to know more about it? Read my post here). Using methods similar to those mentioned above for primates, researchers found that at about the same time they were adapting anew to life in the ocean, their scent repertoire was beginning to crumble. And since anatomical studies show that the actual physical structures used to perceive scent, such as the olfactory bulb in the brain, are becoming vestigial in whales, it’s likely the loss isn’t finished yet. Interestingly, the researchers behind this study also looked at a couple of semi-marine animals, the sea lion and the sea turtle, which spend part of their time on land, and found that they have a sense of smell comparable to fully terrestrial animals, with no increased gene loss.

The widescale and ongoing loss of the sense of smell in certain animals, particularly ourselves, is a nice illustration of an evolutionary principle which can be summarized as “use it or lose it”, or more accurately, “need it or lose it.” We tend to think of evolution as allowing us to accrue abilities and features that are useful to us. But unless they’re keeping us and our offspring alive, they’re not going to stick around in the long term. Which makes you wonder, with humans’ incredible success in survival and proliferation on this planet, which relies overwhelmingly on our cognitive, rather than physical abilities, what other senses or abilities could we eventually lose?


*The image at the top of the page comes from Sobotta’s Atlas and Text-book of Human Anatomy (1906 edition), now in the public domain.

Movin’ On Up: Hermit Crabs & the World’s Only Beachfront Social Housing

(Via: onestopcountrypet.com)

Common Name: The Hermit Crab

A.K.A.: Superfamily Paguroidea

Vital Stats:

  • There are around 1100 species of hermit crabs in 120 genera
  • Range in size from only a few millimetres to half a foot (about 15 cm) in length
  • Some larger species can live for up to 70 years
  • Most species are aquatic, although there are some tropical terrestrial species

Found: Generally throughout the temperate and tropical oceans, in both shallow and deep areas (I was unable to find more specific data on this.)


It Does What?!

If there’s one thing nature loves, it’s symmetry. Sometimes radial symmetry, as we see in starfish or sea anemones; sometimes bilateral symmetry, as in mammals and insects, which have a right half and a left half. External asymmetry is extremely rare in living organisms, and when it does occur, it is generally minor, such as a bird species with a beak bent to the side, or a type of flower with oddly distributed stamens. One of the very few groups whose entire bodies lack symmetry is the gastropods; specifically, the snails. They develop helical shells, with asymmetrical bodies to match.

The shells also hide how ridiculous they look naked.
(Via: Wikimedia Commons)

But this post isn’t about snails. It’s one thing to evolve an unusual asymmetrical bodyplan to go with your asymmetrical home. It’s another to evolve an asymmetrical bodyplan to go with somebody else’s home. Which brings us to the hermit crab. When snails die in ways that leave behind perfectly good shells on the beach, these guys literally queue up for the chance to move in. Hermit crabs are part of the decapod order of crustaceans, as crabs are, but are not in fact true crabs, and unlike most other crustaceans, they lack any kind of hard, calcified plating on their abdomens (think shrimp shells). Instead, they have a soft, spirally curved lower body that fits perfectly into a snail shell, with muscles that allow them to clasp onto the interior of the shell. Paleobiologists have found that hermit crabs have been living in found shells for over 150 million years, and that they made the move to snail shells when their original shell-producer, the ammonite, went extinct. Living in shells has strongly restricted their morphological evolution: because their housing situation doesn’t allow much change, the crabs of aeons ago look pretty similar to the crabs of today.

Housing shortages hurt everyone.
(Via: Telegraph.co.uk)

Back to those line-ups I mentioned. Unoccupied snail shells are a limited resource, and an unarmoured crustacean is an easy lunch, so of course a lot of fighting goes on over them; crabs will actually gang up on an individual with a higher quality shell and just yank the poor bugger out. But it actually gets much more complex than that… these little pseudo-crabs aren’t as dim and thuggish as you might think. You see, as a hermit crab grows over the course of its life, it needs a series of progressively larger shells in which to live. A crab stuck in an undersized shell is stunted in its growth and is much more vulnerable to predation, since it can’t fully withdraw into its armour. The easiest way to find your next home? Locate a slightly larger hermit crab about to trade up and grab its shell afterward. This is how the crabs form what are called “vacancy chains.” A series of individuals will line themselves up in order of size (I’ve seen groups of schoolchildren unable to perform this task), waiting for hours sometimes, and as the largest crab moves to its new shell, each successive crab will enter the newly vacated one. Brilliant… new homes for everybody, and no one gets hurt. In fact, if a given crab chances upon a new shell that it judges to be too large for its current size, it will actually wait next to the shell for a larger crab to come along and a vacancy chain to form. That’s pretty impressive reasoning for a brain smaller than a pea.
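Just for fun, here’s a little Python sketch of the vacancy-chain logic, to show how a single new shell can upgrade an entire queue. The sizes and the simple “does it fit?” rule are invented for illustration; real crabs are judging shells on more than one number.

```python
def vacancy_chain(crab_sizes, new_shell_size):
    """Crabs queue up by size; the largest crab that fits the new shell moves in,
    and each smaller crab takes the shell vacated by the crab just above it.
    Returns a mapping of crab size -> shell size it ends up wearing."""
    crabs = sorted(crab_sizes, reverse=True)       # biggest crab first in the queue
    current_shells = [size - 1 for size in crabs]  # assume everyone starts a bit cramped
    available = new_shell_size                     # the shell that just washed up
    for i, size in enumerate(crabs):
        if available >= size:                      # this crab fits the available shell
            current_shells[i], available = available, current_shells[i]
    return dict(zip(crabs, current_shells))

print(vacancy_chain([10, 8, 6, 4], new_shell_size=12))
# {10: 12, 8: 9, 6: 7, 4: 5} -- one vacant shell, and every crab in the queue trades up
```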

[Fun Fact: Larger aquatic hermit crabs sometimes form symbiotic relationships with sea anemones; the anemone lives on the crab’s shell, protecting its host from predators with its deadly sting, while the crab shares its food with the gelatinous bodyguard.]

Today in Words You Didn’t Think Existed:
carcinisation / car·si·nə·ˈzā·shən / n.
a process by which an organism evolves from a non-crablike form into a crablike form.

That’s right, glossophiles, thanks to a British zoologist, we actually have a specific word for turning into a crab. English rules.

Says Who?

  • Angel (2000) J. of Experimental Marine Biology and Ecology 243: 169-184
  • Cunningham et al. (1992) Nature 355: 539-542
  • Fotheringham (1976) J. of Experimental Marine Biology and Ecology 23(3): 299-305
  • Rotjan et al. (2010) Behavioral Ecology 21(3): 639-646
  • Tricarico & Gherardi (2006) Behav. Ecol. Sociobiol. 60: 492-500
“Say hello to my little friend.”
(Via: dailykos.com)