3D vision with the Pulfrich Effect

The Pulfrich Effect is an optical phenomenon where objects (or images) moving in a single plane can appear to be in 3D when the light reaching one eye is dimmed, e.g. with a filter. It also has a curious history – Carl Pulfrich (biography – pdf), who discovered the phenomenon, was blind in one eye and never observed it for himself, but nonetheless made many contributions to stereoscopy (the study of 3D vision) in both theory and the construction of apparatus.

Unlike other forms of stereoscopy, this only works with moving objects or animations; it does not work with still images! But what’s really cool is that you don’t need any special equipment to view it, beyond a piece of darkened glass or plastic to act as a filter. Videos exhibiting the Pulfrich effect can be viewed on a normal monitor or TV screen.

I’ve made my own JavaScript animations as demos for the Pulfrich effect (posted as GitHub Gists and rendered by bl.ocks.org):


Screenshot from my animated explanation of the Pulfrich Effect
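The code of my demos isn’t reproduced here, but the geometry behind the effect can be sketched in a few lines of JavaScript (the function names and viewing-geometry numbers below are my own illustrative assumptions, not taken from the demos). The filter delays the dimmed eye’s signal by some Δt, so a target moving horizontally at speed v acquires an apparent binocular disparity d = v·Δt, which the visual system interprets as depth:

```javascript
// Apparent disparity (cm) for a target moving at v (cm/s) when one
// eye's signal is delayed by dt (s), as under a Pulfrich filter.
function apparentDisparity(v, dt) {
  return v * dt;
}

// Perceived distance of the target, by similar triangles, for crossed
// disparity d (cm) on a screen at distance D (cm) with interocular
// separation I (cm). d = 0 leaves the target in the screen plane.
function perceivedDistance(D, I, d) {
  return (D * I) / (I + d);
}

// Example: 20 cm/s motion, 15 ms delay, screen at 60 cm, eyes 6.5 cm apart.
const d = apparentDisparity(20, 0.015); // 0.3 cm of disparity
const z = perceivedDistance(60, 6.5, d); // slightly nearer than the screen
```

With zero delay the disparity vanishes and the target stays in the screen plane; reversing the direction of motion flips the sign of d, which is why a back-and-forth animation appears to orbit in depth.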


Visualize metagenomes in a web browser

In my day job I work with metagenomes from animals and protists that have bacterial symbionts, and I’ve blogged here before about why visualizations are so useful to metagenomics (mostly to flog my own R package). However most existing tools, including my own, require that you install additional software and all the libraries that come with them, and also be familiar with the command line. That’s pretty standard these days for anyone who wants to do serious work with such data, but it can be a big hurdle for teaching. Time in the classroom is limited, and ideally we want to spend more time teaching biology than debugging package installation in R.

I’ve therefore written up a simple browser-based visualization for rendering coverage-GC% plots, called gbtlite. There’s no need to mess around with data structures in R, or to worry about how to install required packages for your operating system. The visualization uses the D3.js JavaScript library, which is popular for web applications. If you’ve played with infographics on the New York Times website, then you’ve probably seen something built with D3.
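gbtlite’s actual source isn’t shown here, but to give a flavor of what goes into the plot: one of the two axes in a coverage-GC% plot is simply the GC fraction of each contig. A minimal sketch in plain JavaScript (the function name is mine, not gbtlite’s):

```javascript
// GC% of a nucleotide sequence: G and C bases as a percentage of all
// unambiguous bases (A, C, G, T), case-insensitive.
function gcPercent(seq) {
  let gc = 0, total = 0;
  for (const base of seq.toUpperCase()) {
    if (base === "G" || base === "C") { gc++; total++; }
    else if (base === "A" || base === "T") { total++; }
    // ambiguous bases (N, etc.) are skipped
  }
  return total === 0 ? NaN : (100 * gc) / total;
}
```

In the plot itself, each contig becomes one point, with its GC% on one axis and its read coverage on the other; D3 then just binds that array of points to SVG circles.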


Plotting data as ratios? Think again

We are often interested in ratios between two quantities. As an example, let’s use data from a study on the sugar content of soft drinks, where the sugar content declared on the drink label was compared to the actual sugar content measured in the laboratory (Ventura et al. 2010, Obesity – pdf). The paper includes a nice table summarizing their measurements, which I have adapted to produce the plots shown here.

How can we present this data to get the most insight? In my opinion, presenting such data as ratios can obscure useful information; showing scatterplots of the two quantities can make it easier to spot patterns.
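A tiny numerical illustration of the point (the numbers are made up, not taken from Ventura et al.): two drinks that both measure 20% over their declared sugar content have identical ratios but very different absolute discrepancies; that information survives in a scatterplot of measured against declared values, and is lost in a plot of ratios alone.

```javascript
// Hypothetical drinks: declared vs. measured sugar content (g per serving).
const drinks = [
  { name: "soda A", declared: 10, measured: 12 },
  { name: "soda B", declared: 40, measured: 48 },
];

// Two views of the same data:
const ratios = drinks.map(d => d.measured / d.declared); // both 1.2
const excess = drinks.map(d => d.measured - d.declared); // 2 g vs. 8 g
```

Plotted as ratios, both drinks collapse onto the same value; plotted as measured against declared, soda B’s much larger absolute excess is immediately visible.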

New gbtools release (v2.5.2)

Instead of working, I’m procrastinating by fixing bugs in my software. The new version of gbtools includes two new features that improve the plotting of taxonomic markers, and some fixes to long-standing bugs. You can now adjust how many taxa are colored and included in the plot legend (thereby avoiding cluttered plots with too many colors to interpret), and also highlight individual taxa.

Wondering what gbtools is? Read my previous blog post, or the paper published last December.

Are your trees drawn correctly?

[Edited on 10 Feb to fix some errors and ambiguous wording pointed out by Lucas. Added text in blue – thanks Lucas!]

Most software used by academic scientists is made by other scientists and available for use free of charge, but the phrase caveat emptor – buyer beware – still applies. As end users, we trust these tools to do more or less what they say on the box, but this doesn’t always happen.

The Exelixis lab, makers of the popular phylogenetics tool RAxML, have recently released a preprint (bioRxiv) looking at whether phylogenetic tree-drawing software draws support values properly. Short version: they don’t always do it right! And these errors can, and have, crept into the published literature. (Dendroscope, one of the tools compared, released a bug fix soon after this article came out.) Coincidentally, I had met the lead author of the preprint, Lucas Czech, at a conference recently, but only came across this article when I was searching for something else online.

The main reason for the problem is, as with most problems in bioinformatics, different file formats. Support values can be written in a file either as properties of nodes or as properties of branches. If a tree file is formatted one way but the drawing program assumes the other, then support values can end up in the wrong location, especially if the tree is rerooted for drawing.


Branch labels vs node labels. Which is correct?

Support values should properly be considered properties of branches, not nodes. In case it isn’t completely clear why this should be the case, I’ve written a short explanation below.
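To make the ambiguity concrete: in Newick format a support value is written as an internal node label, e.g. the `95` in `((A,B)95,C);`, but by convention it describes the branch leading to that node. When the tree is rerooted, the node sitting at the end of a given branch can change, so a program that carries labels along with nodes instead of branches will print the support next to the wrong branch. A small sketch (my own helper, not code from the preprint) that pulls out the internal node labels a drawing program has to interpret:

```javascript
// Return the internal node labels of a Newick string: any run of
// characters immediately following a closing parenthesis. By convention
// these are support values for the branch above each internal node.
function internalLabels(newick) {
  return Array.from(newick.matchAll(/\)([^,():;]+)/g), m => m[1]);
}

const labels = internalLabels("((A,B)95,(C,D)80);"); // ["95", "80"]
```

Whether that `95` belongs to the branch above `(A,B)` or to the node itself is exactly the convention the preprint found tools disagreeing on.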


Standards of evidence

Science and the law, and their respective practitioners, may seem to be as different as chalk and cheese, but they are both very much concerned with the evaluation of evidence. Scientists like to think of themselves as dispassionately weighing the objective facts arising from their experiments and observations, and using these to validate existing theories or to propose new ones. However, as any practicing scientist knows, we don’t always apply the same standards when weighing evidence.

For example, my field, environmental microbiology, relies heavily on observations and measurements made on wild organisms, rather than experiments on cultivated ones. The positive side is that there is so much diversity in wild organisms that there’s always something new to discover. However, not having them in cultivation means that whole classes of experiments, such as making knockout mutants to study particular pathways, are simply not possible. If I wanted to demonstrate that a particular microbe I’m studying uses a certain metabolic pathway, I could marshal all sorts of indirect evidence: the presence of key genes in the genome, expression of the corresponding mRNAs, chemical measurements of metabolic compounds unique to that pathway, and so on. With a “lab rat” organism like E. coli, on the other hand, I would have a more direct route: showing that the key phenotype is affected in a targeted knockout, or cloning and heterologously expressing the gene. If I am working with such a “lab rat”, for which genetic manipulation is possible, the indirect evidence that was acceptable before would no longer be acceptable to most of my peers. They would instead demand the more stringent “gold standard”.

In American legal jargon, the “preponderance of evidence” is the burden of proof required for civil cases, whereas a stricter standard, “beyond a reasonable doubt”, applies to criminal ones. We sometimes hear people saying that scientists have “proven” this or that, but my impression, from biology at least, is that most scientific papers make their arguments from a preponderance of evidence rather than rigorous proof. In some types of experiments or analyses, it is possible to construct a formal statistical model to evaluate the probabilities. Does the preponderance standard correspond to P > 50%, as some sources suggest? And what is a reasonable doubt? If I am 99% sure, is that reasonable enough? Or is 95% sufficient? Rhetoric is important. That elusive quality, “relevance”, is conjured up by putting pieces of evidence in the frame of a larger narrative to hint at some deeper understanding in the works.

Does this mean that we should be stricter in what results we allow to be published, or that scientists should have to argue like prosecutors in a death-penalty case? I don’t think so. A single scientific project, whether on the scale of a PhD thesis or a large-scale collaboration like the Large Hadron Collider, is usually an accretionary process. The pieces of the puzzle come out one at a time, and quite often we slot them together wrongly in the beginning. Ideally, each step of the way we strive to reduce uncertainty. Artificial rigor would, in the words of the Street-Fighting Mathematician, induce rigor mortis, and would hinder scientific work rather than help it.

Make your own Lantern Globe

People have been drawing maps for a long time, and one of the biggest problems is figuring out how to squeeze a curved surface (the Earth) onto a flat piece of paper. There will always be some kind of distortion, and no single type of projection is perfect – the only question is which kind of distortion you want to minimize.

My girlfriend and I recently visited the Mathematisch-Physikalischer Salon, a collection of historical scientific instruments at the beautiful Zwinger building in Dresden. There was an exhibition on globes, and coincidentally I had been reading on the train ride there a very informative book on map projections, Map Projections: A Working Manual by John Snyder, published by the US Geological Survey (freely available as a pdf).

One exhibit was a toy globe which didn’t have the usual spherical shape – instead it looked somewhat like a Chinese lantern. Unfortunately I didn’t take a picture of the original, but below is a picture of my version. It has a cylindrical middle part, conical upper and lower parts, and is flat at the poles. An ugly shape, but its parts incidentally represent the three major families of map projections: cylindrical, conical, and azimuthal (the flat parts at the poles).


It’s a beautiful learning tool to illustrate the three types of projections and their relative strengths. Cylindrical projections are best suited for plotting areas near the equator. The Mercator projection (think Google Maps) is an example, and as you may recall, it is very distorted near the poles. Conical projections are suitable for intermediate latitudes. Many maps of the United States, for example, are conically projected. Azimuthal projections, such as most maps of Antarctica, are suitable for the poles.
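The three families also correspond to three simple sets of forward equations. Here is a sketch of one representative from each family, following the spherical formulas in Snyder’s manual (angles in radians; the choice of equirectangular, polar azimuthal equidistant, and equidistant conic is mine, picked for simplicity):

```javascript
const R = 6371; // Earth's mean radius in km; lon/lat in radians

// Cylindrical family: equirectangular, true scale at standard parallel phi1
function equirectangular(lon, lat, phi1 = 0) {
  return { x: R * lon * Math.cos(phi1), y: R * lat };
}

// Azimuthal family: polar azimuthal equidistant (north polar aspect);
// distances measured from the pole are preserved
function azimuthalEquidistant(lon, lat) {
  const rho = R * (Math.PI / 2 - lat);
  return { x: rho * Math.sin(lon), y: -rho * Math.cos(lon) };
}

// Conical family: equidistant conic with standard parallels phi1 < phi2,
// origin latitude taken as 0 for simplicity
function equidistantConic(lon, lat, phi1, phi2) {
  const n = (Math.cos(phi1) - Math.cos(phi2)) / (phi2 - phi1); // cone constant
  const G = Math.cos(phi1) / n + phi1;
  const rho = R * (G - lat);
  return { x: rho * Math.sin(n * lon), y: R * G - rho * Math.cos(n * lon) };
}
```

Each projection is accurate near its line of contact with the globe (the equator, a pole, or the standard parallels), which is why the lantern globe’s cylinder sits over the equator, the cones over the mid-latitudes, and the flat disks over the poles.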

I have previously blogged about an open-source cartography program called GMT (Generic Mapping Tools). This tool is great because it lets you plot maps in any projection you can imagine, add annotations, custom scales, etc. In this blog post I shall show how I made my own Three-Projection Globe (I don’t know the actual name), and provide a template that you can print out and build for yourself!

Template downloads:
