Friday, December 21, 2012

Computing an empirical pFDR in R

The positive false discovery rate (pFDR) has become a classical procedure for testing for false positives. It is one of my favourites because it relies on a re-sampling approach.

I base my implementation on John Storey's PNAS paper and the technical report he published with Rob Tibshirani while at Stanford [1-2] (I find the technical report much more didactic than the PNAS paper).

I will not describe here why, when considering multiple tests simultaneously, you need to control for multiple hypothesis testing (here is a link for that: Wikipedia). However, I will re-state, in the terms of Storey et al. [1], the definition of the pFDR:
The pFDR is the expected proportion of false positives among the significant features (# of false positives / # of significant features), conditional on there being at least one significant result. It can be written as:
pFDR = E[F/S|S>0]
That said, the probability of having at least one significant result is almost certain when we have lots of features (genes), meaning that the above equation can be re-written as
pFDR ~= FDR ~= E[F]/E[S].
What the pFDR measures is the probability that a feature is a false positive at that feature's p-value level. For example, gene A has a p-value of 0.07 and a pFDR of 0.0001. If we consider gene A significant, the chance that it is a false positive is very low. Storey wrote it like this:
"[...] the pFDR can be written as Pr(feature i is truly null | feature i is significant)[...]"
In short, this is a p-value for your p-value.
Now let's compute the pFDR for a concrete example. Here, we will use a genome-wide gene expression study comparing two groups of patients (genes as rows, patients as columns). But the scenario can be applied to any type of data (SNPs, proteins, patient values...) as long as you have class labels. That said, it is a tiny bit idiotic to implement an FDR test for gene expression data, as several R packages already provide this functionality, but the aim here is to understand how it works.

Practical thoughts to keep in mind regarding the FDR:
1. Because it relies on a random sampling procedure, results will never look exactly the same between runs. But increasing the number of random shufflings will make the results more and more similar.
2. It might be obvious to some, but it is worth noting that if you do not have groups (or an ordering) among your columns (= samples), you will not be able to shuffle the labels and thus cannot compute the FDR.

The work:
Starting from a normalized gene expression matrix generated like that:
From this small expression matrix of 2000 genes by 18 samples with two groups we will compute a p-value using a two-sided t-test.
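The original snippet is not reproduced here, so below is a minimal, self-contained sketch of such a matrix together with the per-gene two-sided t-test. The dimensions, the ER+/ER- group labels and the spiked-in signal are my own assumptions for illustration:

```r
set.seed(123)

## Simulated normalized expression matrix: 2000 genes x 18 samples,
## two groups of 9 (ER+ / ER-) -- sizes and labels are assumptions
n_genes   <- 2000
n_samples <- 18
labels    <- rep(c("ER+", "ER-"), each = n_samples / 2)

expr <- matrix(rnorm(n_genes * n_samples), nrow = n_genes,
               dimnames = list(paste0("gene", 1:n_genes),
                               paste0("s", 1:n_samples)))

## Spike a differential signal into the first 100 genes so something is found
expr[1:100, labels == "ER+"] <- expr[1:100, labels == "ER+"] + 1

## Two-sided t-test p-value for every gene
pvals <- apply(expr, 1, function(x)
  t.test(x[labels == "ER+"], x[labels == "ER-"])$p.value)
```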
We obtain a vector of p-values that we will use to derive the q-values. To do that, we need to estimate the number of false positives (pFDR ~= E[F]/E[S]).
We will achieve that by re-computing the p-values using shuffled class labels, to see whether by random chance alone we can obtain p-values lower than with the original class labels. The shuffling will re-label some ER+ samples as ER- and vice versa.
In a way, we aim at estimating the robustness of the p-value we obtained.
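The shuffling step can be sketched as follows. This is a simplified, self-contained version (it re-creates a small toy matrix so it runs on its own; the number of permutations and the conservative assumption pi0 = 1 are mine, not Storey's full procedure):

```r
set.seed(42)

## Toy normalized expression matrix: 500 genes x 18 samples, two groups
labels <- rep(c("ER+", "ER-"), each = 9)
expr   <- matrix(rnorm(500 * 18), nrow = 500)
expr[1:50, labels == "ER+"] <- expr[1:50, labels == "ER+"] + 1.5

tt_pvals <- function(mat, cls)
  apply(mat, 1, function(x)
    t.test(x[cls == "ER+"], x[cls == "ER-"])$p.value)

pvals <- tt_pvals(expr, labels)

## Empirical pFDR: estimate E[F] by re-computing the p-values
## B times with shuffled class labels
B <- 50
null_p <- replicate(B, tt_pvals(expr, sample(labels)))

pfdr <- sapply(pvals, function(p) {
  EF <- sum(null_p <= p) / B     # expected # of false positives at cutoff p
  ES <- max(sum(pvals <= p), 1)  # observed # of significant features
  min(EF / ES, 1)                # pFDR ~= E[F]/E[S], capped at 1
})
```

Genes whose signal survives the label shuffling get a low pFDR; genes whose small p-values are easily matched by chance do not.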

Friday, September 21, 2012

Religious restrictions index: how do countries compare?

The Guardian DataBlog published yesterday an interesting article exploring religious intolerance across the world graphically. The data come from a report published by the Pew Research Center's Forum on Religion and Public Life. I really like the DataBlog's philosophy of providing the raw data for everyone to look at.
However, I felt that the visualization could be improved. First, the data are longitudinal, yet no temporal representation is provided. So I downloaded the Google Spreadsheet and worked on it in R with googleVis, the R API to the Google chart library.
The data are composed of two indices:

  • The Government Restrictions Index (GRI) [measures government laws, policies and actions that restrict religious beliefs or practices]
  • The Social Hostilities Index (SHI) [measures acts of religious hostility by private individuals, organizations and social groups]

The R code is the following:
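The original code block is missing here; a minimal reconstruction with googleVis might look like this (the data frame below is a made-up stand-in for the Pew spreadsheet, and the chart call is guarded so the sketch runs even without googleVis installed):

```r
## Stand-in for the Pew data: country x year with the two indices
pew <- data.frame(
  country = rep(c("France", "India", "Brazil"), each = 3),
  year    = rep(2007:2009, times = 3),
  GRI     = c(2.1, 2.4, 2.6, 5.0, 5.3, 5.7, 1.0, 1.2, 1.1),
  SHI     = c(1.5, 1.8, 2.0, 6.8, 7.2, 7.5, 0.5, 0.6, 0.7))

if (requireNamespace("googleVis", quietly = TRUE)) {
  ## Motion chart: one bubble per country, animated over the years
  m <- googleVis::gvisMotionChart(pew, idvar = "country", timevar = "year",
                                  xvar = "GRI", yvar = "SHI")
  # plot(m)  # opens the interactive chart in the browser
}
```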

I find this better for exploring the data: select a country of interest and follow it through time.

Tuesday, July 31, 2012

Twitter analysis of air pollution in Beijing

One of the air pollution monitoring stations in Beijing (at the American Embassy) is connected to Twitter and tweets about the air quality in real time. By default, the machine outputs the 24-hour summary of PM2.5 air pollution. What PM2.5 is is defined here.
The next step will be to compare the pollution levels between cities such as LA and Beijing. But it turns out the air quality data for California are not so easy to get programmatically.

Here is the code I used to produce this analysis:
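The original code is not reproduced here. The core of such an analysis is parsing the @BeijingAir tweets into a time series; below is a sketch of that step (the tweet format shown is an assumption based on the feed's typical output, and the fetching step, which originally went through the twitteR package, is left as a comment since that API has since been retired):

```r
## Typical @BeijingAir tweet text (format assumed from the feed's output):
##   "07-31-2012 08:00; PM2.5 24hr avg; 83.6; 165; Unhealthy"
parse_beijing_air <- function(tweets) {
  fields <- strsplit(tweets, ";[ ]*")
  data.frame(
    time  = as.POSIXct(sapply(fields, `[`, 1), format = "%m-%d-%Y %H:%M"),
    conc  = as.numeric(sapply(fields, `[`, 3)),  # PM2.5 concentration (ug/m3)
    aqi   = as.numeric(sapply(fields, `[`, 4)),  # air quality index
    label = sapply(fields, `[`, 5),
    stringsAsFactors = FALSE)
}

## Fetching would have looked something like (twitteR, now retired):
## tweets <- twitteR::userTimeline("BeijingAir", n = 1500)
## txt    <- sapply(tweets, function(t) t$getText())
air <- parse_beijing_air(c(
  "07-31-2012 08:00; PM2.5 24hr avg; 83.6; 165; Unhealthy",
  "07-30-2012 08:00; PM2.5 24hr avg; 35.2; 99; Moderate"))
```

From there, plotting `conc` against `time` gives the pollution trend.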

Saturday, June 9, 2012

Rcpp vs. R implementation of cosine similarity

While speeding up some code the other day for a project with a colleague, I ended up trying Rcpp for the first time. I re-implemented the cosine similarity function with RcppArmadillo relatively easily, using bits and pieces of code found scattered around the web. But the speed increase was not as large as I expected when comparing the Rcpp code to pure R.
And here is the speed comparison...
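For reference, a pure-R cosine similarity over matrix columns and an RcppArmadillo counterpart might look like the following. This is my reconstruction, not the exact code from the project, and the C++ part is guarded so the sketch still runs without a compiler:

```r
## Pure R: cosine similarity between all columns of X via crossprod (BLAS)
cosine_r <- function(X) {
  cp  <- crossprod(X)    # t(X) %*% X: all pairwise dot products
  nrm <- sqrt(diag(cp))  # column norms
  cp / tcrossprod(nrm)
}

## RcppArmadillo version (compiled only if the toolchain is available)
if (requireNamespace("RcppArmadillo", quietly = TRUE))
  try(Rcpp::cppFunction(depends = "RcppArmadillo", '
    arma::mat cosine_cpp(const arma::mat& X) {
      arma::mat cp  = X.t() * X;
      arma::vec nrm = arma::sqrt(cp.diag());
      return cp / (nrm * nrm.t());
    }'), silent = TRUE)

X <- matrix(rnorm(1000 * 50), ncol = 50)
## microbenchmark::microbenchmark(cosine_r(X), cosine_cpp(X))
```

Since `crossprod` already calls optimized BLAS, the pure-R version does most of its work in compiled code anyway, which would be consistent with a modest Rcpp speed-up.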

Friday, June 8, 2012

A new approach to discover pain related genes

Our latest paper in PLoS Computational Biology is out.
The project spanned two years, from the end of my first year of postdoctoral training until now. It has been a truly collaborative endeavor across institutions but also across sub-disciplines: using text-mining, leveraging public genomic data across diseases, and genotyping a human twin cohort subjected to experimental pain. A big thanks to all my collaborators.

Briefly, we successfully demonstrated that ranking diseases by pain level using a literature co-citation approach, and then extracting the genes whose expression changes are associated with this ranking, leads to interesting new pain gene candidates.
The beauty of the approach is that it can be applied to concepts other than pain. For example, we show in the paper that we can significantly prioritize genes involved in inflammation in a similar fashion.

Sunday, June 3, 2012

Obtaining a protein-protein interaction network for a gene list in R

Building a network of interactions between a bunch of genes can help a great deal in understanding the relationships between the seemingly disparate elements of your list. It can seem challenging at first to build such a network, but it is less complicated than it looks. Here is the approach I use.

Resources for obtaining interaction information are numerous. Logically, we would go for the central repository if it existed. Unfortunately, for protein-protein interactions (PPI) there are several (IntAct, BioGRID, HPRD, STRING...).
Using the APIs developed for these repositories would require time, and we usually don't have it. Fortunately, the gene web pages from NCBI Entrez Gene compile interactions from BioGRID and HPRD, which seems like a reasonable and robust compromise. And on the other hand, we can use the XML package to parse the web pages.

First, we need a gene list; here I refer you to an earlier post where we extracted a list of 274 significantly differentially regulated genes.
Using the following little function, you can scrape the interaction table from the NCBI web page.
[update: corrected bug where some genes returned an error]
Here is a quick example with the first 20 genes from my list. You obtain your edge list in the form of a data.frame. The NCBI2R package provides a similar function but there is a bug in GetInteractions().
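The scraping function itself did not survive into this copy of the post; a sketch of the idea, based on XML::readHTMLTable applied to the Entrez Gene page, is below. The URL pattern and the table-selection heuristic are assumptions, and current NCBI pages may no longer expose the table in plain HTML:

```r
## Build the Entrez Gene page URL for a gene id (pattern is an assumption)
entrez_gene_url <- function(gene_id)
  paste0("http://www.ncbi.nlm.nih.gov/gene/", gene_id)

## Scrape the interaction table from a gene page and return an edge list
## (requires the XML package at call time; network access needed)
get_interactions <- function(gene_id) {
  tables <- tryCatch(
    XML::readHTMLTable(entrez_gene_url(gene_id), stringsAsFactors = FALSE),
    error = function(e) NULL)
  if (is.null(tables)) return(NULL)
  ## keep the table whose headers mention 'Interactant' (heuristic)
  hit <- Filter(function(tb) any(grepl("Interactant", names(tb))), tables)
  if (length(hit) == 0) return(NULL)  # some genes have no interactions
  hit[[1]]
}
```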

You can write this dataframe to a text file and import it in Cytoscape directly but you can also display and work your network directly in R using the igraph package.
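Displaying such an edge-list data.frame with igraph can be sketched as follows (the toy edge list is illustrative, not my actual results):

```r
## Toy edge list in the same shape as the scraped interaction data.frame
edges <- data.frame(
  from = c("TP53", "TP53", "MDM2", "BRCA1"),
  to   = c("MDM2", "EP300", "EP300", "TP53"),
  stringsAsFactors = FALSE)

if (requireNamespace("igraph", quietly = TRUE)) {
  g <- igraph::graph_from_data_frame(edges, directed = FALSE)
  plot(g, vertex.size = 25, vertex.label.color = "black")
}
```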

The network is simple and not fully connected, but consider that we obtained interactions for only 5 of the 20 genes here.

Sunday, May 20, 2012

Another look at over-representation analysis interpretation

Interpreting a list of differentially regulated genes can take many forms. One of the most widely used methods is to look for enrichment of functional groups of genes compared to a random sampling of genes from the same universe, namely an over-representation analysis (ORA).

The point I want to explore today is: what is the best way to interpret the results of an ORA?
The list of GO categories one obtains often tells a complex message and leaves us with the uneasy feeling that we are cherry-picking the categories that best fit our hypothesis.

Let's have a look at an example. First, I extract a gene list from a publicly available experiment in Gene Expression Omnibus. I use GEOquery for that and obtain a list of 274 up- and down-regulated genes (code at the end).

From this gene list we can perform a GO ORA fairly easily using the GOstats package. I combined all the necessary steps into two functions (GO_over.r and write.GOhyper.r) that you can find on my GitHub repo. I usually download the functions directly from my R session using this function (copy and paste it into your R session, or save it to a file called source_https.r):
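The idea behind such a source_https.r is essentially the following (a reconstruction using RCurl; at the time, base source() could not read https URLs directly):

```r
## source() one or more R files over https
## (RCurl is needed only when the function is actually called)
source_https <- function(url, ...) {
  sapply(c(url, ...), function(u) {
    txt <- RCurl::getURL(u, followlocation = TRUE,
                         cainfo = system.file("CurlSSL", "cacert.pem",
                                              package = "RCurl"))
    eval(parse(text = txt), envir = globalenv())
  })
  invisible()
}

## Usage (paths elided):
## source_https("https://raw.github.com/.../GO_over.r",
##              "https://raw.github.com/.../write.GOhyper.r")
```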
Here we are presented with a table of 59 GO categories that are all significant after multiple hypothesis testing correction: cell adhesion, generation of neurons, cellular response to interferon-beta...

How to interpret this list?
One way to do that is to display the Directed Acyclic Graph (DAG) of the over-represented GO categories. But in my opinion it is difficult to get the big picture from such a representation. We know that GO categories (and, to a lesser extent, pathways) share common genes. My hypothesis is that visualizing the relationships between GO categories based on the number of genes they share will help to interpret the results. So what I do, in addition, is visualize the number of genes shared between GO categories by plotting the results of the ORA as a heatmap (code below the plot).
Rows and columns are GO categories. The color of each square represents the percentage of genes shared between any two categories. Here we see that our gene list (274 genes) seems to preferentially contain genes from three ensembles of GO categories, shown in yellow along the diagonal. Based on this observation, we can interpret that the main events going on in these cells seem to be linked to regulation of metabolism, cytoskeleton re-organization and neuron development. This makes sense when you consider that we compared iPS cells to neurosphere cells.

I welcome comments about this approach (in fact, that is the purpose of this post). I would argue that such a representation of a GO ORA is complementary to displaying a flat text table and plotting the DAG. Has anybody already used this approach to interpret a GO ORA? Or does anyone have a better solution?
I acknowledge that it is not a perfect solution. For example, if a category does not share many genes with the others, that does not mean it is not worth investigating. It might even be the key to understanding the biological experiment, but there are a lot of those categories... which one to pick? Plus, I think a GO ORA does not aim at a fine-grained analysis but at a global overview of the events.

Here is the code to produce the heatmap:
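The original snippet is absent from this copy, so here is a sketch of the computation. The toy category-to-gene mapping is illustrative (the real one comes from the GOstats results), and "percentage of genes shared" is computed here as a Jaccard percentage, one of several reasonable choices:

```r
## Toy mapping: GO category -> genes from the significant list
go_genes <- list(
  "cell adhesion"          = c("g1", "g2", "g3", "g4"),
  "neuron development"     = c("g3", "g4", "g5"),
  "response to interferon" = c("g6", "g7"),
  "metabolic regulation"   = c("g6", "g7", "g8"))

## Percentage of genes shared between every pair of categories
n <- length(go_genes)
shared <- matrix(0, n, n, dimnames = list(names(go_genes), names(go_genes)))
for (i in 1:n) for (j in 1:n)
  shared[i, j] <- 100 * length(intersect(go_genes[[i]], go_genes[[j]])) /
                        length(union(go_genes[[i]], go_genes[[j]]))

## Draw the heatmap (to a file, so the sketch also runs non-interactively)
pdf(tempfile(fileext = ".pdf"))
heatmap(shared, scale = "none", margins = c(12, 12),
        col = colorRampPalette(c("blue", "white", "yellow"))(64))
dev.off()
```

The row/column clustering done by heatmap() is what pulls related categories together into the yellow blocks along the diagonal.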

Thursday, May 17, 2012

Installing EUtils perl module

In my recent post on Using R to graph a subject trend in PubMed I used the EUtils Perl module. There are detailed general instructions on how to install Perl modules here for all major OSes. Here is what I did on my Mac.
I downloaded the archive and ran these commands:
tar -xzf TGen-EUtils-0.13.tar.gz
cd TGen-EUtils-0.13
## instruction are in the INSTALL file
perl Makefile.PL
make test
sudo make install
And that's it folks.

Tuesday, May 15, 2012

Using R to graph a subject trend in PubMed

The traditional way to show that your topic is worth studying in front of an audience is to show the state of the field based on a literature review. This is especially true if your subject is obscure except to a handful of scientists in the world.
I was confronted with this problem more than once and the last time I decided to plot the state-of-the-field using a few scripts.
I wrote three scripts for that: pubmed_trend.r takes your PubMed query and sends it to the NCBI using the EUtils tools (a Perl script); then I plot the results. The details of the scripts are below, but here is how you create your trend.
In this example, we plot the trend in the number of publications per year for papers annotated with the MeSH terms "sex characteristics" and "pain", and compare this search to the number of publications/year for "sex characteristics" and "Analgesics". We will run this search between 1970 and 2011. And here is the plot.
What we see here is that the number of publications per year on sex differences and pain or analgesics is growing, but it is still small and more research is needed.
...and you are good to go; your talk is launched.

Here are the details of the scripts and functions. pubmed_trend.r takes a PubMed query string as you would type it in the search box of the web interface (spaces have to be replaced by '+').
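The scripts themselves live on the repo; the core idea of pubmed_trend.r can be sketched as follows. This reconstruction queries the NCBI E-utilities esearch endpoint directly over HTTP with `rettype=count`, rather than going through the Perl script:

```r
## Build an esearch URL returning only the hit count for one year
esearch_url <- function(query, year)
  sprintf(paste0("http://eutils.ncbi.nlm.nih.gov/entrez/eutils/",
                 "esearch.fcgi?db=pubmed&rettype=count&term=%s+AND+%d[PDAT]"),
          query, year)

## Count PubMed hits per year for a query (spaces replaced by '+')
pubmed_trend <- function(query, years)
  sapply(years, function(y) {
    xml <- paste(readLines(esearch_url(query, y), warn = FALSE),
                 collapse = "")
    as.numeric(sub(".*<Count>(\\d+)</Count>.*", "\\1", xml))
  })

## Usage (requires network access):
## counts <- pubmed_trend("sex+characteristics[MeSH]+AND+pain[MeSH]", 1970:2011)
## barplot(counts, names.arg = 1970:2011, las = 2)
```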

Tuesday, May 8, 2012

Hello world

The Brain Chronicle blog is an attempt to share with the R and scientific community at large some methods, recipes and other thoughts that emerge from my day-to-day work as a computational biology researcher.