Feature Selection Tutorial

In this Jupyter notebook, we’ll walk through the information-theoretic feature selection algorithms in PicturedRocks and demonstrate the interactive marker selection user interface.

If you are viewing this notebook inside the PicturedRocks documentation, the interactive marker selection tool will not work (it needs a Python backend to perform the computations). You can download this notebook from GitHub and run it on your own computer to try the interactive tool.

import numpy as np
import scanpy as sc
import picturedrocks as pr
adata = sc.datasets.paul15()
adata
WARNING: In Scanpy 0.*, this returned logarithmized data. Now it returns non-logarithmized data.
... storing 'paul15_clusters' as categorical
Trying to set attribute `.uns` of view, making a copy.
AnnData object with n_obs × n_vars = 2730 × 3451
    obs: 'paul15_clusters'
    uns: 'iroot'

The process_clusts method copies the cluster column and precomputes various indices, etc. If you have multiple columns that could be used as target labels (e.g., different treatments, clusters from different clustering algorithms or parameters, or demographics), this sets and processes the given column as the one we're currently examining.
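As a rough sketch of the kind of bookkeeping such a step precomputes, consider mapping a label column to integer codes and per-cluster row indices. (The names `num_clusts` and `clusterindices` mirror the `uns` keys shown below; the code itself is illustrative, not the PicturedRocks implementation.)

```python
import numpy as np

# Hypothetical sketch: integer codes for each cell's cluster label and,
# for each cluster, the row indices of its member cells.
labels = np.array(["Ery", "Neu", "Ery", "Mono", "Neu", "Ery"])
clusts, y = np.unique(labels, return_inverse=True)   # names, integer codes
num_clusts = len(clusts)
clusterindices = {k: np.flatnonzero(y == k) for k in range(num_clusts)}
print(num_clusts)         # 3
print(clusterindices[0])  # [0 2 5] -- the rows labeled "Ery"
```

Precomputing these once means downstream supervised tools can index cells by cluster without rescanning the label column.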

This is necessary for supervised analysis and visualization tools in PicturedRocks that use cluster labels.

pr.read.process_clusts(adata, "paul15_clusters")
AnnData object with n_obs × n_vars = 2730 × 3451
    obs: 'paul15_clusters', 'clust', 'y'
    uns: 'iroot', 'num_clusts', 'clusterindices'

The makeinfoset method creates a SparseInformationSet object with a discretized version of the data matrix. It is useful to have only a small number of discrete states that each gene can take so that entropy is a reasonable measurement. By default, makeinfoset performs an adaptive transform that we call a recursive quantile transform. This is implemented in pr.markers.mutualinformation.infoset.quantile_discretize. If you have a different discretization transformation, you can pass a transformed matrix directly to SparseInformationSet.
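To illustrate the idea behind discretization (this is a simple quantile binning, not the actual recursive quantile transform; see quantile_discretize for that), one gene's values can be binned into a few states by quantile cutoffs:

```python
import numpy as np

# Sketch: bin one gene's raw counts into a small number of discrete
# states using quantile cutoffs, so entropies are well behaved.
rng = np.random.default_rng(0)
x = rng.poisson(5.0, size=1000)                # one gene's raw counts
cutoffs = np.quantile(x, [0.25, 0.5, 0.75])    # three quantile cutoffs
x_disc = np.digitize(x, cutoffs)               # discrete states 0..3
print(sorted(np.unique(x_disc)))               # at most 4 states
```

A matrix whose columns have been discretized this way could then be passed to SparseInformationSet directly, as noted above.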

infoset = pr.markers.makeinfoset(adata, True)

Because this dataset only has 3451 features, it is computationally easy to do feature selection without restricting the number of features. If we wanted to, we could do either supervised or unsupervised univariate feature selection (i.e., without considering any interactions between features).

# supervised
mim = pr.markers.mutualinformation.iterative.MIM(infoset)
most_relevant_genes = mim.autoselect(1000)
# unsupervised
ue = pr.markers.mutualinformation.iterative.UniEntropy(infoset)
most_variable_genes = ue.autoselect(1000)
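
To sketch what the supervised univariate (MIM) score measures, here is the mutual information I(x_j; y) between one discretized gene and the cluster labels, computed from a contingency table in plain numpy. (Illustrative only; PicturedRocks computes these scores internally.)

```python
import numpy as np

def mutual_info(x, y):
    # I(x; y) in bits, from the empirical joint distribution of two
    # discrete vectors x (gene states) and y (cluster labels)
    joint = np.zeros((x.max() + 1, y.max() + 1))
    np.add.at(joint, (x, y), 1)
    p = joint / joint.sum()
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (px @ py)[nz])).sum())

x = np.array([0, 0, 1, 1, 0, 1])   # a perfectly informative "gene"
y = np.array([0, 0, 1, 1, 0, 1])   # cluster labels
print(mutual_info(x, y))           # 1.0 -- one full bit about y
```

Because MIM scores each gene independently of the others, it ignores redundancy between genes; that is exactly what the iterative objectives below address.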

At this stage, we can slice our adata object as adata[:,most_relevant_genes] or adata[:,most_variable_genes] and create a new SparseInformationSet object for the sliced object. We don't need to do that here since there are not many genes, but we will do so anyway for demonstration purposes.

Supervised Feature Selection

Let’s jump straight into supervised feature selection. Here we will use the CIFE objective.
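For reference, CIFE scores a candidate gene f against the already-selected set S as I(f; y) − Σ_{s∈S} [I(f; s) − I(f; s | y)]: relevance to the labels minus any redundancy with selected genes that the labels don't explain. A toy numpy version of this scoring rule (a sketch of the formula, not the PicturedRocks implementation):

```python
import numpy as np

def entropy(*cols):
    # joint entropy (bits) of one or more discrete columns
    _, counts = np.unique(np.stack(cols, axis=1), axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def mi(a, b):
    return entropy(a) + entropy(b) - entropy(a, b)

def cond_mi(a, b, y):
    # I(a; b | y) = H(a,y) + H(b,y) - H(a,b,y) - H(y)
    return entropy(a, y) + entropy(b, y) - entropy(a, b, y) - entropy(y)

def cife_score(f, y, selected):
    # relevance minus redundancy not explained by the labels
    return mi(f, y) - sum(mi(f, s) - cond_mi(f, s, y) for s in selected)

y = np.array([0, 0, 1, 1])
f = np.array([0, 0, 1, 1])        # informative candidate
print(cife_score(f, y, []))       # 1.0 -- fully relevant, nothing selected yet
print(cife_score(f, y, [f]))      # 0.0 -- redundant with an identical pick
```

This is why the rankings below reshuffle after each selection: every pick changes the redundancy terms for all remaining candidates.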

adata_mr = adata[:,most_relevant_genes].copy()
infoset_mr = pr.markers.makeinfoset(adata_mr, True)
cife = pr.markers.CIFE(infoset_mr)
top_genes = np.argsort(cife.score)[::-1]
np.sort(cife.score)[::-1][:20]
array([0.95022366, 0.93749845, 0.88470651, 0.86819372, 0.8634894 ,
       0.80903075, 0.75775072, 0.75361203, 0.71991963, 0.7106652 ,
       0.70321104, 0.6821289 , 0.67109598, 0.65202536, 0.65192364,
       0.6458561 , 0.64569101, 0.63526239, 0.62452935, 0.62346646])
adata_mr.var_names[top_genes[:10]]
Index(['Mpo', 'Prtn3', 'Ctsg', 'Car2', 'Elane', 'Car1', 'Klf1', 'Blvrb',
       'Ermap', 'Mt2'],
      dtype='object')

Let’s select ‘Mpo’

ind = adata_mr.var_names.get_loc('Mpo')
cife.add(ind)

Now, the top genes are

top_genes = np.argsort(cife.score)[::-1]
adata_mr.var_names[top_genes[:10]]
Index(['Car2', 'Car1', 'Gnb2l1', 'Fth1', 'Atpif1', 'AK158095', 'Ncl', 'Blvrb',
       'Rpl4', 'Atp5b'],
      dtype='object')

Observe that the order has changed based on redundancy (or lack thereof) with ‘Mpo’. Let’s add ‘Car1’.

ind = adata_mr.var_names.get_loc('Car1')
cife.add(ind)
top_genes = np.argsort(cife.score)[::-1]
adata_mr.var_names[top_genes[:10]]
Index(['Actb', 'Gpx1', 'Hsp90ab1', 'Ftl1', 'Ybx1', 'AK158095', 'Ncl', 'Rps3',
       'hnRNP A2/B1', 'Tuba1b'],
      dtype='object')

If we want to select the top gene repeatedly, we can use autoselect. Since we have already selected two genes, let’s select five more.

cife.autoselect(5)

To look at the markers we’ve selected, we can examine cife.S

cife.S
[0, 5, 187, 23, 49, 931, 306]
adata_mr.var_names[cife.S]
Index(['Mpo', 'Car1', 'Actb', 'H2afy', 'Hsp90ab1', 'Gpr56', 'Ly6e'], dtype='object')

User Interface

This process can also be done manually with a user interface, allowing you to incorporate domain knowledge into the process. Use the View dropdown to look at heat plots for candidate genes and already-selected genes.

Normalize per cell and log transform the data. We are doing this here only to generate familiar features; we do not recommend performing these transformations before makeinfoset.
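A minimal numpy sketch of those two transforms (in scanpy, the equivalent calls would be sc.pp.normalize_per_cell(adata_mr) followed by sc.pp.log1p(adata_mr)):

```python
import numpy as np

# Scale each cell (row) so its total count matches the mean total across
# cells, then apply log(1 + x). Sketch of normalize-per-cell + log1p.
X = np.array([[1.0, 3.0],
              [2.0, 6.0]])
totals = X.sum(axis=1, keepdims=True)
X_norm = X / totals * totals.mean()
X_log = np.log1p(X_norm)
print(X_norm.sum(axis=1))   # every cell now has the same total
```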

im = pr.markers.InteractiveMarkerSelection(adata_mr, cife, ['tsne', 'violin'])
Running tsne on cells...

Note that because we passed the same cife object, any genes added or removed in the interface will affect that cife object.

Index(['Mpo', 'Car1', 'Actb', 'H2afy', 'Hsp90ab1', 'Gpr56', 'Ly6e'], dtype='object')

Unsupervised Feature Selection

This works very similarly. In the example below, we’ll autoselect 5 genes and then run the interface. Note that although the previous section would not work without cluster labels, the following code will.

cife_unsup = pr.markers.CIFEUnsup(infoset)
cife_unsup.autoselect(5)

If you ran the example above, this will load faster because the t-SNE coordinates for genes and cells have already been computed.

im_unsup = pr.markers.interactive.InteractiveMarkerSelection(adata, cife_unsup, ["tsne"])
Running tsne on cells...

Binary Feature Selection

We can also perform feature selection specifically for individual class labels (e.g., clusters). This is done by changing the SparseInformationSet’s y array. In the example below, we will target the class label “2Ery”. Notice that the features selected by MIM (MIM doesn’t consider redundancy) are only those that are informative about “2Ery” in particular.

Binary (i.e., not multiclass) feature selection can be performed with any information-theoretic feature selection algorithm (e.g., CIFE, JMI, MIM).

# since we are changing y anyway, the value of include_y (True in the line below) doesn't matter
infoset2 = pr.markers.makeinfoset(adata, True)
infoset2.set_y((adata.obs['clust'] == '2Ery').astype(int).values)
mim2 = pr.markers.mutualinformation.iterative.MIM(infoset2)
im2 = pr.markers.interactive.InteractiveMarkerSelection(adata, mim2, ["violin"])