Alien?

As a child, I was visited by an alien. I remember the sensation of not being able to move or speak and seeing this other-worldly face. Some time later, when I saw a documentary about people who had been visited by aliens, I felt a chill of recognition. Their experiences matched my own.

People all over the world describe the appearance of aliens in a similar way, and this is often cited as proof that they are among us. I think there is a different and more plausible explanation for the universality of this phenomenon.

The alien image above was generated automatically from a collection of human faces. How was that done? Basically, I train some “neurons” to capture as much information about human faces as possible. These neurons try to split up the work in an optimal way, and, robustly, one of them ends up representing the alien face. When you add this neuron together with a few other informative facial features, you can flexibly recognize many types of faces. Seeing this pattern causes a strong sense of recognition because it activates core features in our facial recognition circuitry. However, having this neuron fire by itself is unnatural: no human face would cause just this one pattern to fire without any accompanying ones. Therefore it also strikes us as alien.
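
Here is a minimal sketch of that idea, using PCA on the Olivetti faces as a stand-in for the trained “neurons” (the actual alien image came from a different, CorEx-based procedure, so this only illustrates the recipe):

```python
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA

# 400 grayscale face images, each 64x64 pixels
faces = fetch_olivetti_faces()

# Ten "neurons" that jointly capture as much variance in the faces as possible
pca = PCA(n_components=10).fit(faces.data)

# Display one component in isolation. No real face activates just a single
# component, which is part of what makes the lone pattern look uncanny.
plt.imshow(pca.components_[0].reshape(64, 64), cmap="gray")
plt.axis("off")
plt.show()
```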

Dreams and drugs cause random firing patterns that sometimes activate unusual combinations of our facial recognition circuitry. The universality of people’s perceptions of alien faces only reflects the universal principles underlying the circuits in our head. Now we just have to reconstruct these principles: the truth is out there (in here?).


Once upon a time, a boy from a farm in Iowa got an exciting opportunity to move to the west coast. While many new experiences awaited him there, he found himself imprisoned in a cage made of cars. After many years, he had never managed to escape the prison of cars to do simple things like learning to surf or exploring the natural beauty nearby. Later he realized that the prison was in his mind and driving to Joshua Tree is really not that big a deal. And it’s totally worth it.


In academic work, page restrictions in publications often mean that there is not enough space to explore interesting but tangential relationships between ideas, or to give more than bare-bones proofs of mathematical ideas. I am hoping to remedy this, at least initially, with a webcast series of three talks at ISI. I’ll post the links here. These are on the technical side; the eHarmony talk is a somewhat more general introduction.

Learning Succinct and Informative Representations: Background and Big Picture

This talk contains some background about information theory and a few famous ideas on how to use it for learning.

  • InfoMax   This famous principle says something very intuitive: a good representation should have maximal mutual information with the data. I briefly discuss why this is wrong. (The short version: maximizing mutual information is like memorizing, and memorizing is not the way to build powerful representations with layers of abstraction.) The objectives mentioned in this list are written out as formulas after the list.
  • Information decomposition   This is one of my favorite topics. I talk about some classic Venn diagrams in information theory, Partial Information Decomposition from Williams and Beer, and a classic result from Watanabe about how multivariate mutual information can be decomposed. This decomposition motivates an interpretation of CorEx as hierarchical information decomposition.
  • Information bottleneck  The principle behind the bottleneck is to lossily compress data in a way that minimizes some distortion measure. The original work focuses on supervised learning, with relevance to the labels acting as the (negative) distortion. CorEx can be viewed as compression with an unsupervised distortion measure, where we try to retain the most redundant information in the data.
  • Independent component analysis  This one was at the end and got short shrift. ICA also has a compression interpretation. CorEx finds successively less dependent components at each layer (same with the information sieve, which we’ve used for discrete ICA).
  • Generative models  A popular way to do learning is to assume some generative model and then fit parameters to maximize the likelihood of the data. This requires a lot of up-front assumptions. The perspective we take is the opposite: you say what type of computational structure you can support (i.e., calculating some probabilistic functions in parallel), and then optimize an informational objective with those resources. This doesn’t require model assumptions and, depending on the objective, has an operational meaning even if your model is mis-specified.
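
For reference, here are the objectives touched on above, collected in standard information-theoretic notation; the total correlation lines use the notation from the CorEx papers:

```latex
% InfoMax: maximize mutual information between data X and representation Y.
% Since I(X;Y) \le H(X), the optimum is a lossless copy of the data, i.e., memorization.
\max_{p(y \mid x)} \; I(X;Y)

% Information bottleneck: compress X into T while keeping T relevant to labels Y.
\min_{p(t \mid x)} \; I(X;T) - \beta \, I(T;Y)

% Total correlation (Watanabe's multi-information), and the amount of it
% explained by a representation Y, which is what CorEx maximizes.
TC(X) = \sum_i H(X_i) - H(X), \qquad TC(X;Y) = TC(X) - TC(X \mid Y)
```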

Part 2 will have a quick recap, filling in some things I missed in part 1. Then we’ll get into some in-depth derivations and implementation details about using CorEx to learn representations. Part 3 will get into the new directions (information sieve, temporal representations, …).

CorEx special case

This shows the Venn diagram for a special case where Y explains the correlation between two variables, X1 and X2. In that case, the objective reduces to the triple information, which is maximized when X1 and X2 are conditionally independent given Y.
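
In symbols, using the total correlation notation above:

```latex
% With two variables, TC(X_1, X_2) = I(X_1; X_2), so the objective being
% maximized reduces to the triple (interaction) information in the diagram:
TC(X_1, X_2; Y) = I(X_1; X_2) - I(X_1; X_2 \mid Y) = I(X_1; X_2; Y)

% When X_1 and X_2 are conditionally independent given Y, the conditional
% term vanishes and the objective attains its maximum value, I(X_1; X_2).
```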


I just put up a new paper with the (hopefully) intriguing title “The Information Sieve”. The motivation is that when we humans look at the world, we tend to identify a new pattern or learn a new trick, and then we move on to the next thing. There are two amazing things about this:

1. We don’t learn everything at once.

2. We don’t re-learn the same thing over and over again (usually).

These may seem inconsequential, but it turns out to be very difficult to get machines to learn in this way.

The information sieve introduces a new way of learning things piece by piece. There is some amount of information in whatever data we are looking at, but we don’t know how much (and it’s usually impossible to find out exactly, because of limited data and computation). We pass the data through the first layer of the sieve to extract the “most informative” pattern in the data; the data is then transformed, and the remaining information trickles down to the next layer of the sieve. This “remainder information” contains all the information from the original data except what was already learned. The result is incremental learning that is guaranteed to improve at each step and never duplicates effort by re-learning what is already known. (A toy sketch of the loop follows.)
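
To make the flow concrete, here is a toy, linear caricature of that loop, with PCA-style deflation standing in for the sieve’s information-theoretic extraction step (the actual sieve optimizes total correlation over discrete variables and constructs the remainder differently):

```python
import numpy as np

def sieve_layer(X):
    """Extract one 'most informative' linear factor and pass on the remainder."""
    _, _, Vt = np.linalg.svd(X - X.mean(axis=0), full_matrices=False)
    w = Vt[0]                        # direction capturing the most variance
    y = X @ w                        # the factor learned at this layer
    remainder = X - np.outer(y, w)   # the data, minus what was just learned
    return y, remainder, w

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))   # stand-in data
factors = []
for layer in range(5):               # each layer learns something new,
    y, X, w = sieve_layer(X)         # never re-learning what came before
    factors.append(w)
```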

The bonus eigen-faces below are not in the paper, but they show what the sieve extracts at various layers (when looking at a classic dataset called the Olivetti faces). Any face can “activate” any of these 10 learned factors. The blue/red shows how different pixels in the image contribute to whether that factor is activated. One seems to correspond to faces looking left or right (bottom, second from right). Others seem to focus on different parts of the face reflecting facial expressions. Anyway, there is more to be done to make this method practical on larger datasets, but this seems to be a promising first step. (The paper also shows how this method applies to lossy and lossless compression and independent component analysis, in case that is your bailiwick.)
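
The pictures below came from the sieve itself, but as a rough substitute the deflation caricature sketched above can render its own ten factors in the same blue/red style:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_olivetti_faces

X = fetch_olivetti_faces().data      # 400 faces, 64x64 pixels each
ws = []
for _ in range(10):                  # ten layers, ten factors
    _, X, w = sieve_layer(X)         # sieve_layer from the sketch above
    ws.append(w)

fig, axes = plt.subplots(2, 5, figsize=(10, 4))
for ax, w in zip(axes.ravel(), ws):
    ax.imshow(w.reshape(64, 64), cmap="bwr")   # blue/red pixel contributions
    ax.axis("off")
plt.show()
```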

Some eigen-faces learned with the information sieve.



I admit that this is a bit of a melodramatic title. I was actually a little surprised that, before I used it, the phrase “deep learning for insights” did not appear anywhere on Google. I gave a talk with this title at eHarmony, for the LA machine learning group. The video is posted here. The original announcement also has links to the slides.

The point of the title was that deep learning as we know it is amazing as a black box for certain types of prediction (e.g., object recognition in images), but if you feed in a messy dataset and then look inside the box, it is difficult to gain any understanding of the data. Generally, this is a consequence of the fact that the optimization is “global”: every hidden unit contributes collectively towards doing a better job of predicting labels. There is no reason to expect an individual hidden unit to have an interesting meaning. In contrast, for “maximally informative representations,” each layer and hidden unit makes a quantifiable contribution to the information the representation contains about the data.


I’ve been working on a series of posts about an exciting line of work I’m pursuing. The groundwork is in this paper. The basic idea is that anything we learn from inputs can be considered a representation. What would happen if we searched over the space of all representations for the one that is most informative about the inputs? It turns out we can do that efficiently, and it leads to a nice hierarchical structure that does a great job of learning from diverse data: gene expression, finance, language, human behavior, and more.
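
For the curious, here is a hedged usage sketch. It assumes the scikit-learn-style interface described in the open-source CorEx README (github.com/gregversteeg/CorEx); the module name, constructor argument, and attributes are taken from that README and may differ in current versions:

```python
import numpy as np
import corex as ce   # assumed module name, per the project README

X = np.random.randint(0, 2, size=(100, 20))   # toy binary data

layer1 = ce.Corex(n_hidden=5)    # learn 5 latent factors from the data
layer1.fit(X)

layer2 = ce.Corex(n_hidden=2)    # stack: feed layer 1's labels into layer 2
layer2.fit(layer1.labels)

print(layer1.tcs)   # total correlation explained by each factor (per README)
```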

I’m in the process of preparing a sequence of in-depth posts on this new direction, how it fits into the deep learning landscape, what the practical implications are, and the implications for understanding intelligence. In the meantime, here is another cute picture (produced with 1-click from 100 samples of hand-written digits with no other prior information and no hyper-parameters).

unsupervised digit clustering


A “real” blog update will be some time coming. In the meantime, there are some pretty pictures from preliminary results if you click around here.
The picture below is a cool result: our method took answers to survey questions (like “Are you the life of the party?”) and automatically discovered that five traits explain the correlations in how people answered. The result perfectly matched the “big 5” personality traits. We also got cool (preliminary) results using data from DNA, gene expression, and human language.

Big 5 personality traits