Teaser: learning with informative representations

10 Mar 2015

I’ve been working on a series of posts about an exciting line of work I’m pursuing. The groundwork is in this paper. The basic idea is that anything we learn from inputs should be considered a representation. What would happen if we searched over the space of all representations for the one that is most informative about the inputs? It turns out we can do that efficiently, and it leads to a nice hierarchical structure that does a great job of learning from diverse data, from gene expression to finance, language, and human behavior.
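To make "searching for the most informative representation" concrete, here is a toy sketch of my own (an illustration, not the paper's algorithm): brute-force over all one-bit labelings of a handful of binary samples, scoring each candidate representation by the mutual information it shares with each input variable, summed. The winning labeling recovers the correlated pattern in the data with no supervision.

```python
import itertools
import math
from collections import Counter

def mutual_information(xs, ys):
    """Empirical mutual information (in nats) between two discrete sequences."""
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    mi = 0.0
    for (x, y), c in pxy.items():
        p_joint = c / n
        mi += p_joint * math.log(p_joint / ((px[x] / n) * (py[y] / n)))
    return mi

# Toy data: six binary samples over three input variables.
# Inputs 0 and 1 are perfectly correlated; input 2 is mostly noise.
samples = [(0, 0, 1), (0, 0, 0), (1, 1, 0), (1, 1, 1), (0, 0, 1), (1, 1, 0)]

# Candidate representations: every possible one-bit labeling of the samples.
# Score each by the summed mutual information with the input variables --
# a crude proxy for "most informative about the inputs".
best_score, best_labels = -1.0, None
for labels in itertools.product([0, 1], repeat=len(samples)):
    score = sum(
        mutual_information(labels, [s[i] for s in samples]) for i in range(3)
    )
    if score > best_score:
        best_score, best_labels = score, labels

# The best labeling groups samples by the shared pattern in inputs 0 and 1.
print(best_labels)
```

Brute force over labelings is exponential, of course; the point of the actual work is that the same search can be done efficiently and stacked into a hierarchy.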

I’m in the process of preparing a sequence of in-depth posts on this new direction: how it fits into the deep learning landscape, what the practical implications are, and what it might mean for understanding intelligence. In the meantime, here is another cute picture (produced with one click from 100 samples of hand-written digits, with no other prior information and no hyperparameters).

unsupervised digit clustering


