Information Transfer Between Humans

31Jan13

Can we measure the effect of one person’s words on another person? I want to describe the main idea behind my recent paper, Information-Theoretic Measures of Influence Based on Content Dynamics, which I’m presenting at WSDM 2013.

Ostensibly, information theory is a dry, academic subject about how to send 0’s and 1’s through a noisy channel (like browsing the web over your cellular signal). The goal is to encode the information so that all the bits can be recovered at the other end despite the noise. Shannon’s theory tells us that the amount of information we can send through a noisy channel is governed by a quantity called “mutual information”.

[Image: a noisy channel]

What does this have to do with influence, human speech, or social media? This abstract framework is remarkably flexible. What if the input is a statement made by Alice? Then the “noisy channel” consists of (e.g.) sound waves, Bob’s eardrum, and Bob’s brain, and Bob “outputs” some other statement. In the example below, Bob has said something very relevant to Alice’s statement: WSDM is in Rome, so Alice should definitely have some coffee while she’s there. Bob’s statement gives us some information about what Alice’s original statement was.

[Image: cappuccino]

If, on the other hand, Bob had proclaimed his love of borscht, it’s not obvious that this has anything to do with Alice’s statement. Clearly, Bob lives in his own universe, day-dreaming of borscht. His statements carry no information about Alice’s statements.

[Image: borscht]

How do we measure this notion of whether Bob’s statements carry information about Alice’s? We just use the standard information-theoretic notion of mutual information.

[Image: probabilities of statement pairs]
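For reference, mutual information is defined in terms of those pairwise probabilities. This is the standard textbook formula (not one reproduced from the paper), with X standing for Alice’s statements and Y for Bob’s:

I(X;Y) = \sum_{x,y} p(x,y) \log \frac{p(x,y)}{p(x)\,p(y)}

It is zero exactly when Bob’s statements are statistically independent of Alice’s, as in the borscht example.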

Unfortunately, this quantity requires knowing the probability of every pair of statements we might see Alice and Bob utter, while in practice we might have observed at best a few hundred statements from each of them (on Twitter, for example). We have to do two things:

1. Simplify our representation of these statements (or “reduce the dimensionality of the signal”). That means Alice’s statement is reduced to a few keywords or topics it might be about. (Some details: we use a popular technique called topic modeling to achieve this; a sketch follows this list.)

2. Estimate mutual information using these simplified signals. (Some details: we actually use a more nuanced measure called conditional mutual information, or transfer entropy, and we use non-parametric entropy estimators that avoid estimating high-dimensional probability distributions; see the second sketch below.)
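To make step 1 concrete, here is a minimal sketch of reducing tweets to topic vectors with LDA in gensim. This is not the paper’s code; the toy tweets, the number of topics, and the helper function are invented purely for illustration.

```python
import numpy as np
from gensim import corpora, models

# Toy, made-up "tweets", already tokenized -- purely illustrative.
tweets = [
    ["wsdm", "rome", "coffee", "cappuccino"],
    ["borscht", "beets", "soup", "dinner"],
    ["rome", "espresso", "conference", "talk"],
    ["soup", "recipe", "borscht", "sour", "cream"],
]

dictionary = corpora.Dictionary(tweets)
bow = [dictionary.doc2bow(t) for t in tweets]
lda = models.LdaModel(bow, num_topics=2, id2word=dictionary, passes=20, random_state=0)

def topic_vector(tokens):
    """Reduce one tweet to a dense vector of topic weights."""
    dist = lda.get_document_topics(dictionary.doc2bow(tokens), minimum_probability=0.0)
    vec = np.zeros(lda.num_topics)
    for topic_id, weight in dist:
        vec[topic_id] = weight
    return vec

print(topic_vector(["coffee", "rome"]))
```

Each tweet then becomes a short, dense vector of topic weights instead of raw, high-dimensional text.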
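And a rough sketch of step 2: transfer entropy from Alice to Bob written as a combination of entropies, each estimated with a plain Kozachenko–Leonenko k-nearest-neighbour estimator. The paper’s estimators are more careful than this, and real tweets are not evenly spaced in time, so treat the functions and parameters below as illustrative assumptions only.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import digamma

def knn_entropy(x, k=4):
    """Kozachenko-Leonenko kNN entropy estimate (in nats), using the max-norm."""
    x = np.asarray(x, dtype=float)
    n, d = x.shape
    # Distance to the k-th nearest neighbour, excluding the point itself.
    r = cKDTree(x).query(x, k=k + 1, p=np.inf)[0][:, -1]
    return digamma(n) - digamma(k) + d * np.mean(np.log(2 * r + 1e-12))

def transfer_entropy(x, y, k=4):
    """TE(X -> Y) = I(Y_t ; X_{t-1} | Y_{t-1}), written as a sum of entropies.

    x, y: arrays of shape (T, d), e.g. time-aligned topic vectors of Alice's
    and Bob's tweets (a strong simplifying assumption).
    """
    yt, yp, xp = y[1:], y[:-1], x[:-1]   # present of Y, past of Y, past of X
    return (knn_entropy(np.hstack([yt, yp]), k)
            + knn_entropy(np.hstack([yp, xp]), k)
            - knn_entropy(yp, k)
            - knn_entropy(np.hstack([yt, yp, xp]), k))

# Hypothetical usage: Bob's topic vector partly echoes Alice's previous one.
rng = np.random.default_rng(0)
alice = rng.normal(size=(200, 3))
bob = 0.5 * np.vstack([alice[:1], alice[:-1]]) + rng.normal(size=(200, 3))
print(transfer_entropy(alice, bob))   # positive: Alice's past helps predict Bob
```

A positive estimate means exactly what the next paragraph describes: Bob’s statements become more predictable once you know Alice’s recent ones.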

Surprisingly, we were able to carry this procedure out for some users on Twitter and detect signals in human speech! One way to understand this result is that we could find user pairs where Bob’s statements could be predicted better if we knew Alice’s recent statements. It will come as no surprise that human speech is nuanced and complex, so we were only able to detect very predictable relationships. For instance, one strongly connected triad of users was three trilingual sisters: if one of the sisters tweeted to the others in a specific language, the others would always respond in the same language. Other strong relationships included news dissemination and web site promotion. Can we do better and detect more complex forms of influence? Sure, but we need more data, better representations of content, or better entropy estimators. We’re working on all three!


