Q: What’s the relationship between entropy in the information-theory sense and the thermodynamics sense?

Physicist: The term “Entropy” shows up both in thermodynamics and information theory, so (since thermodynamics called dibs), I’ll call thermodynamic entropy “entropy”, and information theoretic entropy “information”.

I can’t think of a good way to demonstrate intuitively that entropy and information are essentially the same, so instead check out the similarities!  At their core, they both answer the question “how hard is it to describe this thing?”.  In fact, unless you have a mess of time on your hands, just go with that.  For those of you with some time, here’s a post that turned out to be longer than it should have been:


Entropy!) Back in the day a dude named Boltzmann found that heat and temperature didn’t effectively describe heat flow, and that a new variable was called for.  For example, all the air in a room could suddenly condense into a ball, which then bounces around with the same energy as the original air, and conservation of energy would still hold up.  The big problem with this scenario is not that it violates any fundamental laws, but that it’s unlikely (don’t bet against a thermodynamicist when they say something’s “unlikely”).  To deal with this, Boltzmann defined entropy.  Following basic probability, the more ways that a macrostate (things like temperature, wind blowing; “big” stuff with lots of molecules) can happen, the more likely it is.  The individual configurations (atom 1 is exactly here, atom 2 is over here, …) are called “microstates”, and as you can imagine a single macrostate, like a bucket of room-temperature water, is made up of a hell of a lot of microstates.

Now if a bucket of water has N microstates, then 2 buckets will have N^2 microstates (1 die has 6 states, 2 dice have 36 states).  But that’s pretty tricky to deal with, and it doesn’t seem to be what nature is concerned with.  If one bucket has entropy E, you’d like two buckets to have entropy 2E.  Here’s what nature seems to like, and what Boltzmann settled on: E = k log(N), where E is entropy, N is the number of microstates, and k is a physical constant (k is the Boltzmann constant, but it hardly matters; it changes depending on the units used and the base of the log).  In fact, Boltzmann was so excited about his equation and how well it works that he had it carved into his headstone (he used different letters, so it reads “S = k \cdot \log{(W)}”, but whatever).  The “log” turns the “squared” into “times 2”, which clears up that problem.  Also, the log can be in any base, since changing the base would just change k, and it doesn’t matter what k is (as long as everyone is consistent).
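
To see why the log is the right choice, here’s a quick Python sketch (purely illustrative; k is set to 1 and the variable names are made up): microstate counts multiply, but entropies add.

```python
import math

k = 1.0  # stand-in for Boltzmann's constant; the units (and log base) don't matter here

def entropy(microstates):
    """Boltzmann's E = k log(N)."""
    return k * math.log(microstates)

one_die = 6        # a single die has 6 microstates
two_dice = 6 * 6   # microstate counts multiply...

print(entropy(one_die))      # ~1.79
print(entropy(two_dice))     # ~3.58, exactly twice the entropy of one die
print(2 * entropy(one_die))  # ...but entropies add: the log turns "squared" into "times 2"
```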

This formulation of entropy makes a lot of sense.  If something can only happen in one way, it will be unlikely and have zero entropy.  If it has many ways to happen, it will be fairly likely and have higher entropy.  Also, you can make very sensible statements with it.  For example: water expands by a factor of around 1000 when it boils, so each molecule suddenly has about 1000 times as many places to be, and the entropy jumps accordingly (by about k log(1000) per molecule).  That’s why it’s easy to boil water in a pot (it increases entropy), and it’s difficult to condense water in a pot (it decreases entropy).  You can also say that if the water is in the pot then the position of each molecule is fairly certain (it’s in the pot), so the entropy is low, and when the water is steam then the position is less certain (it’s around here somewhere), so the entropy is high.  As a quick aside, Boltzmann’s entropy assumes that all microstates have the same probability.  It turns out that’s not quite true, but the microstates you’re overwhelmingly likely to actually see all have nearly the same probability, so they may as well all have the same probability.
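
For the boiling example, here’s a back-of-the-envelope sketch (assuming the only thing that changes is the amount of room each molecule has to rattle around in, which is far from the whole story):

```python
import math

k = 1.380649e-23   # Boltzmann's constant, in J/K
avogadro = 6.022e23

# Boiling expands the water ~1000-fold, so each molecule has ~1000 times as many
# places to be.  The microstate count picks up a factor of 1000 per molecule,
# and E = k log(N) picks up k*log(1000) per molecule.
delta_S_per_molecule = k * math.log(1000)
print(delta_S_per_molecule)             # ~9.5e-23 J/K per molecule

# For a mole of water, the expansion alone contributes roughly
print(delta_S_per_molecule * avogadro)  # ~57 J/K per mole
```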


Information!) In 1948 a dude named Shannon (last name) was listening to a telegraph line and someone asked him “how much information is that?”.  Then information theory happened.  He wrote a paper worth reading that can be understood by anyone who knows what a “log” is and has some patience.

Say you want to find the combination of a combination lock.  If the lock has 2 digits, there are 100 (10^2) combinations, if it has 3 digits there are 1000 (10^3) combinations, and so on.  Although a 4 digit code has a hundred times as many combinations as a 2 digit code, it only takes twice as long to describe.  Information is the log of the number of combinations.  So I = \log_b{(N)}, where I is the amount of information, N is the number of combinations, and b is the base.  Again, the base of the log can be anything, but in information theory the standard is base 2 (this gives you the amount of information in “bits”, which is what computers use).  Base 2 gives you bits, base e (the natural log) gives you “nats”, and base \pi gives you “slices”.  Not many people use nats, and nobody ever uses slices (except in bad jokes), so from now on I’ll just talk about information in bits.
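
A quick sketch of the lock arithmetic (illustrative Python, nothing fancy):

```python
import math

def information_bits(combinations):
    """I = log2(N): bits needed to single out one combination out of N."""
    return math.log2(combinations)

for digits in (2, 3, 4):
    n = 10 ** digits
    print(f"{digits} digits: {n} combinations, {information_bits(n):.2f} bits")

# 4 digits has 100 times the combinations of 2 digits,
# but only twice the information: 13.29 bits vs 6.64 bits.
```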

So, say you wanted to send a message and you wanted to hide it in your padlock combination.  If your padlock has 3 digits you can store I = log2(1000) = 9.97 bits of information (10 full bits would require 1024 combinations).  Another good way to describe information is “information is the minimal number of yes/no questions you have to ask (on average) to determine the state”.  So for example, if I think of a letter at random, you could ask “Is it A?  Is it B? …” and it would take 13 questions on average, but there’s a better method.  You can divide the alphabet in half, then again, and again until the letter is found.  So a good series of questions would be “Is it A to M?”, and if the answer is “yes” then “Is it A to G?”, and so on.  It should take about log2(26) = 4.70 questions on average, so it takes about 4.7 bits to describe each letter.
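
If you want to check the “divide the alphabet in half” strategy, here’s a small sketch (the function name is made up; it just counts questions of the form “is it in the first half of what’s left?”):

```python
import string

def questions_to_find(target, alphabet=string.ascii_uppercase):
    """Count the yes/no questions needed to pin down a letter by repeated halving."""
    candidates = list(alphabet)
    questions = 0
    while len(candidates) > 1:
        half = len(candidates) // 2
        questions += 1                      # "Is it one of the first half?"
        if target in candidates[:half]:
            candidates = candidates[:half]
        else:
            candidates = candidates[half:]
    return questions

counts = [questions_to_find(letter) for letter in string.ascii_uppercase]
print(sum(counts) / len(counts))  # ~4.77 questions on average, just above the
                                  # log2(26) = 4.70 lower bound (26 isn't a power of 2)
```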

In thermodynamics every state is as likely to come up as any other.  In information theory, the different states (in this case the “states” are letters) can have different likelihoods of showing up.  Right off the bat, you’ll notice that z’s and q’s occur rarely in written English (this post has only 4 “non-Boltzmann” z’s and 16 q’s), so you can estimate that the amount of information in an English letter should be closer to log2(24) = 4.58 bits.  Shannon figured out that if you have N “letters” and the probability of the first letter is P1, of the second letter is P2, and so on, then the information per letter is I = \sum_{i=1}^N P_i \log_2{\left(\frac{1}{P_i}\right)}.  If all the probabilities are the same, then this summation reduces to I = log2(N).
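
Here’s Shannon’s formula in Python, with the probabilities estimated from whatever text you feed it (the pangram below is just a stand-in; a real estimate of English wants a lot more text):

```python
from collections import Counter
from math import log2

def bits_per_letter(text):
    """I = sum_i P_i * log2(1/P_i), with the P_i estimated from letter counts."""
    letters = [c for c in text.upper() if c.isalpha()]
    counts = Counter(letters)
    total = len(letters)
    return sum((n / total) * log2(total / n) for n in counts.values())

sample = "THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG"
print(bits_per_letter(sample))  # a bit under the equal-probability ceiling...
print(log2(26))                 # ...of log2(26) = 4.70 bits per letter
```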

As weird as this definition looks, it does make sense.  If you only have one letter to work with, then you’re not sending any information, since you always know what the next letter will be (I = 1·log(1) + 0·log(0) + … + 0·log(0) = 0, using the convention that 0·log(0) = 0).  By the same token, if you use all of the letters equally often, it will be the most difficult to predict what comes next (information per letter is maximized when the probability is equal, or spread out, between all the letters).  This is why compressed data looks random.  If your data isn’t random, then you could save room by just describing the pattern.  For example: “ABABABABABABABABABAB” could be written “10AB”.  There’s an entire science behind this, so rather than going into it here, you should really read the paper.
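
You can watch this happen with any off-the-shelf compressor.  A quick sketch using Python’s built-in zlib:

```python
import os
import zlib

repetitive = b"AB" * 10_000      # an obvious pattern: very little information per byte
random_ish = os.urandom(20_000)  # 20,000 bytes with no pattern to exploit

print(len(zlib.compress(repetitive)))  # a few dozen bytes: essentially "repeat AB 10,000 times"
print(len(zlib.compress(random_ish)))  # ~20,000 bytes (a hair more): nothing to squeeze out
```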


Overlap!) The bridge between information and entropy lies in how hard it is to describe a physical state or process.  The amount of information it takes to describe something is proportional to its entropy.  Once you have the equations (“I = log2(N)” and “E = k log(N)”) this is pretty obvious.  However, the way the word “entropy” is used in common speech is a little misleading.  For example, if you found a book that was just the letter “A” over and over, then you would say that it had low entropy because it’s so predictable, and that it has no information for the same reason.  If you read something like Shakespeare, on the other hand, you’ll notice that it’s more difficult to predict what will be written next.  So, somewhat intuitively, you’d say that Shakespeare has higher entropy, and you’d definitely say that Shakespeare has more information.
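
Since I = log2(N) and E = k log(N) are describing the same N, the exchange rate between them is just E = k·ln(2)·I.  A tiny sketch of that conversion (the terabyte example is mine, purely for scale):

```python
import math

k = 1.380649e-23  # Boltzmann's constant, J/K

def thermodynamic_entropy(bits):
    """E = k log(N) and I = log2(N) share the same N, so E = k*ln(2)*I."""
    return k * math.log(2) * bits

print(thermodynamic_entropy(1))     # one bit is ~9.6e-24 J/K
print(thermodynamic_entropy(8e12))  # a terabyte of data is still only ~7.7e-11 J/K
```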

As a quick aside, you can extend this line of thinking empirically, and you’ll find that you can actually determine whether a sequence of symbols is random, or a language, etc.  It has been suggested that an entropy measurement could be applied to postmodernist texts to see if they are in fact communicating anything at all (see the “Sokal affair”).  This was recently used to demonstrate that the Indus Script is very likely to be a language, without actually determining what the script says.

In day-to-day life we only describe things with very low entropy.  If something has very high entropy, it would take a long time to describe it, so we don’t bother.  That’s not an indictment of laziness; it’s just that most people have better things to do than count atoms.  For example: if your friend gets a new car they may describe it as “a red Ferrari 250 GT Spyder” (and congratulations).  The car has very little entropy, so that short description has all the information you need.  If you saw the car you’d know exactly what to expect.  Later it gets dented, so they would describe it as “a red Ferrari 250 GT Spyder with a dent in the hood”.

[Image: a red Ferrari 250 GT Spyder.  Bueller?  Easy to describe, and soon-to-be-difficult to describe.]

As time goes on the car’s entropy increases, and it takes more and more information to accurately describe the car.  Eventually the description would be “scrap metal”.  But “scrap metal” tells you almost nothing.  The entropy has gotten so high that it would take forever to effectively describe the ex-car, so nobody bothers to try.

By the by, I think this post has more information than any previous post.  Hence all the entropy.


Responses to Q: What’s the relationship between entropy in the information-theory sense and the thermodynamics sense?

  1. Pingback: Q: Is the total complexity of the universe growing, shrinking or staying the same? « Ask a Mathematician / Ask a Physicist

  2. Frank says:

    Note that the connection between the second law of thermodynamics and information theory has been worked out in detail in the book by A. Ben-Naim entitled “A Farewell to Entropy: Statistical Thermodynamics Based on Information”.

  3. Arch says:

    It appears that the author forgot the prefix “http://” in the hrefs of the anchors that are supposed to point to the paper.  Nonetheless, the links at cm.bell-labs.com give a 404 error.

    For anyone who is interested in checking out the paper simply google “A Mathematical Theory of Communication by Shannon”.

  4. Pingback: Entropi – Rastlantı ve Zorunluluk

  5. Leo says:

    Physicist wrote: ‘In thermodynamics every state is as likely to come up as any other. In information theory, the different states (in this case the “states” are letters) can have different likelihoods of showing up’.

    Why is every state in thermodynamics as likely to come up as any other?  Can this be proved?

  6. The Physicist says:

    @Leo
    Generally speaking, thermodynamics applies to large systems with a lot of (more or less) identical parts.  For example, a room full of air molecules.  Systems like this are subject to the “Asymptotic Equipartition Property”, which is essentially just a dressed-up version of the law of large numbers.
    Basically, if you roll only a handful of dice you might not get any 5’s, but if you dump out a bucket of dice you can be sure that almost exactly 1 in 6 of those dice will be 5’s.
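
    Here’s a quick simulation of that (just an illustration):

    ```python
    import random

    def fraction_of_fives(n_dice):
        rolls = [random.randint(1, 6) for _ in range(n_dice)]
        return rolls.count(5) / n_dice

    print(fraction_of_fives(5))          # a handful of dice: could easily be 0.0 or 0.4
    print(fraction_of_fives(1_000_000))  # a bucket of dice: almost exactly 1/6 = 0.1667
    ```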

  7. b.morgan says:

    Thank you for this article. It answers a question that I think about a lot while falling asleep at night, but today I decided to *look it up.* “Is informational entropy directly proportional to thermodynamic entropy?” Luckily I hit a great explanation on my first hit: your explanation. What are the chances?! 🙂

  8. D. Barr says:

    Thank you for the article.  The example of the Ferrari really clicked for me.  My interpretation is that as entropy increases, we are missing more and more information, not that we have more information.  Is this correct?

  9. Josh says:

    Hey there, I saw the other comment about things you think about before you fall asleep and there must be an entropic explanation for that!

    Anyway, I’ve been thinking a lot about entropy and its counterpart “emergence”, which basically describes uniformity arising from chaos. I can draw parallels between these two in countless ways, but my current question is about the two states of light (a packet and a waveform). It almost feels as though entropy & emergence are like quantum states. For example: as entropy increases, emergence occurs, and order is instilled until entropy increases again. So is it possible that a photon is oscillating between both an entropic and emergent state at a quantum level and therefore defined or perceived as both a packet and a wave?

    Can you think of any other coexisting examples of both entropy and emergence? Is there an explanation of this phenomenon in a law of physics greater than solely the laws of thermodynamics?

    In any case, thanks for the mind-bend!

  10. Koca Kelle says:

    It appears, speaking logically as well as realistically, as if the information passed on from a handful of totally irresponsible lackeys of the states, industries, and money plain and simple, to yet another bunch that is identical to the previous handful, is identified with humanity as a whole. This is a sleight of brain if there ever was one.
    You might as well say that humanity as a whole is as rich as the 8-10 corporations that advertise and sell more and more consumer goods, politely called information techs.
    If the information is not passed on intact from one generation to the next, with whatever gets added to it at each passage, then to talk about increasing entropy in the sense used here is even worse than what Sokal’s clowns do. But then Sokal himself is a dupe of Marxism, which, as has been shown repeatedly, is a worship of the same Capital without capitalists. The fight between the two pimps that stand between us and life, the state and the merchants, goes back to the first Civilization.
    In short, ignorance is bliss.

