Q: Is the total complexity of the universe growing, shrinking or staying the same?

The complete question was:

If you were to look at the universe as an organism, was the early universe a simpler organism than the present-day organism?  Is the total complexity of the universe growing, shrinking or staying the same?  And how do you measure that?


Physicist: Absolutely. The total complexity of the universe is increasing, due to the inevitable march of entropy (or information), which is exactly the measure of complexity. A more intuitive way to talk about complexity and entropy is: can you predict what you'll see next? If you look at part of a checkerboard, you can probably guess what the whole thing looks like, so the board is predictable and has low entropy. In the early universe matter was distributed pretty uniformly, almost all of it was hydrogen, almost everything was the same temperature, and there were no complex chemicals of any kind (going back far enough, everything was ionized). So if you'd seen one part of the universe, you'd pretty much seen all of it.

[Image caption: This is actually a chess board. No surprises.]

Nowadays the universe is full of a wide variety of different elements with very complicated ways of combining, and matter shows up hot, cold, as plasma, as proteins, in stars, in clouds, and not at all. The amount of data it would take to accurately describe the universe as it is now utterly dwarfs the amount it would take to describe the early universe. On an atom-by-atom basis, in the early universe you could grab an atom at random and feel fairly confident that: it's hydrogen, it's ionized, it's about "yay" far away from the other nearby hydrogen, etc. Today you'd probably be right if you guessed "hydrogen" (about 3/4 of the universe's mass is still hydrogen), but you'd have a really hard time predicting anything beyond that.

Oddly enough, life is surprisingly uncomplex compared to, say, dirt or sea water. If you look at a single cell in your body, you've already got a pretty good idea of what you'll see everywhere else in your body. Admittedly, we are more complex than single-celled life, but most of that is a symptom of being physically bigger.


Q: If two trains move towards each other at certain velocities, and a fly flies between them at a certain constant speed, how much distance will the fly cover before they crash?

The brain teaser comes in many variations. For example:

Trains A and B, 700 miles apart, are heading toward each other on a straight piece of track. Train A is going 85 mph while train B is going 55 mph. At the same moment, a bee that flies 110 mph is sitting on the nose of train A and begins flying toward train B. When it reaches train B it makes an instantaneous reversal of direction and flies back toward train A. It continues to change direction every time it runs into a train until both trains and the bee meet in a spectacular crash. What total distance did the bee fly before the big collision?


Mathematician: The difficult way to solve this problem is to figure out how much distance the bee (or fly) traveled before turning around each time it approached a train, and then sum these distances together. The easy way to solve it is simply to figure out how long it took the trains to crash, and then calculate how far the bee, which travels at a constant speed, must have gone during this amount of time.

More specifically: The bee always travels at the same speed V. If we can figure out how much time, T, the bee flew before the trains collided with each other, then the total distance D it flew will just be V T, the product of the velocity and time. We know V, so all that remains is to figure out T. To do this, we just need to calculate how long it takes for the trains to crash. If the first train has velocity v1 and the second v2, and the distance between them initially is d, then the time T before the crash will just be d/(v1+v2), which is equivalent to the amount of time that it takes a train going velocity v1+v2 to travel the distance d. The total distance traveled by the bee is given by:

D = V d / (v1 + v2)

= (110 mph) × (700 miles) / ((85 mph) + (55 mph))

= 550 miles
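For anyone who wants to see the shortcut as plain arithmetic, here's a minimal sketch in Python (the function name and the specific numbers are just illustrative):

    # The "easy way": find how long the trains take to collide, then multiply
    # by the bee's constant speed. The trains close the gap at v1 + v2.
    def bee_distance(gap_miles, v1_mph, v2_mph, bee_mph):
        crash_time_hours = gap_miles / (v1_mph + v2_mph)
        return bee_mph * crash_time_hours

    print(bee_distance(700, 85, 55, 110))  # 550.0 miles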


Q: Why does oxygen necessarily indicate the presence of life?

Physicist: Short answer: Life is the only thing that makes lots of oxygen.

This question comes in the context of a conversation about the Kepler mission. So far (as of January 11, 2010) 424 "exoplanets" have been discovered and confirmed in orbit around other stars (update, Nov 10, 2010: 495 planets). It's worth pausing to take a minute and say, "holy shit!". Most civilizations throughout the ages have been aware of Mercury, Venus, Mars, Jupiter, and Saturn, and that was it for tens of thousands of years. Between 1781 and 1930 we found 4 more: Uranus, Ceres, Neptune, and Pluto. (It's been slow.)

Since 1992 we’ve found over 400 new planets around other stars and, depending on where you draw the line, between 7 and several dozen new dwarf planets around our star.  It may have nothing to do with the question, but I think it’s worth knowing.

[Image caption: New dwarf planets. Pluto has friends.]

Unpause. Due to the difficulties in measurement, the vast majority of the exoplanets discovered so far are bigger than Jupiter, and orbit their parent star closer than Earth orbits the Sun. Kepler holds the promise of detecting Earth-sized planets, and brings us a step closer to detecting life on other planets (even if there were life on gas giants, it would be so alien we wouldn't know what to look for). Kepler works by waiting for eclipses. When a planet passes in front of its star, the star appears to dim a little (if someone around another star were to see Earth do this, the Sun would appear to dim by about one part in 10,000). Even better, by staring really (really) hard you can actually see light that has passed through the atmosphere of those planets, and now you're talking chemical analysis!
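As a rough sanity check on that "one part in 10,000", the dimming during a transit is roughly the fraction of the star's disk the planet covers, (R_planet/R_star)². A back-of-the-envelope sketch, using standard round numbers for the radii:

    # Transit depth ~ (R_planet / R_star)^2: the fraction of the stellar disk blocked.
    r_earth = 6.371e6   # meters
    r_sun   = 6.957e8   # meters
    print((r_earth / r_sun)**2)   # ~8.4e-5, i.e. roughly one part in 10,000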

If you look around at the other planets in our solar system you'll notice that they all have something in common: their atmospheres are chemically stable. Those atmospheres (CO2, hydrogen, helium, methane, etc.) don't do much more than just blow around. You can't start a fire anywhere other than Earth.

[Image caption: Oxygen: the jerk of the elements. It's corrosive, burns like crazy, and is generally reactive and unstable. This is why we can't have nice things.]

Oxygen, on the other hand, is about as stable as a drunk unicyclist. When you find oxygen in nature (and by "nature" I mean "other than Earth") it's always already tied up in the molecules of something else (such as in water or granite). As soon as oxygen is released it tends to immediately combine with things around it. It has been estimated that, left on its own, atmospheric oxygen would be completely absorbed by chemical processes within a few hundred years, and that's not including big fires and whatnot.

The only known process that actually releases O2 into the air in any real quantity is photosynthesis.  So, observing oxygen in the atmosphere of other planets implies photosynthetic life.


Q: What’s the relationship between entropy in the information-theory sense and the thermodynamics sense?

Physicist: The term “Entropy” shows up both in thermodynamics and information theory, so (since thermodynamics called dibs), I’ll call thermodynamic entropy “entropy”, and information theoretic entropy “information”.

I can’t think of a good way to demonstrate intuitively that entropy and information are essentially the same, so instead check out the similarities!  Essentially, they both answer the question “how hard is it to describe this thing?”.  In fact, unless you have a mess of time on your hands, just go with that.  For those of you with some time, a post that turned out to be longer than it should have been:


Entropy!) Back in the day a dude named Boltzmann found that heat and temperature didn’t effectively describe heat flow, and that a new variable was called for.  For example, all the air in a room could suddenly condense into a ball, which then bounces around with the same energy as the original air, and conservation of energy would still hold up.  The big problem with this scenario is not that it violates any fundamental laws, but that it’s unlikely (don’t bet against a thermodynamicist when they say something’s “unlikely”).  To deal with this Boltzmann defined entropy.  Following basic probability, the more ways that a macrostate (things like temperature, wind blowing, “big” stuff with lots of molecules) can happen the more likely it is.  The individual configurations (atom 1 is exactly here, atom 2 is over here, …) are called “microstates” and as you can imagine a single macrostate, like a bucket of room temperature water, is made up of a hell of a lot of microstates.

Now if a bucket of water has N microstates, then 2 buckets will have N² microstates (1 die has 6 states, 2 dice have 36 states). But that's pretty tricky to deal with, and it doesn't seem to be what nature is concerned with. If one bucket has entropy E, you'd like two buckets to have entropy 2E. Here's what nature seems to like, and what Boltzmann settled on: E = k log(N), where E is entropy, N is the number of microstates, and k is a physical constant (k is the Boltzmann constant, but it hardly matters; it changes depending on the units used and the base of the log). In fact, Boltzmann was so excited about his equation and how well it works that he had it carved into his headstone (he used different letters, so it reads "S = k \cdot \log{(W)}", but whatever). The log turns the "squared" into "times 2", which clears up that problem. Also, the log can be in any base, since changing the base would just change k, and it doesn't matter what k is (as long as everyone is consistent).
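To see the "squared becomes times 2" point numerically, here's a tiny sketch (the microstate count is made up, and k is set to 1 since its value doesn't matter here):

    import math

    k = 1.0  # any constant will do; changing the log base only rescales k

    def entropy(num_microstates):
        # Boltzmann: E = k * log(N)
        return k * math.log(num_microstates)

    N = 10**6                        # pretend microstate count for one bucket
    one_bucket = entropy(N)
    two_buckets = entropy(N**2)      # two buckets: N * N microstates
    print(two_buckets / one_bucket)  # 2.0 -- squaring the microstates doubles the entropy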

This formulation of entropy makes a lot of sense. If something can only happen in one way, it will be unlikely and have zero entropy. If it has many ways to happen, it will be fairly likely and have higher entropy. Also, you can make very sensible statements with it. For example: water expands by a factor of around 1000 when it boils, and its entropy increases 1000-fold. That's why it's easy to boil water in a pot (it increases entropy), and it's difficult to condense water in a pot (it decreases entropy). You can also say that if the water is in the pot then the position of each molecule is fairly certain (it's in the pot), so the entropy is low, and when the water is steam then the position is less certain (it's around here somewhere), so the entropy is high. As a quick aside, Boltzmann's entropy assumes that all microstates have the same probability. It turns out that's not quite true, but you can show that the probability of seeing a microstate with a different probability is effectively zero, so they may as well all have the same probability.


Information!) In 1948 a dude named Shannon (last name) was listening to a telegraph line and someone asked him "how much information is that?". Then information theory happened. He wrote a paper worth reading that can be understood by anyone who knows what "log" is and has some patience.

Say you want to find the combination of a combination lock. If the lock has 2 digits, there are 100 (10²) combinations, if it has 3 digits there are 1000 (10³) combinations, and so on. Although a 4-digit code has a hundred times as many combinations as a 2-digit code, it only takes twice as long to describe. Information is the log of the number of combinations. So I = \log_b{(N)}, where I is the amount of information, N is the number of combinations, and b is the base. Again, the base of the log can be anything, but in information theory the standard is base 2 (this gives you the amount of information in "bits", which is what computers use). Base 2 gives you bits, base e (the natural log) gives you "nats", and base \pi gives you "slices". Not many people use nats, and nobody ever uses slices (except in bad jokes), so from now on I'll just talk about information in bits.

So, say you wanted to send a message and you wanted to hide it in your padlock combination. If your padlock has 3 digits you can store I = log₂(1000) ≈ 9.97 bits of information. 10 bits requires 1024 combinations. Another good way to describe information is "information is the minimal number of yes/no questions you have to ask (on average) to determine the state". So for example, if I think of a letter at random, you could ask "Is it A? Is it B? …" and it would take 13 questions on average, but there's a better method. You can divide the alphabet in half, then again, and again until the letter is found. So a good series of questions would be "Is it A to M?", and if the answer is "yes" then "Is it A to G?", and so on. It should take log₂(26) ≈ 4.70 questions on average, so it should take 4.7 bits to describe each letter.
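Here's a quick sketch of the halving strategy, just to show it lands near log₂(26). (The straightforward split below averages about 4.77 questions, a hair above the 4.70-bit ideal, because 26 isn't a power of 2.)

    import math
    import string

    def questions_to_find(letter, alphabet=string.ascii_uppercase):
        # Repeatedly ask "is it in the first half of what's left?"
        low, high = 0, len(alphabet)
        questions = 0
        while high - low > 1:
            mid = (low + high) // 2
            questions += 1
            if alphabet.index(letter) < mid:
                high = mid
            else:
                low = mid
        return questions

    average = sum(questions_to_find(c) for c in string.ascii_uppercase) / 26
    print(average)          # ~4.77 questions with simple halving
    print(math.log2(26))    # 4.700... bits, the theoretical average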

In thermodynamics every state is as likely to come up as any other. In information theory, the different states (in this case the "states" are letters) can have different likelihoods of showing up. Right off the bat, you'll notice that z's and q's occur rarely in written English (this post has only 4 "non-Boltzmann" z's and 16 q's), so you can estimate that the amount of information in an English letter should be closer to log₂(24) ≈ 4.58 bits. Shannon figured out that if you have N "letters" and the probability of the first letter is P1, of the second letter is P2, and so on, then the information per letter is I = \sum_{i=1}^N P_i \log_2{\left(\frac{1}{P_i}\right)}. If all the probabilities are the same, then this summation reduces to I = log₂(N).

As weird as this definition looks, it does make sense. If you only have one letter to work with, then you're not sending any information, since you always know what the next letter will be (I = 1·log(1) + 0·log(0) + … + 0·log(0) = 0). By the same token, if you use all of the letters equally often, it will be the most difficult to predict what comes next (the information per letter is maximized when the probability is spread out equally among all the letters). This is why compressed data looks random. If your data isn't random, then you could save room by just describing the pattern. For example: "ABABABABABABABABABAB" could be written "10AB". There's an entire science behind this, so rather than going into it here, you should really read the paper.
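Here's a small sketch of Shannon's formula in action; the strings are just toy examples:

    from collections import Counter
    from math import log2

    def bits_per_symbol(text):
        # Shannon: I = sum_i P_i * log2(1 / P_i), with P_i the symbol frequencies
        counts = Counter(text)
        total = len(text)
        return sum((n / total) * log2(total / n) for n in counts.values())

    print(bits_per_symbol("AAAAAAAAAA"))  # 0.0 -- one letter, no surprises, no information
    print(bits_per_symbol("ABABABABAB"))  # 1.0 -- two equally likely letters, one bit each
    print(bits_per_symbol("THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG"))
    # The last one comes out below log2(27), because the 27 symbols
    # (letters plus the space) aren't used equally often.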


Overlap!) The bridge between information and entropy lies in how hard it is to describe a physical state or process. The amount of information it takes to describe something is proportional to its entropy. Once you have the equations ("I = log₂(N)" and "E = k log(N)") this is pretty obvious. However, the way the word "entropy" is used in common speech is a little misleading. For example, if you found a book that was just the letter "A" over and over, then you would say that it had low entropy because it's so predictable, and that it has no information for the same reason. If you read something like Shakespeare, on the other hand, you'll notice that it's more difficult to predict what will be written next. So, somewhat intuitively, you'd say that Shakespeare has higher entropy, and you'd definitely say that Shakespeare has more information.

As a quick aside, you can extend this line of thinking empirically, and you'll find that you can actually determine whether a sequence of symbols is random, a language, etc. It has been suggested that an entropy measurement could be applied to postmodernist texts to see if they are in fact communicating anything at all (see the "Sokal affair"). This was recently used to demonstrate that the Indus Script is very likely to be a language, without actually determining what the script says.

In day-to-day life we only describe things with very low entropy. If something has very high entropy, it would take a long time to describe it, so we don't bother. That's not an indictment of laziness; it's just that most people have better things to do than count atoms. For example: if your friend gets a new car they may describe it as "a red Ferrari 250 GT Spyder" (and congratulations). The car has very little entropy, so that short description has all the information you need. If you saw the car you'd know exactly what to expect. Later it gets dented, so they would describe it as "a red Ferrari 250 GT Spyder with a dent in the hood".

[Image caption: Easy to describe, and soon-to-be-difficult to describe. Bueller?]

As time goes on the car's entropy increases, and it takes more and more information to accurately describe the car. Eventually the description would be "scrap metal". But "scrap metal" tells you almost nothing. The entropy has gotten so high that it would take forever to effectively describe the ex-car, so nobody bothers to try.

By the by, I think this post has more information than any previous post.  Hence all the entropy.


Q: Would it be possible to kill ALL of Earth’s life with nuclear bombs?

Physicist: Probably not. We could kill all of the large (insects and up) life no problem. Hell, we're doing all right by mistake so far. There are about 30,000 nuclear weapons in the world today, so in what follows I'll assume the worst case scenario: that all of them are evenly spaced across the Earth's land masses and set off. That should put them about 70 km apart (in a grid).
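Where that 70 km comes from, assuming the warheads are dealt out over the land area on a square grid (a back-of-the-envelope sketch):

    from math import sqrt

    land_area_km2 = 1.49e8   # Earth's land area, roughly
    warheads = 30_000

    area_per_warhead = land_area_km2 / warheads
    print(sqrt(area_per_warhead))   # ~70 km between neighbors on a square grid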

Certainly everything on the surface within several dozen km of a nuke will be dead (like, really dead) but surprisingly, several feet of dirt or stone offer remarkable protection from the light and fire of the initial blast.  Not directly under the explosion, but pretty close.  It takes an amazing amount of energy to heat up and/or move dirt, so while the surface may be heated to red hot, the ground underneath can stay surprisingly cool.

So sure, you've kicked the legs out from under the ecosystem, but how do you ensure that you get everything? Fallout and nuclear winter are a good place to start. Nuclear winter is caused by dust thrown up into the air blocking out sunlight. The "sunlight blocking" shouldn't last for more than a few weeks, but it takes very little time to starve all the plants and plankton that rely on sunlight. Or really just plankton, since you're not going to find plants left standing within 35 km of a nuke. Now, whatever survives (burrowing critters, seeds) will have to contend with ash instead of food, and radioactive fallout.

Modern weapons are fairly efficient, in that they use up almost all of their fissionable material when detonating. The initial flash involves a lot (as in "holy shit") of radiation that mostly takes the form of gamma rays. Gamma rays are just high energy photons, so they're gone immediately. Unfortunately, when fissionable stuff splits it breaks up into smaller isotopes which also tend to be highly radioactive. Most of these byproducts have short half-lives. There's a strong correlation between an isotope having a short half-life and the isotope radiating especially high energy crap when it decays. So most of the nasty stuff goes away pretty quickly. The glaring exceptions to this are Caesium-137 and Strontium-90, which both have half-lives of about 30 years (and are delicious). Today the background radiation of Hiroshima is due primarily to Caesium, and that accounts for very little radiation total.

Basically, in order to survive the worst case scenario you have to: 1) live underground or underwater, 2) be highly resistant to buckets of radiation, 3) not be particularly bothered by losing the sun for a while, and 4) not be particularly sad about the surface of the Earth burning and then freezing (or continuing to burn, just not as much. Some of the jury is still out).

[Image caption: A creepy blind fish from an Australian cave, a Pompeii Worm from a black smoker vent, and a Tardigrade (Water Bear) from freaking everywhere. The last two are harder to kill than werewolves.]

We live in the largest ecosystem on the planet, but we definitely don't live in the only one. There are fungus-driven ecosystems deep in caves scattered around the world, for example, that may be safe. If, however, those caves can exchange air with the outside (or are forced to by a bomb, for example), then the radiation would probably wipe out everything in there too. At the bottom of the ocean you can find black smokers, usually at the edge of tectonic plates. Black smokers are vents that spew out super-heated acid water laced with poison. I can only assume that the creatures that live down there must have been kicked out of every other clubhouse on the planet. These ecosystems depend only on heat and material from beneath the Earth's crust, and as such are completely independent of the Sun. Although, poetically, since they depend on the nuclear decay of heavy metals in the Earth that were produced in at least one supernova more than 5 billion years ago, they still rely on a sun, just not our Sun. The creatures in the black smoker ecosystems have to deal with radioactive crap flying out of the vents all the time, so they may be able to put up with fallout that manages to drift all the way down to them. Also, back in the 1950s a bacterium called "Deinococcus radiodurans" was discovered that flourishes in radiation up to 3 million rads. By comparison, 1,000 rads is usually fatal to people. 3,000,000 rads means that the glass of the test tube you're keeping this bacterium in is going to turn purple and fall apart long before the bacterium dies.

I mean, how does that evolve? Where in the hell is this bacterium finding an environment that horrible?

Finally, Water Bears.  God damn.  Those guys don’t die.  Ever.  You can freeze them (-272°C), boil them (151°C), dry them out, irradiate them (500,000 rads), and even chuck them into space (seriously… space!), and they couldn’t care less.

So as long as there's liquid water somewhere on Earth (even ultra-high-pressure acid water) there will almost certainly be life. We would probably be more successful (at killing everything) with toxins and runaway global warming. That is, if we could turn Earth into another Venus.

It's worth noting that if this post seems a little "guessy", it is. A lot of research has been done on the subject. The United States alone has detonated at least 1,054 weapons in tests, injected at least 18 people with plutonium, and exposed many more to radiation. The exact results of all these tests are largely classified (as, in fact, were the tests themselves). And of course, the world has never been destroyed by an all-encompassing nuclear disaster. Hence the guesswork.

However, we have fossil evidence of microbial life dating back about 3.8 billion years, and the Moon's maria were still being created (by really, really big impacts) until about 3 billion years ago. So we can expect that the Earth has been subject to several ocean-boiling impact events since life started, and we're still here (suck on that, space!).


Q: Will black holes ever release their energy and will we be able to tell what had gone into them?

Physicist: In any reasonable sense the answer to both of these questions is a dull “nope”.  In theory however, the answer is an excitable “yup”!

Blackholes lose energy through “Hawking Radiation”, which is a surprising convergence of general relativity, quantum mechanics, and thermodynamics.  Hawking (and later others) predicted that a blackhole will have a blackbody spectrum.  That is, it will radiate like people, the sun, or anything that radiates by virtue of having heat.  Hawking also calculated what temperature a blackhole will appear to be radiating at.  He found that for a blackhole of mass M: T = \frac{\hbar c^3}{8 \pi G k M}, where everything other than M is a physical constant (even the 8, depending on who you talk to).  A more useful way to write this is to plug in all the constants to get:

T = \frac{1.21 \times 10^{23}}{M}, where M is in kilograms and T is in kelvins. That "10^{23}" makes it seem like blackholes should be really hot, and in fact small ones (like those we hope to see at CERN) are crazy hot. However, if the Sun (M = 2 \times 10^{30} kg) were a blackhole, its temperature would be about 60 nK (nanokelvin). …! You wouldn't want to lick it, or your tongue would stick.

Here's the point. Deep space glows. It has a temperature of about 2.7 K, which means that any blackhole that could reasonably form (M > 10^{31} kg, or several Suns) is going to be way colder than that. Since the blackhole is colder it will actually absorb more energy than it emits. In order for a blackhole in the universe today to actually shrink, it must have a temperature above 2.7 K, and so it must have a mass less than 4.5 \times 10^{22} kg, or around half the mass of the Moon. Alternatively, you could wait several trillion years for the universe to cool down, and then the blackholes would start to evaporate.
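To check those numbers, here's a quick sketch that plugs standard values of the constants into Hawking's formula:

    from math import pi

    hbar = 1.054571817e-34   # J*s
    c    = 2.99792458e8      # m/s
    G    = 6.67430e-11       # m^3 / (kg * s^2)
    k_B  = 1.380649e-23      # J/K

    def hawking_temperature(mass_kg):
        # T = hbar * c^3 / (8 * pi * G * k_B * M)
        return hbar * c**3 / (8 * pi * G * k_B * mass_kg)

    print(hawking_temperature(2e30))   # ~6e-8 K: a solar-mass blackhole sits around 60 nK
    # Largest mass that still runs hotter than the 2.7 K microwave background:
    print(hbar * c**3 / (8 * pi * G * k_B * 2.7))   # ~4.5e22 kg, around half the Moon's mass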

As for the second half of the question: General relativity would suggest that when things fall into a blackhole they are erased.  Once they fall in, there’s no way to tell the difference between a ton of Soylent Green and a ton of Pogs (metric tonnes of course).  This makes quantum physicists really uncomfortable, because in addition to all the usual conservative laws (energy, momentum, drug policy) quantum physicists have “conservation of information”.  Lucky for them they also get to play with entanglement.  So if you chuck in a copy of War and Peace the blackhole will radiate thermally (which is the most randomized way to radiate) and will seem to scramble everything about Tolstoy’s pivotal work.  If you look at one outgoing photon at a time you’ll gain almost zero information.  If however, you can gather every outgoing photon, interfere them with each other and analyze how they are entangled you could (in theory) reconstruct what fell in.  However, you’d need to catch at least half of the photons before you could demonstrate that they hold any information at all.

This view of blackholes, that they hide information in the "quantum entanglement" between all of their radiated photons, makes them suddenly far more interesting. Without going into too much detail, if you have N non-entangled 2-state particles you can have N bits of information, but if you have N entangled 2-state particles you can have 2^N bits of information. Allowing for entanglement frees up a lot of "extra room" to put information.

Suddenly, you’ll find that most (as in “almost all”) of the entropy in the universe is tied up in blackholes.  Also (again in theory), a carefully constructed blackhole can be the fastest and most powerful computer that it will ever be possible to create.

So, yes, blackholes will release all their energy, but you have to wait for the universe to cool down almost completely.  And, yes, we can tell what went into them, but we’ll have to wait for them to evaporate completely (after the universe has cooled down) and catch, without disturbing, almost every single particle that comes out of them.
