Q: Since the real-world does all kinds of crazy calculations in no time, can we use physics to calculate stuff?

The original question was: I’ve heard somewhere that they’re also trying to build computers using molecules, like DNA. In general would it work to try and simulate a factoring algorithm using real world things, and then let the physics of the interactions stand-in for the computer calculation? Since the real-world does all kinds of crazy calculations in no time.


Physicist: The amount of time that goes into, say, calculating how two electrons bounce off of each other is very humbling.  It’s terribly frustrating that the universe has no hang ups about doing it so fast.

In some sense, basic physical laws are the basis of how all calculations are done.  It's just that modern computers are "Turing machines", a small subset of all the possible kinds of computational devices.  Basically, if your calculating machine works by manipulating symbols (which can always be reduced to manipulating 1's and 0's), then you're talking about a Turing machine.  In the early days of computer science there was a strong case for analog computers over digital computers.  Analog computers use the properties of circuit elements like capacitors and inductors, or even just the layout of the wiring, to do mathematical operations.  In their heyday they were faster than digital computers.  However, they're difficult to design, not nearly as versatile, and they're no longer faster.

Nordsieck's Differential Analyzer was an analog computer used for solving differential equations.

Any physical phenomenon that represents information in definite, discrete states is doing the same thing a digital computer does; it's just a question of speed.  To see other kinds of computation it's necessary to move into non-digital kinds of information.  One beautiful example is the gravity-powered square root finder.

Newtonian physics used to find the square root of numbers. Put a marble next to a number, N, (white dots) on the slope, and the marble will land on the ground at a distance proportional to √N.

When you put a marble on a ramp, the horizontal distance it travels before hitting the ground is proportional to the square root of how far up the ramp it started.  Another mechanical calculator, the planimeter, can find the area of any shape just by tracing along its edge.  Admittedly, a computer could do both calculations a heck of a lot faster, but they're still decent enough examples.
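If you want to see the square root fall out of the physics, here's a quick sketch (the frictionless, non-rolling marble and the table height are my own simplifying assumptions, not part of the original demonstration): energy conservation sets the launch speed, free fall sets the flight time, and the landing distance comes out proportional to √h.

```python
# Toy model of the gravity-powered square root finder (illustrative assumptions:
# a frictionless, non-rolling marble; launch is horizontal from the table edge).
import math

g = 9.81      # gravity, m/s^2
table = 1.0   # height of the table edge above the floor, m

def landing_distance(h):
    """Horizontal distance for a marble released a vertical height h up the ramp."""
    v = math.sqrt(2 * g * h)          # launch speed from energy conservation
    t = math.sqrt(2 * table / g)      # free-fall time from the table edge to the floor
    return v * t                      # = 2 * sqrt(h * table), i.e. proportional to sqrt(h)

for h in [0.01, 0.04, 0.09, 0.16, 0.25]:
    d = landing_distance(h)
    print(f"h = {h:.2f} m  ->  lands {d:.3f} m out  (d / sqrt(h) = {d / math.sqrt(h):.3f})")
```

The ratio d/√h comes out the same for every starting height, which is the whole trick.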

Despite the power of digital computers, it doesn't take much looking around to find problems that can't be done efficiently on them, but that can be done using more "natural" devices.  For example, solutions to "harmonic functions with Dirichlet boundary conditions" (soap films) can be fiendishly difficult to calculate in general.  The huge range of possible shapes that the solutions can take means that often even the most reasonable computer program (capable of running in any reasonable time) will fail to find all the solutions.

Part of Richard Courant's face demonstrating a fancy math calculation using soapy water and wires.

So, rather than burning through miles of chalkboards and swimming pools of coffee, you can bend wires to fit the boundary conditions, dip them in soapy water, and see what you get.  One of the advantages, not generally mentioned in the literature, is that playing with bubbles is fun.

Today we're seeing the advent of a new type of computer, the quantum computer, which is kinda-digital/kinda-analog.  Using quantum mechanical properties like superposition and entanglement, quantum computers can (or would, if we could get them off the ground) solve problems that would take even very powerful normal computers a tremendously long time to solve, like integer factorization.  "Long time" here means that the heat death of the universe becomes a concern.  Long time.

Aside from actual computers, you can think of the universe itself, in a… sideways, philosophical sense, as doing simulations of itself that we can use to understand it.  For example, one of the more common questions we get is along the lines of "how do scientists calculate the probability/energy of such-and-such chemical/nuclear reaction?"  There are certainly methods to do the calculations (have Schrödinger equation, will travel), but really, if you want to get it right (and often save time), the best way to do the calculation is to let nature do it.  That is, the best way to calculate atomic spectra, or how hot fire is, or how stable an isotope is, or whatever, is to go to the lab and just measure it.

Even cooler, a lot of optimization problems can be solved by looking at the biological world.  Evolution is, roughly speaking, an optimization process (though not always a perfect one).  During the early development of sonar and radar there were (and still are) a number of questions about what kind of "ping" would return the greatest amount of information about the target.  After a hell of a lot of effort it turned out that the researchers had managed to re-create the sonar pings of several bat species.  Bats are still studied as the results of what the universe has already "calculated" about optimal sonar techniques.

You can usually find a solution through direct computation, but sometimes looking around works just as well.


Q: Is there some way to actually play quidditch?

Physicist: Using magic?  No.  But people do try.

There are no physical theories that allow for the existence of magic, as described in Harry Potter™ or other creative ventures of that ilk.  However, if you had an amazing amount of money and energy, you might be able to set up some kind of magnetic system that allowed people, while within the arena, to be propelled around on things the size of broomsticks.

Quidditch: Nope.

Bill Gates could probably get something set up, but he’s too busy curing malaria or something.

I wish I could say something about how awesome superconductors are, or that they might be useful, but all they’d really be useful for is moving the players around as though they were on tracks.  If you really want to control how everything moves around you’d have to use “servo-mechanistic electromagnetic suspension” (that’s a made up phrase, but it is a decent description).  This is how maglev trains today work.

Normally, a pair of magnets will either snap together or fly apart.  To suspend one magnet above another (without snapping or flying), or to move one magnet around in a controlled way, requires one or several tightly controlled electromagnets that vary their strength very quickly to react to the position of the magnet they're suspending.  This is a "servomechanism".
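To make the "react very quickly to the position" idea concrete, here's a minimal toy sketch of a servo loop (the linearized model and all the constants are invented for illustration, nothing like a real maglev controller): the runaway tendency of a suspended magnet is modeled as an acceleration that grows with displacement, and a proportional-plus-derivative correction keeps pushing it back toward the setpoint.

```python
# Toy servomechanism: keep a magnet at z = 0 despite an unstable "runaway" force.
# Linearized, dimensionless model with invented constants, purely for illustration.

alpha = 50.0           # instability: acceleration away from equilibrium per unit displacement
beta = 1.0             # control authority: acceleration per unit of control signal
kp, kd = 150.0, 20.0   # proportional and derivative gains (chosen to be stabilizing)

z, v = 0.01, 0.0       # initial displacement and velocity
dt = 0.001
for step in range(5001):
    if step % 1000 == 0:
        print(f"t = {step * dt:4.1f} s   displacement = {z:+.6f}")
    u = kp * z + kd * v              # servo: measure position/velocity, compute correction
    a = alpha * z - beta * u         # net acceleration: instability minus the correction
    v += a * dt
    z += v * dt
```

Without the correction (u = 0) the displacement blows up; with it, the magnet settles back to the setpoint in a fraction of a second.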

It would take stunning buckets of power to create a magnetic field the size of an arena that's capable of suspending a person (and broom).  But the real difficulty is in coming up with a way of targeting 15 separate objects: two teams of 7 players, and the "snitch".  In theory, you could put together some kind of extremely sophisticated array of small coils in the ground and in the stands that could target a magnetic field to relatively small regions of space (person sized).  This sort of technique is used for things like "hypersonic sound", which is a "beam of sound".

However, there's a drawback.  In order to get a wave phenomenon (sound waves, ocean waves, magnetic fields, whatever) to stay confined (like a beam) and not just go wherever, you need the wavelength to be fairly short.  A good rule of thumb is that the wavelengths need to be about the size of the region in question or shorter.  This is just another incarnation of the uncertainty principle (which is all about waves).  At about the same time that you're using wavelengths short enough to (theoretically) target individual people and brooms, and not all the objects in the arena, you'll find that your previously gentle magnetic field is now made up of microwaves (microwaves, and light in general, are just high frequency electromagnetic fields).
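To put rough numbers on that (the wavelengths below are my own ballpark choices): the frequency of an electromagnetic wave is f = c/λ, so confining the field to person-sized regions, a meter or less, already puts you in the neighborhood of a gigahertz, i.e. microwaves.

```python
# Rough scale of the problem: frequency of an EM wave with a person-sized wavelength.
c = 3.0e8  # speed of light, m/s

for wavelength in [10.0, 1.0, 0.3, 0.1]:   # meters
    f = c / wavelength
    print(f"wavelength {wavelength:5.1f} m  ->  frequency {f / 1e9:6.2f} GHz")
```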

So, after years of effort and (let's say) billions of dollars in R&D, you'll find that what you've really made is the most whimsical death machine ever constructed.  Real-world Quidditch would be a very short, but spectacular, game.


Q: Can you poke something that’s far away with a stick faster than it would take light to get there?

The original question was: If I had a really long (about 500,000 km) and really stiff stick would I be able to send information faster than light by moving it quickly by 1 cm and poking someone on the other side or pushing a button?


Physicist: This is a classic thought experiment!  If you had an infinitely rigid stick, then you could definitely send poking information faster than light.  In fact, this is one argument for why you’ll never find perfectly rigid materials.

However, in any real material the “push” information travels from atom to atom at the speed of light or slower.  That is, there are electromagnetic forces holding the pole’s atoms together, and the fastest that changes in those forces can be felt is the speed of light (this is true for any force).

A perfect poking pole pokes preternaturally fast. A poking pole composed of particles pokes slow.

So, you push on one atom, it moves, then the next atom notices after a while (as much time as it would take light to cover the distance between them, or longer) and then it moves, then a little later the next atom…

The best you can do is push suddenly and create a compression wave that travels down the pole at the speed of sound in the material (or, more likely, just split the pole).  There's nothing special about pokin' sticks, by the way.  Every time anything gets pushed, the "push information" has to travel through the object, generally at a rate much slower than light speed, but still pretty fast.  I mean, outside of Jello, it's hard to notice.
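For a sense of scale (a back-of-the-envelope sketch: the pole length is from the question, and ~5 km/s is the usual ballpark figure for the speed of sound in steel):

```python
# How long would the "poke" take to travel down a 500,000 km pole?
length = 5.0e8          # pole length in meters (500,000 km, from the question)
c = 3.0e8               # speed of light, m/s
v_sound_steel = 5.0e3   # speed of sound in steel, m/s (roughly)

print(f"light:              {length / c:10.1f} s")              # about 1.7 seconds
print(f"push through steel: {length / v_sound_steel:10.1f} s")  # about 100,000 s, roughly a day
```

So even with a heroically stiff steel pole, the poke shows up about a day after light would have.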


Q: Does how you deal cards affect how random they are?

The original question was: [My wife and I] always shuffle the deck at least 4-5 times before dealing.  But I’m sure that’s not enough and need some guidance.  Does the game previously played influence the amount of shuffling required to ensure a random deck?

But the bigger debate is that my wife prefers to deal two cards at a time so as to make the dealing go faster, while I prefer to deal the cards one at a time even though it takes longer.  My preference for one card at a time is entirely based on the belief that it creates a more random distribution of cards, a cleaner deal.  What is the impact of dealing one vs two cards at a time?

Can any of this be proven mathematically?


Physicist: Unless one of you is a card shark, you should be fine.  The short answer is: random is random, and how you deal has no impact on that.

Starting with a perfectly ordered deck you can randomize it, about as much as you’d need, in 6-7 perfect shuffles.  This is enough to get cards that start right next to each other to end up anywhere in the deck (each shuffle doubles the distance between cards).  There are some subtle patterns still retained by perfect shuffles (that are unlikely to affect any game likely to be played), so if you want to avoid them you can purposely mess up a shuffle, or cut the deck at random after a shuffle or two.

The standard, fresh-pack, card ordering.

The way that these patterns would be noticeable is if you play a game that completely orders the deck somehow, like Klondike, in which case a series of perfect shuffles does create a fairly random deck, but it will be the same random deck every time.

A fresh pack after 7 perfect “pharaoh” shuffles.  There are some patterns, but not the kind of patterns that you’d notice in most games.

The way you'd notice (assuming you're proficient enough to shuffle perfectly, every time) is that you'll find yourself playing the same game over and over.  However, adding a few genuinely random elements to a few shuffles: cutting the deck at a random location in between shuffles, not doing the shuffle perfectly, or "mixing" the cards on the table, does a really good job of killing off any remotely recognizable patterns that might carry over from a previous game.  But for most games, at the end of a hand/round the cards are already patternless enough that a few shuffles is more than fine.  "Awesome handedness" isn't a pattern that survives shuffling.

“Awesome handedness” isn’t really a pattern in itself.  Sometimes you win or lose fair and square.

Once a deck has been randomized all dealing patterns are equivalent.  Every set of cards is just as likely to show up as every other whether you deal one card at a time, 5 cards at a time, deal clockwise/counter-clockwise, or just deal as weirdly as possible.  It’s fair to say that if someone gets 3 royal flushes in a row, it’s not because the cards are retaining a pattern.  That dude’s just cheating.

In general, you can create noise from pattern, but not pattern from noise.  In other words, to create patterns on purpose, you’d need to actually sort the cards by hand.

By shuffling you can sort the cards on the top, because you’ll know when to stop.  But there’s no way to sort the cards on the bottom.

There are big branches of mathematics that cover this sort of thing (not cards specifically).  While the patterns in the cards never quite go away completely, you can certainly say “more than good enough” or “it would take billions of games to notice anything” after not too many shuffles.


Answer gravy: If you really want to, there are ways to construct fairly artificial examples in which patterns do last from one game to the next.  These are also the basis of several “trickless” card tricks.  For example:

You have exactly 2^N players, each with H cards in their hand, who play a round and then place their hands on the top of the deck.  The dealer does exactly N pharaoh shuffles and deals again, one card at a time.  If H × 2^N < 52, then you'll find that the hand that was on the very top of the deck will be dealt to the last person in the dealing order, and everyone else will end up with new cards.

So if there are 4 players, each with 5 cards, the last hand to be put on the top of the deck will be in positions 1, 2, 3, 4, 5.  The first pharaoh shuffle will move these to 2, 4, 6, 8, 10 and the second will move them to 4, 8, 12, 16, 20.  When dealing:

player one gets cards 1, 5, 9, 13, 17

player two gets cards 2, 6, 10, 14, 18

player three gets cards 3, 7, 11, 15, 19

player four gets cards 4, 8, 12, 16, 20 (the last hand from the previous game)
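If you want to check that example, here's a small simulation sketch (it assumes the "in-shuffle" convention implied by the numbers above, where the top card moves to position 2, and labels each card by its position in the pre-shuffle deck):

```python
# Simulate the "trick": 4 players, 5 cards each, 2 pharaoh (in-)shuffles, deal 1 card at a time.

def pharaoh_in_shuffle(deck):
    """In-shuffle: interleave the halves so the top card of the deck ends up in position 2."""
    half = len(deck) // 2
    top, bottom = deck[:half], deck[half:]
    shuffled = []
    for b, t in zip(bottom, top):
        shuffled += [b, t]          # bottom-half card first, so the old top card lands second
    return shuffled

deck = list(range(1, 53))           # label cards 1..52; the previous top hand is cards 1..5
for _ in range(2):                  # N = 2 shuffles for 2^N = 4 players
    deck = pharaoh_in_shuffle(deck)

hands = {p: [] for p in range(1, 5)}
for _round in range(5):             # H = 5 cards per hand, dealt one at a time
    for p in range(1, 5):
        hands[p].append(deck.pop(0))

for p, cards in hands.items():
    print(f"player {p}: {cards}")
# Player 4's hand comes out as [1, 2, 3, 4, 5]: exactly the hand that was last put on top
# of the deck, while players 1-3 get entirely new cards.
```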

However, this "trick", and pretty much every other, is defeated immediately by having someone cut the deck.  Also, don't ever play cards with magicians; they think about this stuff more than you might suspect.


Q: Will CERN awaken the Elder Gods?

Physicist: At the risk of over-simplifying the situation: yes, absolutely.

For years dark and unexplainable events have been the norm in particle accelerator laboratories around the world.  However, it was not until a meta-analysis of the 2001 “blood running up the walls incident” in Fermilab, the 1983 “unfortunate undead incident” in Daresbury, the 2011 “faster-than-light neutrino incident” at CERN, and the “howling vortex of pain effect” common to employees of KEK, that a subtle underlying pattern was uncovered.

In addition, some concerns were voiced when it became apparent that the user’s manual for the Large Hadron Collider had been written, entirely by accident, as a nearly verbatim translation of the Necronomicon.

But ultimately, the call for a comprehensive study by the NSF was sparked by an experiment by Dr. Damian "The Unblinking" Shoggoth (PhD, occult particle cosmology, music history), involving the ATLAS detector at CERN and the still-beating hearts of three virgin employees.  While Dr. Shoggoth's experiment successfully demonstrated an effective means of tauon generation, it also created a dark fissure between the Mortal World and the Dreaming.

Left: Director General of CERN, Rolf-Dieter Heuer. Right: the only known photograph of Damian Shoggoth

According to Rolf-Dieter Heuer, the director general of CERN, “… strange experimental artifacts are just a part of doing [science].  When you hear screaming every time the beam is active you tend … to assume that it’s the result of some unknown mechanism; metal fatigue, poorly tuned resonators.  One-time events, like the combined terror of every human sacrifice … bleeding through from a dimension of pure agony is difficult to take into account, given our current scientific model.”.

For thousands of years particle accelerators have been used by isolated groups of occultists to commune with their Dark Lords.  The Toltec ziggurat reactor, the Stonehenge synchrotron, the Nuclear Pile at Miskatonic, and the Nazca linear accelerators are just a few examples.  But of course, with ever greater technology comes ever more violent violations of the natural order.  It wasn't until accelerators capable of giga-electronvolt energies came into common use, for example, that secret physicist covens discovered reliable methods for creating weakly-interacting neutrino-golems to slay their enemies.

A neutrino-golem causing as much devastation as it can.

The findings of the NSF study show that when CERN is finally brought to full power it will almost certainly summon the Outsider, and begin the Age of Torment.

Said Heuer, "We're living at the dawning of a new age of discovery, understanding, and unending darkness.  With any luck, by 2013 we'll have confirmed or disconfirmed the existence of the Higgs boson and been devoured by our dread master, Lord Cthulhu."

An awakened Cthulhu: the inevitable result of fundamental research.

But, I suppose time (what's left of it) will tell.  The joy of science is in the surprise and the eldritch horror.



Q: The information contained in a big system isn’t the same as the amount of information in its parts. Why?

The original question was: Information doesn’t behave like any other quantifiable property. For example, to describe something which can be in any of 8 states, I would need 3 bits of information. To describe something which can be in any of 5 states you also need 3 bits of information. This implies that the amount of information needed to describe a compound system is not a function of the amount of information needed to describe each component system.


Physicist: The amount of information in a compound system is the sum of the amounts of information needed to describe its components (assuming those components are independent).  However, describing information in bits (0’s and 1’s), gives you the impression that the number of bits in something should be a whole number.  After all, how can something have 1.5 bits of information?  Either there’s a single 0 or 1, or there are two 0’s or 1’s.  As it happens, you can have a non-integer number of bits, but the reasoning behind exactly how is a little subtle.

In the 8-state example it takes exactly 3 bits to specify a state.  Three bits are needed to specify any one state, and any one state can specify any 3 bit combination.

Two 8-state systems together are a 64-state system that takes 6 bits to describe (6=3+3).  Easy nuff.

It takes 3 bits to distinguish between 5 things, but 3 bits contain a little more information; enough to distinguish between 8 things.

In the 5-state example it takes 3 bits to specify the state, however the state cannot specify any 3-bit combination.  This means that a 5-state system contains less than 3 bits of information, and yet somehow more than 2.

In general, 1 bit can distinguish between 2 things, 2 bits can distinguish between 4, 3 bits can distinguish between 8, and N bits can distinguish between 2^N things.  So if you've got M states, and M ≤ 2^N, then N bits will be enough to distinguish between them.  For example, it takes 5 bits to specify a Greek, English, or Arabic letter (24, 26, and 28 characters) and 6 bits to specify a Cyrillic letter (33 characters).  Taking the log base 2 of both sides turns M ≤ 2^N into log2(M) ≤ N.  That is, the number of bits you need to distinguish between M states is about log2(M).

Two 5-state systems together are the same as a 25-state system.  It only takes 5 bits to describe a 25-state system (25 < 32 = 2^5), as opposed to the 6 you might guess from the fact that 3 bits are needed to describe each of the 5-state systems individually.

A pair of 5-state systems can be arranged in 25 combinations and can be described by 5 bits (which have 32 combinations).

Keep in mind that information theory originally branched off of communication theory.  So rather than thinking about describing a single system, information theorists think about describing lots of systems, one after another, like letters or numbers (like what you’re reading now), and then conveying that description to someone else.  Often, you can save a little data by describing several states at the same time.  In this case, by describing pairs of 5-state systems, instead of describing each system one at a time, you save 1 bit.

The more M-state systems you describe all at once, the closer you'll get to log2(M) bits per system.  If you have a lot of 5-state systems and you describe them together, you find that it'll take approximately log2(5) = 2.3219 bits (2^2.3219 ≈ 5) to describe each system on average.  For example, if you describe 100 5-state systems together you're describing a 5^100-state system (forgive me for not writing out the entire number), and you'll need 233 bits.  2.33 bits per system, by the way, is pretty close to log2(5).
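A couple of quick numbers to go with that, using nothing fancier than ceil(log2(M)) (the alphabets and the block of one hundred 5-state systems are the examples from the text):

```python
# Bits needed to pick out one of M equally likely states: ceil(log2(M)).
import math

for name, m in [("Greek", 24), ("English", 26), ("Arabic", 28), ("Cyrillic", 33)]:
    print(f"{name:8s} ({m} letters): {math.ceil(math.log2(m))} bits")

# Describing 100 five-state systems in one go:
bits_for_block = math.ceil(100 * math.log2(5))
print(f"100 five-state systems together: {bits_for_block} bits "
      f"({bits_for_block / 100:.2f} bits per system, vs log2(5) = {math.log2(5):.4f})")
```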


Answer Gravy: Even cooler, since information is defined in terms of a system being described over and over, it's affected by probabilities (a single instance of something doesn't really have probabilities).  If the system you're trying to describe tends to show up in some states more than others (like "e" in English), then it would be nice to describe those common states with a shorter series of bits than rarer states (like "q" in English).  It turns out (and it's not immediately obvious why, so don't stress) that the information, I, of a system with states 1, 2, 3, …, n, where the probability of each state is given by p1, p2, p3, …, pn, is I = -p1 log2(p1) - p2 log2(p2) - … - pn log2(pn).  When the probabilities are all the same (each p = 1/n), this reduces to what the post has been talking about so far: I = log2(n).

So, for example, if you see A 1/2 the time, and B and C 1/4 of the time, then in bits, the average amount of information in the system is

\begin{aligned}I &= -\frac{1}{2}\log_2\left(\frac{1}{2}\right) - \frac{1}{4}\log_2\left(\frac{1}{4}\right) - \frac{1}{4}\log_2\left(\frac{1}{4}\right)\\ &= -\frac{1}{2}(-1) - \frac{1}{4}(-2) - \frac{1}{4}(-2)\\ &= \frac{1}{2}+\frac{1}{2}+\frac{1}{2}\\ &= \frac{3}{2}\end{aligned}

You can optimally encode this system into bits using "Huffman coding": A=0, B=10, and C=11.  So, for example, 0111101110001010010 = ACCACBAABBAB.  The average length of the code for each letter is 1·(1/2) + 2·(1/4) + 2·(1/4) = 3/2 (length of code times probability), which is a little shorter than log2(3) ≈ 1.585.  In Huffman coding the most commonly seen state always gets the shortest codeword (here, "0"), leading to devastatingly funny comp-sci jokes like "0 = screw you".
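Here's a tiny sketch of that bookkeeping (the probabilities and the A=0, B=10, C=11 code are exactly the ones above): it computes the entropy and the average code length, then encodes and decodes the example string.

```python
# Entropy and average code length for the A/B/C example above.
import math

probs = {"A": 0.5, "B": 0.25, "C": 0.25}
code = {"A": "0", "B": "10", "C": "11"}   # the prefix code from the post

entropy = -sum(p * math.log2(p) for p in probs.values())
avg_len = sum(probs[s] * len(code[s]) for s in probs)
print(f"entropy = {entropy} bits per symbol, average code length = {avg_len} bits")  # both 1.5

def encode(message):
    return "".join(code[s] for s in message)

def decode(bits):
    inverse = {v: k for k, v in code.items()}
    out, buffer = [], ""
    for b in bits:
        buffer += b
        if buffer in inverse:             # prefix code: no codeword is a prefix of another
            out.append(inverse[buffer])
            buffer = ""
    return "".join(out)

bits = encode("ACCACBAABBAB")
print(bits, "->", decode(bits))           # 0111101110001010010 -> ACCACBAABBAB
```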

The effect of probabilities on written English (like, e’s are common, q is always followed by u, every word has a vowel, etc.), according to Shannon’s somewhat ad-hoc experiments, reduces the information from the expected 4.7 bits (log2(26)) per letter to only about 2.3 bits per letter.

So, you can increase information density by shortening the code for the most commonly used things.  This happens very naturally in everyday speech.  Those things more likely to be talked about tend to have shorter names.  For example, “the world wide computer network that the kids today are so excited about”, “the currently presiding president of the United States of America”, and “the dog with whom I live, for which I feel nothing, that from time to time responds to the name ‘Professor Emeritus Meowington III, esq.’ “, are almost always referred to in much more compact forms like “the net”, “the president”, and “the dog”.

The dog.

