Q: Does the 2nd law of thermodynamics imply that everything must eventually die, regardless of the ultimate fate of the universe?

Physicist: The 2nd law of thermodynamics states that in any closed system entropy will increase over time. The exact rate at which entropy increases is situation dependent (e.g., being on fire or not).

As a quick aside, one of my favorite Creationist (pardon, “Intelligent Design”) arguments uses the second law.  That is, a living Human body has far less entropy than an equivalent amount of (most) inorganic matter, so how could living things have come from non-living things?  Well, that’s a stunningly hard question, and we’re working on it.  Patience.  However, entropy isn’t a problem here, because the system of the biosphere is not closed.  We get a constant supply of low-entropy visible light from the Sun, and the Earth in turn sprays out a hell of a lot of high-entropy infra-red light.  For every one photon we get from the Sun we re-emit about twenty randomly into space.  That huge entropy sink is more than enough to offset all life, and a lot more.

The Bowhead Whale and the Galápagos Tortoise: two species lucky enough to live for a couple hundred years.

Back to the point.  Is the long, grinding decay of the body inevitable in theory as well as in practice?  Nope.

Nothing survives the heat death of the universe of course, but there’s strong evidence that, if the environment stayed more or less the way it is today, then something “Human-ish” could live (maybe) indefinitely.  Single-celled organisms never die of old age; they either die for environmental reasons or from tiny murder.  Instead of dying when they get old, they split in half and each half then grows to full size and repeats the process.  In a very literal sense, we’re all just different parts of the same, still-living, ancient primordial life form (much love, Chopra).

Single celled organisms: kinda immortal.

There are living examples of creatures today that just don’t die on their own, such as the Turritopsis nutricula jellyfish (which has been shown to cycle indefinitely between its adult and adolescent forms) and maybe (but probably not) the Hydra genus.  The point is: dying of old age is not a written-in-stone requirement for life.

The hydra which might be immortal, and the Turritopsis nutricula jellyfish which almost certainly is.

So, things don’t grow old and die due to entropy (strictly).  The effect of entropy seems to take the form of the accumulation of injuries, toxins, parasites, mutations, and general wear and tear.  The “choice” that a species has to make is between fixing bodies as they accrue damage, or shitcanning them and starting over.  By “shitcan and start over” I mean put a lot of energy into perfectly maintaining a few hundred cells (the “germ line“) while merely patching, rather than truly fixing, the damage throughout the rest of the body.  New creatures that grow out of the germ line (babies) start with a damage-free blank slate.

Also: bonus!  Dying of old age helps clear the way for evolution to do its thing.  The young (and slightly different) merely have to compete with each other, instead of with well-established and ancient creatures.  Without natural death Earth might be home to nothing more interesting than mold (which is boring).

Now consider this: the statement that “entropy always increases” is just a fancy way of saying that “the world tends toward the most likely/stable end” or “the world tends to be in a state that has the most ways of happening”.  In this case, there are a lot more ways to be dead than alive.  As a result, you may have noticed that there are plenty of ways to accidentally die, but really just the one way to accidentally come to life.  Statisticians (being weird and morbid) have figured out that if Humans were biologically immortal the average lifespan would be around 600-700 years.  It takes about that long to slip in the shower or something (statistically).
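The arithmetic behind that estimate can be sketched with a constant-hazard model.  The 0.15% annual accident probability below is a hypothetical number picked to land in the quoted 600-700 year range, not real actuarial data:

```python
# If every year carries the same independent chance p of a fatal accident,
# lifespan is geometrically distributed and the mean lifespan is 1/p.
p = 0.0015  # hypothetical: 0.15% chance per year (not a real actuarial figure)
mean_lifespan = 1 / p
print(round(mean_lifespan))  # 667 years: in the quoted 600-700 range

# Same answer by summing the survival curve directly:
# E[T] = sum over years of P(still alive at the start of that year)
survival = sum((1 - p) ** year for year in range(100_000))
print(round(survival))  # 667
```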

Posted in -- By the Physicist, Biology, Entropy/Information, Evolution, Paranoia, Philosophical | 9 Comments

Q: What is The Golden Ratio? How is it used in Mathematics?

Physicist: The golden ratio, g, is g=\frac{1+\sqrt{5}}{2}\approx1.62.
The golden ratio is defined in many (equivalent) ways but the best known is: if A and B are two numbers such that the ratio of A+B to A is equal to the ratio of A to B, then g=A/B.

A rectangle, where the ratio of the long side to the short side is g, is called the “golden rectangle”.  The ancient Greeks, being creepy numerologists with nothing better to do, were really excited about the golden rectangle and worked it into a lot of their art and architecture.

The Golden Rectangle: Clearly, it's the most perfect rectangle ever.

Setting the ratios equal to each other you can solve for g (approx. 1.62): \begin{array}{ll}\frac{A+B}{A}=\frac{A}{B}\Rightarrow 1+\frac{B}{A}=\frac{A}{B}\Rightarrow 1+\frac{1}{g}=g\Rightarrow g+1=g^2\\\Rightarrow 0=g^2-g-1\end{array}
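If you’d rather let a computer do the algebra, here’s a quick sketch that solves the quadratic and checks the defining ratio property:

```python
import math

# The positive root of 0 = g^2 - g - 1, by the quadratic formula:
g = (1 + math.sqrt(5)) / 2
print(g)  # 1.618033988749895

# Check the defining property: (A + B)/A equals A/B when A/B = g.
A, B = g, 1.0
print(math.isclose((A + B) / A, A / B))  # True
```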

It’s this definition, the solution of this equation (0=x^2-x-1), that is usually the vehicle that brings g into the conversation (In math circles they usually say “mathversation”).  Or, to put it another way, it shows up by coincidence.

For example: the Fibonacci sequence is a string of numbers, F_0, F_1, F_2, \ldots such that F_N = F_{N-1} + F_{N-2}, with F_0 = 0 and F_1 = 1.  You can quickly see that the string of numbers is 0, 1, 1, 2, 3, 5, 8, 13, 21, …

It takes a bit of work, but (after some math happens) it turns out that F_N \approx \frac{1}{\sqrt{5}}g^N (round to the nearest integer).  So what does the golden ratio have to do with the Fibonacci sequence?  Not a damn thing, really.  It’s just that about halfway through the derivation you’ll find yourself staring at a “0=x^2-x-1”.
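The approximation is easy to check numerically.  This sketch compares the recurrence to the rounded g^N/sqrt(5) formula:

```python
import math

g = (1 + math.sqrt(5)) / 2

def fib(n):
    """N-th Fibonacci number via the recurrence F_N = F_{N-1} + F_{N-2}."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# F_N is exactly the nearest integer to g^N / sqrt(5):
for n in range(20):
    assert fib(n) == round(g**n / math.sqrt(5))

print([fib(n) for n in range(9)])  # [0, 1, 1, 2, 3, 5, 8, 13, 21]
```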

“g” showed up a lot for the ancient Greeks because they spent a lot of time playing around with straight edges and compasses.  The equation that describes a circle is quadratic, and the equation that describes a straight line is linear.  So when you’re trying to figure out where an intersection is, or how long a line segment is, you’ll be solving quadratic equations, and “0=x^2-x-1” (being simple) is one of the equations you’ll frequently see.

One example of the relationship between straight-edge-and-compass geometry, and the exciting world of quadratic equations.
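To make that concrete with one classic compass-and-straightedge figure (the pentagon here is my choice of example, not the only one): the ratio x of a regular pentagon’s diagonal to its side satisfies x^2 = x + 1, i.e. that same 0 = x^2 - x - 1.

```python
import math

# Diagonal of a regular pentagon with side length 1.  By Ptolemy's theorem
# the diagonal-to-side ratio x satisfies x^2 = x + 1, so it equals g.
diagonal = 2 * math.sin(math.radians(54))  # chord spanning two sides
print(diagonal)                # 1.618033988749895 (approximately)
print((1 + math.sqrt(5)) / 2)  # 1.618033988749895
```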

Finally, g showed up again a couple hundred years ago in the study of “continued fractions“.  However, I personally have only seen continued fractions used exactly once (in the context of rational number approximation), and g was nowhere to be seen.  The numbers that mathematicians are most excited about these days are: 0, 1, e, and π.
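For the curious, the continued-fraction connection is easy to see numerically: g is the value of the continued fraction made entirely of 1’s, the simplest one there is.  A sketch:

```python
import math

# g = 1 + 1/(1 + 1/(1 + ...)): the all-ones continued fraction.
# Evaluating it iteratively converges to the golden ratio.
x = 1.0
for _ in range(40):
    x = 1 + 1 / x

print(x)                       # 1.618033988749895 (approximately)
print((1 + math.sqrt(5)) / 2)  # 1.618033988749895
```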

Posted in -- By the Physicist, Equations, Geometry, Math, Philosophical | 8 Comments

Q: Why can’t you have an atom made entirely out of neutrons?

Physicist: The short answer is: you probably can, but not for long.

If you’ve taken a little chemistry you probably know that the electrons in an atom “stack up” in energy levels.  The more electrons you add, the higher the electrons have to stack.  The same is true for the protons and neutrons inside the nucleus of the atom.  What’s a little surprising is that the stack for the protons and the stack for the neutrons are independent of each other.

In a stable atom the energy in the proton stack will be about the same as the energy in the neutron stack.  If they’re unequal, then a neutron will turn into a proton (\beta^- decay), or a proton will turn into a neutron (\beta^+ decay) to even out the levels.  The greater the difference the sooner the decay, and the more radioactive the atom.  There are plenty of exceptions (e.g., Uranium 237 vs. 238), but the pattern usually holds.

Carbon 14 is radioactive because it has too many neutrons. Neutronium has the same problem.

An atomic nucleus made entirely out of neutrons (known to some sci-fi aficionados as “neutronium”) would be completely imbalanced and would decay instantly.  It would be tremendously radioactive.

Chemistry nerds may have noticed that heavier elements have more neutrons than protons.  For example, Iron 58 has 26 protons, 32 neutrons, and is stable.  Forcing protons together takes a lot of energy (likes repel), so beyond hydrogen, proton energy levels climb more quickly than neutron energy levels.

Exception!  In neutron stars you have the added component of lots of gravity.  When a proton and an electron fuse into a neutron they take up less room, and since gravity wants to crush everything together, this is a lower energy state.  So neutron stars are the one example of stable neutronium.  However, most people would say that calling an ex-star an atomic nucleus is pushing the definition of “atomic” a bit far.  Even more exciting: neutron stars may also contain the only naturally occurring stable lambda particles!


Posted in -- By the Physicist, Particle Physics, Physics | 17 Comments

Q: What is the physical meaning of “symmetries”? Why is there one-to-one correspondence between laws of conservation and symmetries? Why is it important that there is such correspondence?

Physicist: This is the shortest answer yet: “Noether“.

When a physicist talks about symmetry, they don’t usually mean symmetry the way everyone else in the world does.  The backbone of mechanics (both classical and quantum) is the “Lagrangian”, \mathscr{L}.  Basically, the Lagrangian is \mathscr{L}=T-V where T is kinetic energy and V is potential energy (some systems are easier to describe than others).  You can use the Lagrangian as a shortcut for describing all kinds of physical phenomena and dynamics, using the principle of least action.  This is a course on its own (several courses), so I won’t go into it.  If you can change some variable without changing the dynamics the Lagrangian describes, then you’ve found a “symmetry”.

For example, near the Earth’s surface, gravitational potential energy is given by V=mgz (mass times gravity times height), and as always kinetic energy is given by T=\frac{1}{2}mv^2 (one half mass times velocity squared).  So \mathscr{L}=\frac{1}{2}mv^2-mgz.

Right off the bat you’ll notice that \mathscr{L} has no x or y (just z), so if you change the x or y it doesn’t change \mathscr{L} at all.  Symmetry!  It turns out that this gives you conservation of momentum in the x and y directions (that’s not obvious, btw).

You’ll also notice that there is a z in \mathscr{L}.  As a result, changing z changes \mathscr{L}, and you don’t have conservation of momentum in the z direction (up-down direction).  Try holding something out and letting it go: it will suddenly start moving toward the ground (as if by magic!), which is momentum definitely not being conserved.

Notice also there are no “t’s” involved (no time dependence) in \mathscr{L}.  This one gives you conservation of energy!
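Those three observations can be checked numerically.  A minimal sketch (plain Python, with made-up numbers for the mass and initial velocities) follows a projectile along its exact trajectory and compares the conserved and non-conserved quantities:

```python
# A toy check of the symmetry argument for L = (1/2) m v^2 - m g z.
# The mass, field strength, and initial velocities are made-up numbers.
m, grav = 2.0, 9.8
vx0, vz0 = 3.0, 10.0

def momenta_and_energy(t):
    """Momentum components and total energy at time t (exact trajectory)."""
    vx, vz = vx0, vz0 - grav * t
    z = vz0 * t - 0.5 * grav * t**2
    energy = 0.5 * m * (vx**2 + vz**2) + m * grav * z
    return m * vx, m * vz, energy

px0, pz0, E0 = momenta_and_energy(0.0)
px1, pz1, E1 = momenta_and_energy(1.5)

print(px0 == px1)           # True:  no x in L -> x-momentum conserved
print(pz0 == pz1)           # False: z appears in L -> z-momentum not conserved
print(abs(E0 - E1) < 1e-9)  # True:  no t in L -> energy conserved
```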

This was an example of a very straightforward, simple Lagrangian.  But, with a bit of slickness, and some well written (really nasty complex) Lagrangians, you can find dozens of symmetries that lead to conservation laws for energy, momentum, angular momentum, electric charge, particle number, Baryon number, Lepton flavor, all kinds of stuff!

The theorem that describes the correspondence between symmetries and conservation laws is “Noether’s theorem“.  It’s arguably one of the most important theorems ever.

Emmy Noether: crazy smart

The dynamics of a system are completely governed by the Lagrangian of that system which, frankly, you can often guess (“pulled it out of my keister” is a standard technique in physics circles).  This makes things easy.  When you hear about the “aesthetics of equations” physicists are often talking about Lagrangians.

However, there’s a big difference between having an equation and having a solved equation.  So, even when you can’t find a solution, if you can find a symmetry (and thus a conserved quantity) you’re a lot closer to being able to model your system.

The Lagrangian that describes the gravitational interaction of all the stars in our galaxy includes a different term for every pair of stars.  So since there are 500,000,000,000 stars in our galaxy (give or take), there are approximately 125,000,000,000,000,000,000,000 terms in \mathscr{L}.  Most of these terms are really small, but still…  Luckily, this Lagrangian still has the symmetries that lead to conservation of momentum, angular momentum, and energy, which is enough to build pretty solid computer models.  Huzzah for Noether!
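The count of pair terms is a quick back-of-the-envelope calculation (the star count is, of course, a rough assumption):

```python
# One gravitational term per unordered pair of stars: n(n-1)/2 of them.
n = 500_000_000_000  # rough star count (give or take)
pairs = n * (n - 1) // 2
print(pairs)  # 124999999999750000000000, i.e. about 1.25 * 10^23
```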

If you’d really like to learn how to use Lagrangians, then 1) learn some basic calculus of variations and Euler-Lagrange, and 2) get ready to use the principle of least action without knowing why it works (spookiest damn thing in physics).

Posted in -- By the Physicist, Equations, Philosophical, Physics | 1 Comment

Q: Why does energy have to be positive (and real)?

The original question was: I was reading an article about tachyons in Wikipedia and stumbled upon this sentence: “Because the total energy must be real then the numerator [mc^2] must also be imaginary”.  I’m confused by the fact that in the article they discuss imaginary mass, but don’t even consider imaginary energy.

My question is why energy is bound to be real?  Is there any law that precludes energy from having an imaginary value?  Perhaps this somehow follows from the law of conservation of energy?

Also, if you don’t mind, could you please discuss negative energy.  Is there a law that prevents it from existing?  What would be the implications if imaginary or negative energy has existed?


Physicist: This’ll be a little disappointing.

All physical “laws” are just observed patterns.  In every case energy has always been conserved and real.

However, we’ve made observations that imply that energy can be a little negative for a very short time (negative energy virtual particles), but actual negative energy has never been directly observed.  Virtual particles (by definition) can never be observed, so the same is likely to be true of negative energy.

So the physical law that forces energy to be real is: “energy is always real”.

You can re-write/change/make-up the laws of the universe to permit energy to have any complex values without running into any particularly nasty problems, although the universe would be extremely, incomprehensibly different.  That’s the context in which the Wikipedia article is written: made-up physics (there is no reason to think that tachyons exist).  And sadly, imagining a thing just doesn’t make it so (I’m talking to you, “The Secret“).

More to the point: when you write down the equation of energy for most systems you find that “the energy is quadratic”.  For example:

\begin{array}{ll}\textrm{Kinetic Energy}&\frac{1}{2}mv^2\\\textrm{Oscillator Energy}&\frac{1}{2}kA^2\\\textrm{E/M Field Energy}&\frac{E^2+B^2}{8\pi}\end{array}

Where m and k (mass and spring constant) are positive.  Since everything else is squared (quadratic) the energy must be positive.  The non-quadratic parts (in these cases m and k) always seem to be positive.
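As a tiny illustration of “quadratic means positive” (the mass here is an arbitrary positive number):

```python
# Quadratic means nonnegative: (1/2) m v^2 >= 0 for any v, as long as m > 0.
m = 1.5  # arbitrary positive mass
energies = [0.5 * m * v**2 for v in (-3.0, -1.0, 0.0, 2.5)]
print(all(E >= 0 for E in energies))  # True
```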

With regard to negative energy: if you could get a lot of it, and condense it into negative matter, you could make some serious money.  Negative matter, often called “exotic matter” and not to be confused with anti-matter, does exactly the opposite of what ordinary matter does.  For example, it repels normal matter and it radiates coldness (as opposed to a lack of heat). But what’s really exciting about it is that it twists up spacetime in weird ways.

The equations that describe the curvature of spacetime are dependent on the distribution of matter and energy in space.  You can turn those equations on their head and ask “what is the distribution of matter that would lead to a spacetime shaped like ____?”.  Sometimes the result is a distribution of positive mass (and so is possible), and other times the solution requires negative matter (which is a no go).

In this way we’ve managed to figure out: how to arrange matter to force a region of space to move quickly through time (possible with normal matter), stabilize a wormhole (requires lots of negative matter), and even build a warp drive (negative matter again).

The warp drive in practice.

There are plenty of people excited about negative energy (so explore around), but don’t expect any of it to pan out.

Posted in -- By the Physicist, Particle Physics, Physics | 6 Comments

Q: How does the Twin Paradox work?

The original question was: I have a question about the twin paradox.  Is it true that faster aging of the twin who stayed at home happens only when the other twin’s spaceship is accelerating/decelerating (btw, does it matter whether he is accelerating or decelerating?)?  Consequently, do they age at the same rate when the spaceship moves inertially?


Physicist: The very short answer is: geometry works different in spacetime than it does in just space.

The twin paradox is a result from special relativity that states that if one person, Alice, remains “stationary” and another person, Bob, takes any kind of round trip, then the stationary Alice will experience more time.  The twin paradox isn’t a paradox at all, it’s just strange and off-putting (like twins).
In relativity (that is to say: “in reality”) there’s no difference between being stationary and having smooth (non-accelerating) movement.  On the surface of it, the only difference between Alice and Bob is that, in order to return home, Bob has to accelerate (turn around) at some point.  So is acceleration the secret to the twin paradox?  Nope.

In all of the pictures that follow the “time direction” is up, and one of the (three) space directions is left/right.

Out and back: In both situations Bob (blue) experiences the same acceleration, while Alice (aqua) sits around on Earth. The only difference is that the second situation involves twice the distance, and twice the difference in experienced time.  Acceleration is not what’s important.

The trick is: spacetime doesn’t obey the “triangle inequality”. As a result, the bendier a path is, the shorter it is (that shouldn’t make any sense, so please read on).

The triangle inequality says that (in space) the sum of any two sides of a triangle is greater than or equal to the third side. In spacetime the inequality can be reversed. One side is often longer than the other two: a round-about route is shorter than the direct route.  In this case, A+C<B.

The equation for distance that we’re used to is: D^2=\Delta x^2+\Delta y^2+\Delta z^2 (this is just the Pythagorean theorem).  But you find that when you start involving time and movement, this isn’t a particularly good measure of the distance between two points. Specifically, it’s different for different observers because of length contraction.
It so happens that the effects of length contraction and time dilation cancel each other perfectly, so that we can use a new (better) measure for spacetime distance, called the “Interval” or “spacetime interval” or “Lorentz interval”:

L^2=c^2\Delta t^2-\Delta x^2-\Delta y^2-\Delta z^2

(as often as not the sign on the right hand side is reversed, not to worry)

The advantage to the Interval is that, no matter what, the Interval between any two points in spacetime (two locations and times) is always the same, despite relativistic weirdness.  Here’s another bonus!  The Interval of a path is (up to a factor of c) the amount of time experienced on that path!

No one ever really feels like their own position is changing, so: L^2=c^2\Delta t^2-0^2-0^2-0^2\Rightarrow L=c\Delta t

Now all that’s left is to draw a picture and do a little calculating.  Here’s an example situation from Alice’s perspective, and then Bob’s (initial) perspective.  The difference between Alice and Bob’s velocity is 0.6C (60% of light speed).

The same situation from two perspectives, but since L is invariant you get the same values.  Alice experiences 10 units of time while Bob experiences 8 units (since, for each leg of the trip, 5^2-3^2=16=4^2).
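The arithmetic of the 0.6C example can be reproduced in a few lines (working in units where c = 1):

```python
import math

# Proper time along a straight spacetime segment, in units where c = 1:
# L^2 = t^2 - x^2, and L is the time experienced along that segment.
def proper_time(t, x):
    return math.sqrt(t**2 - x**2)

# Alice sits still for 10 units of time; Bob travels out at 0.6c for
# 5 units (covering x = 3) and then comes back the same way.
alice = proper_time(10, 0)
bob = proper_time(5, 3) + proper_time(5, 3)  # out leg + back leg
print(alice)  # 10.0
print(bob)    # 8.0

# Invariance check: Lorentz-boost Bob's out-leg endpoint into his own
# initial frame (v = 0.6); the Interval comes out the same.
v = 0.6
gamma = 1 / math.sqrt(1 - v**2)
t2, x2 = gamma * (5 - v * 3), gamma * (3 - v * 5)
print(proper_time(t2, x2))  # approximately 4.0, same as in Alice's frame
```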

Posted in -- By the Physicist, Physics, Relativity | 57 Comments