Q: How fast are we moving through space? Has anyone calculated it?

The original question was: Considering the spin of the earth, its orbit around the sun, the sun’s orbit around the Milky Way and the Milky Way’s journey through interstellar space, has anyone calculated our speed through the universe?


Physicist: The short answer is “yes”, and the long answer is “well… yes”.

The problem with motion is that “true motion” doesn’t exist.  The best we can do is talk about “relative motion”, and that requires something else to reference against.  What you consider to be stationary (what you choose to define your movement with respect to) is a matter of personal choice.  The universe isn’t bothered one way or the other.

Relative to your own sweet self: Zero.  This sounds silly, but it’s worth pointing out.

Relative to the Earth: The Earth turns on its axis (you may have heard), and that amounts to about 1,000 mph at the equator.  The farther you are from the equator the slower you’re moving.  This motion can’t be “ignored using relativity”, since relativity only applies to constant motion in a straight line, and movement in a circle is exactly not that.  This motion doesn’t have much of an effect on the small scale (people-sized), but on a planetary scale it’s responsible for shaping global air currents (including hurricanes!).
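For the quantitatively inclined: that latitude dependence is just a cosine, since your circle of latitude shrinks as you move toward the poles.  Here’s a minimal sketch (using the rough 1,000 mph equatorial figure from above; the function name is ours):

```python
import math

EQUATORIAL_SPEED_MPH = 1000  # rough figure quoted above

def rotation_speed_mph(latitude_deg):
    """Approximate speed due to Earth's rotation at a given latitude.

    Your circle of latitude has radius R*cos(latitude), so your speed
    falls off like cos(latitude).
    """
    return EQUATORIAL_SPEED_MPH * math.cos(math.radians(latitude_deg))

for lat in (0, 30, 45, 60, 90):
    print(f"latitude {lat:2d}°: ~{rotation_speed_mph(lat):4.0f} mph")
```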

Relative to the Sun: The Earth orbits the Sun at slightly different speeds during the year; fastest around New Year’s and slowest in early July (because it’s closer to or farther from the Sun, respectively).  But on average it’s around 66,500 mph.  By the way, the fact that this lines up with our calendar year (which could be argued to be based on the tilt of the Earth, which dictates the length of the day) to within days is a genuine, complete coincidence.  This changes slowly over time, and several thousand years from now it will no longer be the case.  Fun fact.

Relative to the Milky Way: The Sun moves through the galaxy at somewhere around 52,000 mph.  This is surprisingly tricky to determine.  There’s a lot of noise in the speed of neighboring stars (it’s not unusual to see stars with a relative speed of 200,000 mph), and those are the stars we can see the clearest.  Ideally we would measure our speed relative to the average speed of the stars in the galactic core (like we measure the speed at the equator with respect to the center of the Earth); however, that movement is “sideways”, and in astronomy it’s much, much easier to measure “toward/away” speed using the Doppler effect.  Of the relative speeds mentioned in this post, the speed of our solar system around the galaxy is the only one that isn’t known very accurately.

Relative to the CMB: The Milky Way itself, along with the rest of our local group of galaxies, is whipping along at 550 km/s (1.2 million mph) with respect to the Cosmic Microwave Background.  Ultimately, the CMB may be the best way to define “stationary” in our corner of the universe.  Basically, if you move quickly then the light from in front of you becomes bluer (hotter), and the light from behind you gets redder (colder).  Being stationary with respect to the CMB means that the “color” of the CMB is the same in every direction, or, more accurately (since it’s well below the visual spectrum), that the temperature of the CMB is the same in every direction (on average).
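For a sense of scale, the lowest-order Doppler effect says the temperature difference between “straight ahead” and “straight behind” is about \Delta T \approx T\frac{v}{c}.  A back-of-envelope sketch (taking the 550 km/s figure from above and the standard 2.725 K mean CMB temperature; both are inputs, not outputs):

```python
C_KM_S = 299_792.458   # speed of light (km/s)
T_CMB_K = 2.725        # mean CMB temperature (K)
V_KM_S = 550.0         # local group speed relative to the CMB (km/s), from above

# Lowest-order Doppler dipole: slightly hotter ahead, slightly colder behind.
delta_T_K = T_CMB_K * V_KM_S / C_KM_S
print(f"CMB dipole: ~{delta_T_K * 1000:.1f} mK on a {T_CMB_K} K background")
# ~5 mK, about a 0.2% effect, which is why it takes careful measurement to see
```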


Q: If you flip a coin forever, are you guaranteed to eventually flip an equal number of heads and tails?

The original question was: Let’s say we have a fair coin that is flipped a hundred times and at the end of the trial there have been 40 tails and 60 heads.  At this time there have been 20 more heads than tails, and it could be said that heads is “dominant”.

Is it inevitable that, given enough time to occur, eventually a “switch-over” in dominance will occur and tails will be dominant, or is it the case that dominance by heads or tails can carry on indefinitely?

If a “self correction” is inevitable, how long, on average, would a correction take (the equalizing of the number of heads and tails)?  Is it something you will see by the end of the day, or something you won’t see in your lifetime (on average, of course; I accept that both are theoretically possible)?

What is the likely number of flips between each “switch-over” and is a “switch-over” guaranteed?


Physicist: In the very specific case of this question: yes.  In fact (for this very specific case) any given tally (e.g., 7,823 more heads than tails) will happen eventually, regardless of what the imbalance presently is.  Since that includes #-of-heads = #-of-tails, then eventually (in less than infinite time) you’ll always flip the same number of heads as tails.  Even “better”, this will happen an infinite number of times.  If you really keep at it.

However, this is entirely due to the fact that this coin example is the same as the “1 dimensional discrete random walk”.  If you assign “+1” to heads and “-1” to tails, then you can keep a running tally with one number.  The fact that, starting with a tally of zero, you’ll eventually return to zero isn’t any kind of “self correcting” property of the universe.  Even slightly different situations (such as one side of the coin being even a tiny, tiny bit more probable) will create “non-recurrent” strings of coin flips.

The tally changes by about the square root of the number of flips (on average), and this can be either an increase or decrease (equal chance).  This is an important part of the “central limit theorem”.  Starting at zero, if you’re flipping 10 coins a minute, then at the end of the day you can expect the difference between the number of heads and tails to be in the neighborhood of \sqrt{10\times60\times24} = 120.
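If you’d rather check that than trust it, here’s a minimal simulation (the flip rate and trial count are just the numbers from the example above):

```python
import random

def final_tally(flips):
    """Heads-minus-tails tally after a given number of fair flips."""
    return sum(random.choice((1, -1)) for _ in range(flips))

FLIPS = 10 * 60 * 24   # 10 flips a minute for a day = 14,400 flips
TRIALS = 1_000

avg_gap = sum(abs(final_tally(FLIPS)) for _ in range(TRIALS)) / TRIALS
print(f"average |heads - tails| after {FLIPS} flips: ~{avg_gap:.0f}")
# lands in the neighborhood of sqrt(14,400) = 120
# (the exact average is sqrt(2N/pi), which is about 96 here)
```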


Just so you don’t have to do it: three days spent flipping and counting coins will produce running tallies like this.  The curves are \pm\sqrt{N}, where N is the number of flips, and most of the time (~68%) the tally will stay between these lines.

Anything between, say, 200 extra heads and 200 extra tails is pretty normal, which is a little surprising considering that 200 is pretty small for a full day of coin flipping.  During the course of that day there’s a good chance you’ll “pass zero” several times.

The question of how long it will take for you to get back to the same tally is a little subtle.  If you’re close to zero, then there’s a good chance you’ll hit zero several more times in rapid succession.  But one of those zeros will be the last for a while since inevitably (and completely at random) you’ll get a string of tails or heads that carries you away from zero.  Every possible tally is achieved eventually, and since there are arbitrarily large numbers, there are arbitrarily long return-to-zero times.  That isn’t a rigorous proof, but it turns out that the average return time is infinite.

So basically, if you’re flipping a coin and the total number of heads is equal to the total number of tails, then the same is likely to be true soon.  But if it doesn’t happen soon, then it probably won’t happen again for a long time.  Really, the best you can say is that “square root thing”, which is that the tally will usually be within \pm\sqrt{N}, where N is the number of flips.


Answer gravy: For those of you who wanted the details behind that last bit, it turns out to be not too difficult to figure out how long it will take you to get back to zero.

However you get back to zero, you’ll have to flip the coin 2N times (you can’t have an equal number of heads and tails if you flip an odd number of times).  Now say you have a series of tallies that starts at zero, then stays greater than or equal to zero until the 2Nth flip.  The number of possible ways for this to happen is the Nth Catalan number, C_N, which can be described as “the number of ways of arranging N pairs of parentheses so that they make sense”.  Turns out that C_N = \frac{1}{N+1}{2N \choose N} (this is a choose function).

The probability that the 2Nth coin flip is the first that brings the tally back to zero, P(2N), is the number of paths that didn’t come back to zero before, divided by the total number of paths.

C_N is the number of ways for the tally to be greater than or equal to zero.  To be strictly greater than zero, we need a little trick.  Say the first coin flip takes the tally to +1.  If the 2Nth coin flip brings the tally to zero (for the first time), then on the (2N-1)th coin flip the tally was +1 as well.  So, the total number of paths starting with “heads” that make flip 2N the first return to zero is C_{N-1} (the number of paths greater than or equal to one, between the first flip and the second to last flip).  Same thing works if the first flip is tails, which means we need to multiply the number of paths by two: 2C_{N-1}.

The total number of possible paths is easy: 2^{2N}.  So,

P(2N) = \frac{2C_{N-1}}{2^{2N}} = \frac{1}{N\,2^{2N-1}}{2(N-1) \choose N-1} = \frac{1}{N\,2^{2N-1}}\frac{(2N-2)!}{(N-1)!(N-1)!}

Just to double check: notice that P(2) = 1/2, which is exactly what you’d expect (“TH” and “HT”, but not “HH” and “TT”).  If you’re close to, or at zero, then there’s a good chance you’ll be there again in short order, and you’re guaranteed to come back eventually.
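Since the formula is exact, it’s also easy to sanity-check by brute force: enumerate every possible string of 2N flips and count the ones whose first return to zero lands on flip 2N.  A sketch (the function names are ours):

```python
from itertools import product
from math import comb

def first_return_prob(N):
    """P(2N) by brute force: check all 2^(2N) strings of flips."""
    hits = 0
    for flips in product((1, -1), repeat=2 * N):
        tally = 0
        for i, flip in enumerate(flips, start=1):
            tally += flip
            if tally == 0:
                if i == 2 * N:   # the first return to zero is the last flip
                    hits += 1
                break            # stop at the *first* return to zero
    return hits / 2 ** (2 * N)

def first_return_formula(N):
    """P(2N) = 2*C_(N-1)/2^(2N), with C_k the kth Catalan number."""
    catalan = comb(2 * (N - 1), N - 1) // N   # C_(N-1)
    return 2 * catalan / 2 ** (2 * N)

for N in (1, 2, 3, 4):
    print(N, first_return_prob(N), first_return_formula(N))
# both columns give 0.5, 0.125, 0.0625, 0.0390625
```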

The average number of flips to return to zero is:

\sum_{N=1}^\infty 2N \,P(2N) = \sum_{N=1}^\infty \frac{4\,(2N-2)!}{2^{2N}(N-1)!(N-1)!}

However!  This is what folk in the math biz call a “divergent series” because it “adds up to infinity”.  Each term in the summation is about equal to \frac{4\,(2N-2)!}{2^{2N}(N-1)!(N-1)!} \approx \frac{1}{\sqrt{\pi}}\sqrt{\frac{1}{N}} (using Stirling’s approximation).  While these do add up slower and slower (because you’re more likely to return to zero sooner rather than later), they still add up to infinity.

So, in a very technical/mathematical sense, if you flip a coin forever, then eventually you’ll get the same number of heads as tails.  Technically the average return time is infinite.  In practice 90% of the time you’ll get an equal number within 64 flips (fact!), and if you don’t then stop anyway.
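That “90% within 64 flips” figure is also easy to check by simulation rather than by summing the series.  A quick sketch:

```python
import random

def returns_within(max_flips):
    """True if the running tally gets back to zero within max_flips."""
    tally = 0
    for _ in range(max_flips):
        tally += random.choice((1, -1))
        if tally == 0:
            return True
    return False

TRIALS = 100_000
hits = sum(returns_within(64) for _ in range(TRIALS))
print(f"returned to zero within 64 flips: {hits / TRIALS:.1%}")  # ~90%
```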


Q: What is radioactivity and why is it sometimes dangerous?

Physicist: Here’s every particle you’ve ever interacted with: protons, neutrons, electrons, and photons*.  Dangerous radiation is nothing more mysterious than one of those particles moving crazy fast.

The nuclei of some kinds of atoms are unstable and will, given enough time, re-shuffle and settle into a lower-energy, more stable form.  The instability comes from an imbalance in the number of neutrons vs. protons.

The most common forms of “radioactive decay” are beta+ and beta-, and these happen because a nucleus has either too many protons or too many neutrons.  Beta- is a neutron turning into a proton, an electron, and some extra energy.  Beta+ is a proton turning into a neutron, an anti-electron, and some extra energy.  The protons and neutrons stay in the nucleus, and the new electron or anti-electron takes most of that new energy and flies away.  That fast electron or anti-electron is the radiation.


Tritium is a type of radioactive hydrogen that has one proton and two neutrons.  Occasionally, one of those neutrons will “pop” and turn into another proton and an electron.  The result is a helium-3 atom, and a really fast electron.

Sometimes a neutron is ejected (neutron radiation), usually when the entire nucleus breaks apart (which is nuclear fission).  Neutron radiation is exciting for physicists because neutrons are nice, no-fuss particles.  Without a charge, neutrons are about as close to atomic bullets as you can get.


The most common source of neutron radiation is fission, which generally spits out a few extra neutrons.  This picture is of a controlled reaction (activated by a nearby neutron source).

Finally, an “alpha particle” is sometimes ejected.  Alpha particles are a pair of protons, and a pair of neutrons, that are stuck together.  This is the same as a helium nucleus, so this is basically “high-speed-helium”.  “Alpha decay” is why there’s helium on Earth.  The helium that was around during the Earth’s formation found its way to the top of the atmosphere, and from there was knocked into space by solar radiation and wind.  Unlike hydrogen, which can bond to things chemically (the “H” in “H2O”), helium is a noble gas and doesn’t stick to anything.  All the helium that slowly bubbles out of the ground is from the radioactive decay of heavier elements inside of the Earth.  So, when you fill a balloon with helium, you’re literally filling it with what used to be radiation.  Fun fact: that slow dribble of helium doesn’t amount to much, and we’re about to run out.

The most common and dangerous kind of radiation is high-frequency light.  High-energy light is called “x-rays” and above that “gamma-rays”, and it tends to punch through shielding a lot better than the other kinds of radiation (that’s why x-rays can be used to look through things).  Alpha, beta, and neutron radiation are made of matter and they tend to bump into things and slow down.  A couple pieces of paper do a pretty decent job stopping alpha particles, and a few inches of water stop neutron and beta radiation remarkably well.  Gamma rays, on the other hand, are what lead shielding is for.

Radiation is dangerous because it can ionize, which breaks apart chemical bonds.  If that happens enough inside a living cell, then the cell will be killed.  With enough broken chemical “parts” they just stop working.  “Radiation poisoning” is what happens when you’ve suddenly got way too many dead cells in your body, and too many of the cells that remain are too damaged to reproduce.

When you get a sun burn you’re suffering from a little radiation damage.  UV light has enough of a punch to kill cells, which is a big part of why our outer layers of skin are just a bunch of dead skin cells: radiating dead cells doesn’t do much, so the body keeps them around to protect the living layers underneath.  This is also why really tiny bugs don’t like direct sunlight (they’re smaller than the protective layer they would need).

For those of you worried about radiation: wear sunscreen.  You’re far more likely to be harmed by chemical pollutants.  Other forms of light, like radiowaves, microwaves, or even visible light, don’t have enough power in each photon to ionize.  As a result, all they do is heat things up (not blast them apart).  In order for your cell phone to do any damage to you it would have to literally cook your head, as in “increase the temperature until such time as you are dead”.  In that respect, a warm room is far more “dangerous”.
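The “power in each photon” bit is just E = hf, Planck’s constant times frequency.  Plugging in a few everyday frequencies makes the point (a rough sketch; “a few eV” as the bond-breaking threshold is a ballpark figure, not a sharp line):

```python
PLANCK_EV_S = 4.136e-15  # Planck's constant (eV*s)

# It takes very roughly a few eV to ionize an atom or break a chemical bond.
sources_hz = {
    "FM radio (100 MHz)":     1e8,
    "cell phone (~2 GHz)":    2e9,
    "visible light (green)":  5.6e14,
    "UV-B sunlight":          1.0e15,
    "medical x-ray":          1e18,
}
for name, freq in sources_hz.items():
    print(f"{name:>22}: {PLANCK_EV_S * freq:.2e} eV per photon")
# radio and microwave photons carry ~1e-6 eV each: about a million times too
# feeble to ionize anything, no matter how many of them there are
```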


Radiators: far more dangerous than cell phones or radio towers.

Every living thing on the planet has developed at least some ability to deal with low-level radiation, which is unavoidable (some are ridiculously good at dealing with it).  Each cell in your body has error correcting mechanisms that deal with genetic damage (DNA blown apart by ionizing radiation), and even when a small fraction of your cells die: no problem.  They’re just put in the bloodstream, filtered out, and poo’d.  As it happens, dead red blood cells are a major contributor to the color of poo!  Chances are you’ll remember that particular and unappetizing fact forever, so… sorry.

You’re struck by about 1 particle of ionizing radiation per square cm per second.  More at high altitudes, and more during the day.  By far, the most dangerous source of radiation that you’re likely to come across (outside of a hospital), is the Sun.  Luckily, the Sun is easy to spot, and easy to avoid.  Shade and sunscreen.  Easy.



*There are other particles beyond those four, such as gluons or W bosons or even Higgs bosons, that show up all the time.  But they’re kinda “behind-the-scenes” particles that only show up for insignificant fractions of a second and are virtual.  If you find yourself in a situation where you’re interacting with these rarer particles, then you probably work at CERN and should know better.


Q: How do we know that π never repeats? If we find enough digits, isn’t it possible that it will eventually start repeating?

Physicist: In physical sciences we catalog information gained through observation (“what’s that?”), then a model is created (“I bet it works like this!”), and then we try to disprove that model by using experiments (“if we’re right, then we should see this weird thing happening”).  In the physical sciences the aim is to disprove, because proofs are always out of reach.  There’s always the possibility that you’re missing something (it’s a big, complicated universe after all).

Mathematics is completely different.  In math (and really, only in math) we have the power to prove things.  The fact that π never repeats isn’t something that we’ve observed, and it’s not something that’s merely “likely” given that we’ve never observed a repeating pattern in the first several trillion digits we’ve seen so far.

The digits of π never repeat because it can be proven that π is an irrational number, and the decimal expansions of irrational numbers never repeat.

If you write out the decimal expansion of any irrational number (not just π) you’ll find that it never repeats.  There’s nothing particularly special about π in that respect.  So, proving that π never repeats is just a matter of proving that it can’t be a rational number.  Rather than talking vaguely about math, the rest of this post will be a little more direct than the casual reader might normally appreciate.  For those of you who just scrolled down the page and threw up a little, here’s a very short argument (not a proof):

It turns out that \pi = 4\left(1-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\frac{1}{9}-\cdots\right).  But this string of numbers includes all of the prime numbers (other than 2) in the denominator, and since there are an infinite number of primes, there should be no common denominator.  That means that π is irrational, and that means that π never repeats.  The difference between an “argument” and a “proof” is that a proof ends debates, whereas an argument just puts folk more at ease (mathematically speaking).  The math-blizzard below is a genuine proof.  First,

Numbers with repeating decimal expansions are always rational.

If a number can be written as the D digit number “N” repeating forever, then it can be expressed as N\times 10^{-D} + N\times 10^{-2D} + N\times 10^{-3D}+\cdots.  For example, when N=123 and D=3:

\begin{array}{ll}0.123123123123123\cdots\\=0.123+0.000123+0.000000123+\cdots\\=123\times 10^{-3} + 123\times 10^{-6} + 123\times 10^{-9}+\cdots\end{array}

Luckily, this can always be figured out exactly using some very old math tricks.  This is just a geometric series, and N\times 10^{-D} + N\times 10^{-2D} + N\times 10^{-3D}+\cdots = N\frac{10^{-D}}{1-10^{-D}} = N\frac{1}{10^{D}-1}.  So for example, 0.123123123123123\cdots = 123\frac{1}{10^3-1} = \frac{123}{999}=\frac{41}{333}.

Even if the decimal starts out a little funny, and then settles down into a pattern, it doesn’t make any difference.  The “funny part” can be treated as a separate rational number.  For example, 5.412123123123123\cdots = 5.289 + 0.123123\cdots = \frac{5289}{1000} + \frac{41}{333}.  And the sum of any rational numbers is always a rational number, so for example, \frac{5289}{1000} + \frac{41}{333} = \frac{5289\cdot333 + 41\cdot1000}{1000\cdot333} = \frac{1802237}{333000}.
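Python’s exact-fraction arithmetic makes this easy to verify for yourself.  A sketch reproducing the examples above (the helper function is ours):

```python
from fractions import Fraction

def repeating_decimal(prefix, repeat):
    """Exact value of prefix.repeat-repeat-repeat..., e.g. ("5.412", "123")
    for 5.412123123123...
    """
    whole, _, frac = prefix.partition(".")
    funny_part = Fraction(int(whole + frac), 10 ** len(frac))
    # geometric series: the repeating block contributes repeat/(10^D - 1),
    # shifted right past the non-repeating digits
    tail = Fraction(int(repeat), 10 ** len(repeat) - 1) / 10 ** len(frac)
    return funny_part + tail

print(repeating_decimal("0", "123"))      # 41/333
print(repeating_decimal("5.412", "123"))  # 1802237/333000, as above
```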

So, if something has a forever-repeating decimal expansion, then it is a rational number.  Equivalently, if something is an irrational number, then it does not have a repeating decimal.  For example,

√2 is an irrational number

So, in order to prove that a number doesn’t repeat forever, you need to prove that it is irrational.  A number is irrational if it cannot be expressed in the form \frac{A}{B}, where A and B are integers.  √2 was the first number shown conclusively to be irrational (about 2500 years ago).  The proof of the irrationality of π is a little tricky, so this part is just to convey the flavor of one of these proofs-of-irrationality.

Assume that \sqrt{2} = \frac{A}{B}, and that A and B have no common factors.  Then it follows that 2B^2 = A^2.  Therefore, A is an even number since A^2 has a factor of 2.  But if \frac{A}{2} is an integer, then we can write: 2B^2 = 4\left(\frac{A}{2}\right)^2 and therefore B^2 = 2\left(\frac{A}{2}\right)^2.  But that means that B is an even number.

This is a contradiction, since we assumed that A and B have no common factors.  By the way, if they did have common factors, then we could cancel them out.  No biggie.

So, √2 is irrational, and therefore its decimal expansion (√2=1.4142135623730950488016887242096980785696…) never repeats.  This isn’t just some experimental observation, it’s an absolute fact.  That’s why it’s useful to prove, rather than just observe, that

π is an irrational number

The earliest known proof of this was written in 1761.  However, what follows is a much simpler proof written in 1946.  Unfortunately, there don’t seem to be any simple, no-calculus proofs floating around, so if you don’t dig calculus and some of the notation from calculus, then you won’t dig this.  Here goes:

Assume that \pi = \frac{a}{b}.  Now define a function f(x) = \frac{x^n(a-bx)^n}{n!}, where n is some positive integer, and that excited n, n!, is “n factorial“.  No problems so far.

All of the derivatives of f(x) taken at x=0 are integers.  This is because f(x) = \frac{x^n(a-bx)^n}{n!} = \sum_{j=0}^n \frac{n!}{j!(n-j)!}\frac{a^{n-j}(-b)^j}{n!} x^{n+j} (by the binomial expansion theorem), which means that the kth derivative is f^{(k)}(x) = \sum_{j=0}^n \frac{n!}{j!(n-j)!}\frac{a^{n-j}(-b)^j}{n!}(n+j)(n+j-1)\cdots(n+j-k+1) x^{n+j-k} = \sum_{j=0}^n \frac{n!}{j!(n-j)!}\frac{a^{n-j}(-b)^j}{n!}\frac{(n+j)!}{(n+j-k)!} x^{n+j-k} = \sum_{j=0}^n a^{n-j}(-b)^j\frac{(n+j)!}{j!(n-j)!(n+j-k)!} x^{n+j-k}

If k<n, then there is no constant term (an x^0 term), so f^{(k)}(0)=0.  If n≤k≤2n, then there is a constant term, but f^{(k)}(0) is still an integer.  The j=k-n term is the constant term, so:

\begin{array}{ll}f^{(k)}(0)=\sum_{j=0}^n a^{n-j}(-b)^j\frac{(n+j)!}{j!(n-j)!(n+j-k)!} 0^{n+j-k}\\[2mm]=a^{2n-k}(-b)^{k-n}\frac{k!}{(k-n)!(2n-k)!0!}\\[2mm]=a^{2n-k}(-b)^{k-n}\frac{k!}{(k-n)!(2n-k)!}\end{array}

a and b are integers already, so their powers are still integers.  \frac{k!}{(k-n)!(2n-k)!} is also an integer since \frac{k!}{(k-n)!(2n-k)!}=\frac{k!}{(k-n)!n!}\frac{n!}{(2n-k)!} = {k \choose n}\frac{n!}{(2n-k)!}.  “k choose n” is always an integer, and \frac{n!}{(2n-k)!} = n(n-1)(n-2)\cdots(2n-k+1), which is just a string of integers multiplied together.

So, the derivatives at zero, f^{(k)}(0), are all integers.  More than that, by symmetry, f^{(k)}(\pi), are all integers.  This is because

\begin{array}{ll}f(\pi-x)=f\left(\frac{a}{b}-x\right)\\[2mm]=\frac{\left(\frac{a}{b}-x\right)^n(a-b\left(\frac{a}{b}-x\right))^n}{n!}\\[2mm]=\frac{\left(\frac{a}{b}-x\right)^n(a-\left(a-bx\right))^n}{n!}\\[2mm]=\frac{\left(\frac{a}{b}-x\right)^n(bx)^n}{n!}\\[2mm]=\frac{\left(\frac{1}{b}\right)^n\left(a-bx\right)^n(bx)^n}{n!}\\[2mm]=\frac{\left(a-bx\right)^n x^n}{n!}\end{array}

This is the same function, so the arguments about the derivatives at x=0 being integers also apply to x=π.  Keep in mind that it is still being assumed that \pi=\frac{a}{b}.

Finally, for k>2n, f^{(k)}(x)=0, because f(x) is a 2n-degree polynomial (so 2n or more derivatives leaves 0).

After all that, now construct a new function, g(x) = f(x)-f^{(2)}(x)+f^{(4)}(x)-\cdots+(-1)^nf^{(2n)}(x).  Notice that g(0) and g(π) are sums of integers, so they are also integers.  Using the usual product rule, and the derivative of sines and cosines, it follows that

\begin{array}{ll}\frac{d}{dx}\left[g^\prime(x)sin(x) - g(x)cos(x)\right]\\[2mm]  = g^{(2)}(x)sin(x)+g^\prime(x)cos(x) - g^{\prime}(x)cos(x)+g(x)sin(x)\\[2mm]  =sin(x)\left[g(x)+g^{(2)}(x)\right]\\[2mm]  =sin(x)\left[\left(f(x)-f^{(2)}(x)+f^{(4)}(x)-\cdots+(-1)^nf^{(2n)}(x)\right)+\left(f^{(2)}(x)-f^{(4)}(x)+f^{(6)}(x)-\cdots+(-1)^nf^{(2n+2)}(x)\right)\right]\\[2mm]  =sin(x)\left[f(x)+\left(f^{(2)}(x)-f^{(2)}(x)\right)+\left(f^{(4)}(x)-f^{(4)}(x)\right)+\cdots+(-1)^n\left(f^{(2n)}(x)-f^{(2n)}(x)\right)+(-1)^nf^{(2n+2)}(x)\right]\\[2mm]  =sin(x)\left[f(x)+(-1)^nf^{(2n+2)}(x)\right]\\[2mm]  =sin(x)f(x)  \end{array}

f(x) is positive between 0 and π, since (a-bx)^n>0 when 0<x<\frac{a}{b}=\pi.  Since sin(x)>0 when 0<x<π as well, it follows that 0<\int_0^\pi f(x)sin(x)\,dx.  Finally, using the fundamental theorem of calculus,

\begin{array}{ll}\int_0^\pi f(x)sin(x)\,dx\\[2mm]= \int_0^\pi \frac{d}{dx}\left[g^\prime(x)sin(x) - g(x)cos(x)\right]\,dx\\[2mm]= \left(g^\prime(\pi)sin(\pi) - g(\pi)cos(\pi)\right) - \left(g^\prime(0)sin(0) - g(0)cos(0)\right)\\[2mm]= \left(g^\prime(\pi)(0) - g(\pi)(-1)\right) - \left(g^\prime(0)(0) - g(0)(1)\right)\\[2mm] = g(\pi)+g(0)\end{array}

But this is an integer, and since f(x)sin(x)>0, this integer is at least 1.  Therefore, \int_0^\pi f(x)sin(x)\,dx\ge1.

But check this out: if 0<x<\pi=\frac{a}{b}, then sin(x)f(x)=sin(x)\frac{x^n(a-bx)^n}{n!}\le\frac{x^n(a-bx)^n}{n!}\le\frac{\pi^n(a-bx)^n}{n!}\le\frac{\pi^na^n}{n!}.

Therefore, \int_0^\pi f(x)sin(x)\,dx<\int_0^\pi \frac{\pi^na^n}{n!}\,dx=\frac{\pi^{n+1}a^n}{n!}.  But here’s the thing: we can choose n to be any positive integer we’d like.  Each one creates a slightly different version of f(x), but everything up to this point works the same for each of them.  While the numerator, \pi(\pi a)^n, grows exponentially fast, the denominator, n!, grows much much faster for large values of n.  This is because each time n increases by one, the numerator is multiplied by πa (which is always the same), but the denominator is multiplied by n (which keeps getting bigger).  Therefore, for a large enough value of n we can always force this integral to be smaller and smaller.  In particular, for n large enough, \int_0^\pi f(x)sin(x)\,dx<\frac{\pi^{n+1}a^n}{n!}<1.  Keep in mind that a is assumed to be some definite number, so it can’t “race against n”, which means that this fraction always becomes smaller and smaller.
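If you want to see that race play out, iterate the bound: each step multiplies it by \frac{\pi a}{n}, which is bigger than 1 at first and then smaller and smaller.  A sketch (a = 22 is a stand-in, as if someone had claimed π = 22/7; any fixed a behaves the same way):

```python
from math import pi

a = 22      # numerator of a hypothetical "pi = a/b" claim
bound = pi  # the n = 0 value of pi^(n+1) * a^n / n!
n = 0
while bound >= 1:
    n += 1
    bound *= pi * a / n   # going from n-1 to n multiplies the bound by (pi*a)/n
print(f"pi^(n+1) * a^n / n! first drops below 1 at n = {n}")
# the bound balloons while n < pi*a (about 69 here), then n! takes over
```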

Last step!  We can now say that if π can be written as \pi =\frac{a}{b}, then a function, f(x), can be constructed such that \int_0^\pi f(x)sin(x)\,dx\ge 1 and \int_0^\pi f(x)sin(x)\,dx<1.  But that’s a contradiction.  Therefore, π cannot be written as the ratio of two integers, so it must be irrational, and irrational numbers have non-repeating decimal expansions.

Boom.  That’s a proof.

For those of you still reading, it may occur to you to ask “wait… where did the properties of π get into that at all?”.  The proof required that sine and cosine be derivatives of each other, and that’s only true when using radians.  For example, \frac{d}{dx}sin(x) = \frac{\pi}{180}cos(x) when x is in degrees.  So, the proof requires that \frac{d}{dx}sin(x) = cos(x), and that requires that the angle is given in radians.  Radians are defined geometrically so that the angle is described by the length of the arc it traces out, divided by the radius.  Incidentally, this definition is equivalent to declaring the small angle approximation: sin(x)\approx x.


The radian.

This defines the angle of a full circle as 2π radians and as a result of geometry and the definitions of sine and cosine, sin(π radians) = 0, cos(π radians) = -1, \frac{d}{dx}sin(x) = cos(x), and that’s enough for the proof!

Subtle, but behind all of that algebra is a bedrock of geometry.


Q: Why does carbon dating detect when things were alive? How are the atoms in living things any different from the atoms in dead things?

Physicist: As far as carbon dating is concerned, the difference between living things and dead things is that living things eat and breathe and dead things are busy with other stuff, like sitting perfectly still.  Eating and breathing is how fresh 14C (carbon-14) gets into the body.


If you eat recently-living things, then you’re eating fresh carbon-14.

The vast majority of carbon is 12C (carbon-12) which has 6 protons and 6 neutrons (12=6+6).  14C on the other hand has 6 protons and 8 neutrons (14=6+8).  Chemically speaking, those 6 protons are far more important since they are what makes carbon act like carbon (and not oxygen or some other element).  The extra pair of neutrons do two things: they make 14C heavier (by about 17%), and they make it mildly radioactive.  If you have a 14C atom it has a 50% chance of decaying in the next 5730 years (regardless of how old it presently is).  That 5730 year half-life is what allows science folk to figure out how old things are, but it’s also relatively short.

This raises the question: why is there any 14C left?  There have been about 1,000,000 half-lives since the Earth first formed, which means that there should only be about \frac{1}{2^{1000000}} of the original supply, which even Google considers too small to be worth mentioning.  The answer is that 14C is being continuously produced in the upper atmosphere.

Our atmosphere is brimming over with 14N.  Nitrogen-14 has 7 protons and 7 neutrons, and is about four fifths of the air you’re breathing right now.  In addition to all the other reasons for not hanging out at the edge of space, there’s a bunch of high-energy radiation (mostly from the Sun) flying around.  Some of this radiation sometimes takes the form of free neutrons bouncing around, and when nitrogen-14 absorbs a neutron it sometimes turns into carbon-14 and a spare proton (“spare proton” = “hydrogen”).

This new 14C gets thoroughly mixed into the rest of the atmosphere pretty quickly, and carbon in the atmosphere overwhelmingly appears in the form of carbon dioxide.  It’s here that the brand-new 14C enters the carbon cycle.  Living things use carbon a lot (biochemistry is sometimes called “fun with carbon”) and this carbon enters the food chain through plants, which pull it from the air.  Any living plant you’re likely to come across is mostly made of carbon (and water) it’s absorbed from the air in the last few years, and any living animal you come across is mostly made of plants (and other animals) that it’s eaten in the last few years.

With the notable exception of the undead, when things die they stop eating or otherwise absorbing carbon.  As a result, the body of something that’s been dead for around 5700 years (the 14C half-life) will have about half as much 14C as the body of something that’s alive.  Nothing to do with being alive per se, but a lot to do with eating stuff.
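Turning that around gives the actual dating formula: a sample holding fraction f of the atmospheric 14C level stopped participating in the carbon cycle about 5730\log_2(1/f) years ago.  A minimal sketch (ignoring the calibration wrinkles mentioned below):

```python
from math import log2

HALF_LIFE_YEARS = 5730  # carbon-14 half-life

def age_from_c14(fraction_remaining):
    """Years since death, given the fraction of the fresh 14C level left."""
    return HALF_LIFE_YEARS * log2(1 / fraction_remaining)

for f in (0.5, 0.25, 0.1, 0.01):
    print(f"{f:5.0%} of fresh 14C left: dead ~{age_from_c14(f):7,.0f} years")
# 50% -> 5,730 years; 1% -> ~38,000 years, near the practical limit
```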


Dracula is dead, but still part of the carbon cycle since he eats (or at least drinks).  Therefore, we can expect that carbon dating would read him as “still alive”, since he should have about the same amount of carbon-14 as the people he imbibes.

There are some difficulties with carbon dating.  For example, nuclear tests or unusual solar weather can change the rate of production.  Also, any attempt to measure things that have been dead for more than several half-lives (tens of thousands of years) is subject to a lot of statistical noise.  So you can carbon date woolly mammoths, but definitely not dinosaurs.  Aside from that, carbon dating is a decently accurate way of figuring out how long ago a thing recused itself from the carbon cycle.


Answer Gravy: This was subtle, and would have derailed the flow of the post, but extremely A-type readers may have noticed that adding a neutron to 14N (7 protons, 7 neutrons) leaves 15N (7 protons, 8 neutrons).  But 15N is stable, and will not decay into 14C, or anything else.  So why does the reaction “n+14N → p+14C” happen?  It turns out that nuclear physics is more complicated than you might expect.

The introduced neutron can carry a fair amount of kinetic energy, and this extra energy can sometimes make the nucleus “splash”.  It’s a little like pouring water into a glass.  If you pour the water in slowly, then nothing spills out and the water-in-glass system is stable.  But if you pour the same amount of water into the glass quickly, then some of it is liable to splash out.  Similarly (maybe not that similarly), introducing a fast neutron to a nucleus can have a different result than introducing a slow neutron.

Dealing with complications like this is why physicists love themselves some big computers.


Q: What role does Dark Matter play in the behavior of things inside the solar system?

Physicist: To a stunningly good approximation: zero.

The big difference between dark matter and ordinary matter is that dark matter is “aloof” and doesn’t interact with other stuff.  Instead, it cruises by like “ghost particles”.  Matter on the other hand smacks into itself and clumps together.  The big commonality is that both of them create and are affected by gravity.


If you have a big ball of matter (doesn’t matter what kind), then both ordinary and dark matter will be pulled by its gravity.  However, there’s no reason for the dark matter to ever fall out of orbit since there’s nothing around to stop its motion.  Normal matter tends to “get in its own way”.

In fact, if it weren’t for the gravitational influence of dark matter, we would have no reason to suspect its existence at all.  Because dark matter doesn’t clump it stays really spread out and forms one big, roughly spherical, cloud around the galaxy.  Matter has more of a “big-clump-or-nothing” deal going on.  If you start with a big cloud of ordinary matter, then eventually (it can take a while) you’ll have one or two huge chunks (stars, binary stars, that sort of thing) and the few crumbs that escape tend to end up clumping together themselves (planets, moons, comets, your mom, etc.).  If you feel like impressing people at your next science party, this is called “accretion”.


Any attempt to picture the Sun and nearby stars to scale looks like nothing.  This is an attempt where every square is 10 times the size of the previous square (1000 times the volume).  Point is, when ordinary matter concentrates it really, really, really concentrates.

In the above picture the dark matter is spread out uniformly.  Overall there’s a lot more of it (about 10 times as much, give or take), but here in the solar system the balance is tipped overwhelmingly in favor of ordinary matter.  But more than that, since dark matter is spread evenly (and thinly) all around us, it doesn’t pull in any particular direction.  There’s about the same amount in every direction you point, so there’s very little net pull in any direction.  Until you start considering galactic scales at least.


Ordinary matter clusters in big blobs, so when it pulls it tends to pull in one direction (right).  Dark matter does pull, but it pulls on every particle evenly in every direction, which is a lot like not pulling at all (left).

If you do consider things on a galactic scale (~100,000 lightyears), then there’s more dark matter in the direction of Sagittarius (in December this is overhead around midnight).  Technically, since we’re most of the way to the edge of the galactic disk, and the center of the galaxy is behind the stars in Sagittarius, most of the stuff in the Milky Way is more or less in that direction.  That imbalance makes the Sun and all the other nearby stars (“nearby” = “visible to the eye”) orbit the galaxy, but it also helps Earth and everything else around us do the same.  Astronauts in orbit appear weightless because their ship and their bodies are both orbiting the Earth.  They are both in “free-fall”.  Similarly, the Earth, the Sun, and even everything in our stellar neighborhood are all in free-fall around the galaxy.  So while the preponderance of dark matter in the galaxy does cause the solar system to slowly sweep out a seriously huge circle (the “galactic year” is about 250 million Earth years), it does not cause things in the solar system to move with respect to each other.

Hopefully, dark matter has more tricks than just gravity.  If it has no other way of interacting with stuff, then that makes it really difficult to study.  We can study things like stars, rocks, and puppies because they’re all “strongly interacting”.  Shine light on them?  Sure.  Poke them?  Why not.  But dark matter (whatever it is) is light-proof and poke-proof, and that’s deeply frustrating.
