Q: Is the universe infinitely old?

Physicist: Normally this question is only used to start fights.

Some theories posit that the big bang was the beginning of everything, and that it doesn’t make sense to talk about anything earlier, while others say that it may be impossible for the universe to have a beginning or end.  Both options are terrible.

We can track the progress of the universe all the way back to only moments after the big bang, but no further.  So we’ll never actually be able to see what happened at the instant of the big bang itself.  However, there are some pretty slick tools that allow us to extract results by studying the crap out of how spacetime behaves today, notably the Hawking Singularity Theorem.  We may be able to determine whether the big bang was a singularity (with nothing before), or just a severe “pinching” of the universe (with stuff happening before).

You can picture this as something like trying to figure out whether or not your sheets are pinned together in one place based on how they’re folded and bunched up in another place.  It’s tricky.

Bed sheets: More complicated than spacetime.

However, for all practical purposes the universe is about 14 billion years old (give or take).  If there was anything around before then, it got messed up good by the big bang.

Posted in -- By the Physicist, Astronomy, Physics | 4 Comments

Q: Have aliens ever visited Earth?

Physicist: No.

Space is big.  The distances involved are ridiculous, the energies are ludicrous, the costs are somethingelseous.  The New Horizons probe (due to reach Pluto in 2015) is the fastest vehicle ever created.  At its top speed of 16.26 km/s (36,400 mph) New Horizons would take over 80,000 years to reach Alpha Centauri (the nearest star system).
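That 80,000-year figure is a quick back-of-the-envelope calculation to reproduce.  Here’s a sketch in Python; the 4.37-light-year distance to Alpha Centauri is my number, since the text only calls it the nearest star system:

```python
# Sanity check of the New Horizons travel-time claim: top speed 16.26 km/s,
# distance to Alpha Centauri taken as ~4.37 light years (my assumption).
LIGHT_YEAR_KM = 9.461e12

speed_km_s = 16.26
distance_km = 4.37 * LIGHT_YEAR_KM

seconds = distance_km / speed_km_s
years = seconds / (365.25 * 24 * 3600)
print(f"{years:,.0f} years")  # on the order of 80,000 years
```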

(The following paragraph is wrong.  There’s a redaction here: My bad: Have aliens ever visited Earth?)

A quick calculation shows that the fastest that a ship can reasonably travel, using its own fuel, is about 11% of the speed of light (0.11c).  This calculation assumes that the ship is essentially all hydrogen tanks, that all of this hydrogen is fused into helium, and that all of the resulting energy is converted into thrust.  However, even at this speed, it would take about 40 years to get to Alpha Centauri.  Any realistic method used to get from one star system to another involves millennia of travel time.

(This last paragraph was wrong.)

Adding to that, beyond exploration, there aren’t any good reasons to leave your home solar system.  The resources you’re likely to find are about the same as the resources you’d find in your home system (nearby stars tend to form in regions with similar chemical makeups), while the costs of getting there are amazingly high.

It has been proposed that aliens may have a technology that would allow them to travel faster than light (FTL).  Sadly, there’s a lot wrong with that.  Unlike the sound barrier, which was an engineering problem, the light barrier is written into the universe.  In fact, based on everything we know about time and distance today (I’m talking relativity here), the question of FTL travel doesn’t even make logical sense.  However, if you’re interested in technology based on unknown (contradictory) science, then there are plenty of experts to talk to.  Scientists almost never tell you what you want to hear.

Expert.

On a personal note, if I made the trip to another star system and found intelligent life, I would spend the rest of my life there telling everybody about it and bragging to the native population.  If aliens have been to Earth, they’ve been suspiciously cool about it.


Mathematician: I disagree with the physicist’s answer in one significant way. While it’s true that the distances in our universe are insanely large, and that the speed of travel is capped by the speed of light, the theory of relativity can actually step in to save the day (when it comes to aliens traveling enormous distances).

If aliens were able to get up to speeds approaching that of light (using, say, a Bussard ramjet) then they would experience a twin paradox type situation (essentially, time would slow down on their ship compared to the passage of time on Earth). Hence, while a trip from Alpha Centauri to Earth (traveling at almost the speed of light) would take something like 4.5 years from the point of view of us earthlings, to the aliens such a journey could take very little time (thank you, time dilation and length contraction). The closer they got to the speed of light, the less time would elapse (for them) on their journey (and the more energy such a trip would require, with an infinite amount of energy being necessary to travel exactly at light speed). If they ever did travel at exactly the speed of light (which is almost certainly impossible) the entire trip would take exactly 0 time from the viewpoint of the spaceship.

Due to these relativistic effects, aliens could theoretically travel enormous interstellar distances while experiencing very little aging. The members of their species on their home planet(s), however, would continue aging at the usual pace, so long journeys (of, say, thousands of light years, which is still not much compared to the hundred thousand light year diameter of the Milky Way galaxy) would presumably lead to the death of all of one’s planet-bound acquaintances. Therefore, such a journey would only make sense for an explorer who had no intention of ever returning home, or for large star ships that acted as their own colonies. Perhaps it’s also worth mentioning that when aliens actually arrived at Earth after such a trip, our civilization would have changed dramatically from the time they departed. See here for an article that discusses some of these questions in greater detail.
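The time dilation here is easy to make quantitative. A sketch of the ship-frame (“proper”) travel time for a constant-speed trip, ignoring acceleration and deceleration phases; the function and the 4.37-light-year Alpha Centauri distance are my assumptions:

```python
import math

def proper_time_years(distance_ly, v_over_c):
    """Ship-frame duration of a constant-speed interstellar trip.
    Earth-frame time is d/v; the crew experiences that divided by
    the Lorentz factor gamma = 1/sqrt(1 - (v/c)^2)."""
    earth_years = distance_ly / v_over_c
    gamma = 1.0 / math.sqrt(1.0 - v_over_c ** 2)
    return earth_years / gamma

# The closer to c, the less time elapses for the travelers.
for v in (0.5, 0.9, 0.99, 0.9999):
    print(f"v = {v}c: {proper_time_years(4.37, v):.3f} ship-years")
```

At 0.9999c the Alpha Centauri trip takes the crew only a few weeks of ship time, even though over four years pass on Earth.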

Posted in -- By the Mathematician, -- By the Physicist, Astronomy, Physics, Relativity | 45 Comments

Q: Why is the sky blue?

Physicist: The blue of the sky is sunlight that has been scattered by the air in a process called Rayleigh scattering.  The probability that a photon of frequency \omega is scattered is proportional to \omega^4.  So purple (the highest frequency we can see) gets scattered about 15 times as much as red (the lowest).  The derivation of the “fourth power law” isn’t even a little bit obvious, and explains almost nothing.

So if purple is the most scattered color, then why is the sky blue?  While sunlight is a combination of all colors, it isn’t an even combination of all colors.  Near the top of the visible spectrum the intensity of sunlight drops approximately exponentially with increasing frequency.  These two effects mean that both lower frequencies and very high frequencies won’t be seen in the color of the sky, and in between is a surprisingly sharp “sky blue”.
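The “about 15 times” figure follows directly from the fourth-power law.  A quick check, assuming the visible spectrum runs from roughly 380 nm (violet) to 740 nm (red); those endpoints are my assumption, not the Physicist’s:

```python
# Rayleigh scattering goes as omega^4, and omega is proportional to
# 1/lambda, so the violet/red scattering ratio is (lambda_red/lambda_violet)^4.
lam_red = 740e-9     # m, assumed red end of the visible spectrum
lam_violet = 380e-9  # m, assumed violet end

ratio = (lam_red / lam_violet) ** 4
print(f"violet is scattered roughly {ratio:.1f} times as much as red")
```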

The sky on Mars as seen by Viking 1 in August 1976.

One of the results of the above argument is that the color of the sky is dependent only on the spectrum of the incoming light, not on the composition of the atmosphere, and as such the sky of every planet in our solar system (all those planets with transparent atmospheres at least) is the same blue.  The contrast, however, is dependent on the density of the atmosphere.

Posted in -- By the Physicist, Physics | 8 Comments

Q: Is there a formula for how much water will splash, most importantly how high, and in what direction from the toilet bowl when you *ehem* take a dump in it?

Physicist: If it weren’t for imponderables like this, we’d have finished science years ago.  During an “impact event” water generally moves outward to the sides.  What you really need to worry about is the dreaded “water spike”.

Ejecta, spike, ejecta, and spike.  (The artwork in the upper-left is by Chrstara, Copyrighted ©  http://abstract.desktopnexus.com/wallpaper/27127/. These pictures may not be reproduced, copied, edited, published, or uploaded to any Site(s) including Blogs without his written permission.)

The physics behind water spikes is remarkably complicated and only recently has their formation been accurately described and simulated.  So, like any physicist presented with an insurmountable problem, I’ll make some unreasonable assumptions and cheat (an experimentalist would then drink and make prank calls).

One of the classic cheats is making a list of everything you think your equation should depend on, and then balancing the units.  Based only on the vague hope that water spikes scale (same shape regardless of size), the energy E of a spike that rises to height H should be  E \propto gHM_{spike} \propto gH(\rho H^3) = g \rho H^4, where g is the acceleration due to gravity, \rho is the density of water, and “\propto” means “proportional to”.  The energy of a falling *ehem* object is E = gdM_{"object"}, where d is the drop height.  These energies should be proportional.  Seems reasonable…  So solving for H:

H=c\left(\frac{M_{"object"}}{\rho}d\right)^{\frac{1}{4}}

Here c is some constant that would need to be found experimentally.  The graph of x^{\frac{1}{4}} increases sharply from zero, and then sorta levels off.  So don’t expect to have too much influence on the height of the spike; this already shot-in-the-dark equation isn’t strongly influenced by small changes in the variables once they’re away from zero.
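As a sketch of how weakly H responds to the inputs (the unknown constant c is set to 1 here purely for illustration, and the function name is mine):

```python
def spike_height(mass_kg, drop_m, rho=1000.0, c=1.0):
    """H = c * (M * d / rho)**(1/4).  c is an unknown, experimentally
    determined constant, set to 1 for illustration only."""
    return c * (mass_kg * drop_m / rho) ** 0.25

# Doubling the drop height only raises the spike by a factor of
# 2**0.25, about 19% -- the fourth root flattens everything out.
print(spike_height(0.2, 0.4) / spike_height(0.2, 0.2))
```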

Your best bet is to avoid generating the spike in the first place.  Water spikes are the result of a symmetric air-cavity collapse just below the surface.  If the cavity isn’t symmetric, you shouldn’t get a spike.  So as you make your Deposit, make sure to wave your butt around.  Please let us know how it works out.

Here’s another example of the frequently gross marriage between super-computers and fluid dynamics.

Posted in -- By the Physicist, Paranoia, Physics | 4 Comments

Q: What is the meaning of the term “random”? Can thinking affect the future?

The complete question was:

1 – When probability is invoked it commonly implies or states that something “will probably” or “is likely” to happen. Doesn’t this suggest that we can predict the future, or affect the future just by thinking about it?

2 – What is the meaning of the term “random” as it is used in mathematics/statistics? Does randomness actually exist?


Physicist: It does suggest that we can predict the future.  In fact, some people formulate the definition (and purpose) of science as an attempt to predict the future.

Thought alone has absolutely no impact on probability or possibility.
The predictive power of probability is fairly useless when applied to individual events.  That is to say, if someone tells you that the next time you roll a die you’ll get a 5, then they’re full of it.  How would they know that?  However, if they say that the next several times you roll a die, about 1/6 of the time you’ll get a 5, then they’ll be correct.
In that last sentence the word “about” is doing all the work: pretty much all of the mathematical theory behind probability is wrapped up in figuring out the details of that “about” (probability distributions, variance, etc.).
Probabilities are found by experiment.  So when you say that “each side of a die will be rolled with a probability of 1/6”, what you’re really saying is “looking at all the dice in the past, each side came up 1/6th of the time”.
“Randomness”, at least until Bell screwed everything up, is nothing more than a reflection of one’s lack of knowledge.  In practice I think it would be fair to define something as random if no one can think of a predictive model that does any better than just picking the most likely outcome.  For example, if you’re looking at coin flips and one person picks “heads” every time, and another gets a super computer, some psychics, and Mensa™ to pick sides, then they’ll both get the correct answer 50% of the time.
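That “about 1/6 of the time” claim is easy to check numerically.  A quick sketch, assuming nothing beyond a fair die:

```python
import random

# Roll a fair die many times and count how often 5 comes up.
rng = random.Random(0)
rolls = [rng.randint(1, 6) for _ in range(60_000)]
freq = rolls.count(5) / len(rolls)
print(freq)  # close to 1/6 ≈ 0.1667, but only "about" -- never exactly
```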
The interplay between perception and probability is subtle, especially where quantum mechanics is concerned.  It takes a surprising amount of study and contemplation to see where the weirdness of quantum mech enters the scene, so I’ll restrict this discussion to strictly classical (non-quantum) effects.  If you love quantum effects so much you want to marry them, then take a look at “Q: Do physicists really believe in true randomness?” and “Q: What is the connection between quantum physics and consciousness?“.  This has been a popular line of questions.
Why thought and perception don’t do a damn thing, but seem to:  Say you lose your dog, Kolmogorov, in Madrid.  By the end of a week the dog could be almost anywhere in Spain.

The probability distribution of the dog before and after an observation.

Then someone spots Kolmogorov in Avila, and calls to tell you the next day.  Suddenly (by dog-spotting alone mind you) the probability distribution of Kolmogorov’s location changes dramatically.

This new probability distribution is called a “conditional distribution”.  You can tell when someone is using a conditional distribution because they’ll say things like “the probability, given that …”.  It may seem as though perceiving the dog has changed something in the universe, but keep in mind what comes first.  The dog’s location dictates where you’ll see it, not the other way around.

Another classic is the phenomenon of “hot” and “cold” tables.  You’ll find that sometimes a craps, blackjack, etc. table will suddenly be especially lucky or unlucky for a while.  It’s extremely common for people to gravitate toward tables during a winning streak, or, if things have been going well for a while, to get nervous and leave.  Both are examples of the belief that perception affects reality, or at the very least, that random things aren’t actually random.

Examples of random walks, specifically the “Wiener Process”.  Sometimes it looks like there’s a pattern, but there’s not.  And yes, that’s really his name: “Norbert Wiener“.

Here’s a thought experiment (or if you have the time, actual experiment): Have a friend flip a coin.  Try to guess what it is.  How often do you expect to get it right?  Do the same thing, but this time have your friend look at the coin first before you guess.  Do your guesses get any better or worse?  Does someone “out there” with information have any influence on probability?  Hells nope.

At the end of the day, every single experiment that you can do that involves chance, and somebody having some knowledge about results, will still act exactly the same as an experiment where nobody knows anything.
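The coin-flip experiment above is easy to simulate: flip coins, have a guesser stick with one side, and note that whether or not anyone has peeked at the coin is irrelevant, since the guess is independent of the outcome either way.  A sketch:

```python
import random

rng = random.Random(1)
n = 100_000
flips = [rng.randrange(2) for _ in range(n)]  # 0 = heads, 1 = tails

# The guesser always says "heads".  Whether a friend has already peeked
# at the coin changes nothing: the guess doesn't depend on the outcome.
accuracy = sum(f == 0 for f in flips) / n
print(accuracy)  # hovers around 0.5
```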

For no reason, here’s a quote about Wiener: “Gifted for abstract sciences, philosophy and literature, he also had an inclination to the fine arts. These tendencies were certainly enhanced by a meditative temperament partly due to his ungainliness and myopia, which disqualified him for the usual games of physical skill popular among youngsters…

Posted in -- By the Physicist, Philosophical, Probability | Leave a comment

Q: Is it possible to choose an item from an infinite set of items such that each one has an equal chance of being selected?

The complete question was: The other day I was trying to explain the difference between “impossible” and “with probability zero” to a friend. I remarked “if you pick an integer, and I pick an integer, the mathematical probability that we picked the same number is zero, but it’s certainly not impossible.” A seemingly harmless statement, but only later did I think, what does it even mean to “pick ANY integer with EQUAL probability?” Is there any meaning to a random variable that can take on an infinite number of values with equal probability? It would be like a uniform distribution that has one or both bounds stretched to infinity. Does this kind of object have a name? What kind of mathematical properties would this even have? Oh dear, this is blowing my mind just thinking about it. Please help!


Mathematician: Developing a consistent theory of probability for sets with an infinite number of elements (like the set of all integers, the set of all real numbers, or the set of real numbers between 0 and 1) requires dealing with a handful of subtle and tricky problems. In fact, to determine whether it is possible to choose one object from an infinite number of objects such that each object has the same probability of being chosen, we must delve into what we really mean by “possible”. Various interpretations of this question are:


1. Can we work with probabilities (as they are defined in a formal, mathematical way) on infinite sets of items, and if so, can we assign equal probabilities to each item?

2. Can a (formal, mathematical) procedure be devised to sample uniformly at random from an infinite set?

3. Can an algorithm be designed that can carry out this sampling procedure on a real computer such that the algorithm will terminate in a finite amount of time?

Each of these questions, which I’ll address one by one, raises interesting considerations.

Can a formal theory of probability be developed for infinite sets of items? Absolutely. For example, the Gaussian distribution (often known as the bell curve or normal distribution) is defined on the set of real numbers (so when you draw a sample according to this distribution, you get some real number). Strictly speaking, it assigns a probability of zero to each of the individual real numbers, but assigns a non-zero probability to subsets of real numbers. For example, if we sample from a Gaussian distribution, there is a formula that can tell us how likely the number is to be less than any particular value X (so the set of all numbers less than X is assigned a positive probability). Why does the Gaussian distribution not assign non-zero probabilities to the actual numbers themselves? The reason (loosely speaking) is that the probability of getting ANY number when we sample from a distribution must be 1 (which just means that some number must always occur). On the other hand, the set of real numbers contains so many numbers that, if each of them had a non-zero probability of occurring, it would not be possible for the total probability (which is, essentially, just the sum or integral of all of the numbers’ probabilities) to be 1.

Another, more intuitive way to think about this is to consider a dart board. If we throw a dart at the board and are willing to assume that matter is infinitely divisible (sorry, Physicist) then we will always hit some point on the board (assuming our aim isn’t too terrible). But, at the same time, the chance of hitting any particular point is negligibly small (zero in fact) since there are so many possible points. So clearly, to describe the probability of hitting this board’s points, it is not sufficient to consider only the probability of each individual point being hit; rather, we have to consider how likely we are to hit various regions of the board, such as the region constituting the bullseye.
Even though each particular point has a zero probability of being hit, some point is always hit in practice, and the set of points that make up the bullseye together have a positive probability of being hit with each throw.

Okay, so we can define probabilities on an infinite set, though as the Gaussian distribution case shows, we may have to let the probabilities be assigned to subsets of our original set, rather than to every object in the set itself. But can we do this in such a way that every object in our set is sampled with equal likelihood? The answer is again yes, though with some caveats. For instance, if we limit ourselves to the real numbers that are between 0 and 1, we can assign a uniform distribution to these numbers which will give them each an equal probability. The uniform distribution basically says that the probability that a given sampled number will be between a and b (for 0<a<b<1) is equal to b-a. This fact implies that all numbers are equally likely to be sampled from this distribution.
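That b-a property is easy to check empirically; Python’s random.random() approximates exactly this uniform distribution on [0, 1):

```python
import random

rng = random.Random(2)
a, b = 0.25, 0.60

# Draw many uniform samples and count how many land in (a, b).
samples = [rng.random() for _ in range(200_000)]
frac = sum(a < x < b for x in samples) / len(samples)
print(frac)  # close to b - a = 0.35
```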

Fine, but can we define a probability distribution on the set of integers (rather than the real numbers between 0 and 1) such that they each occur with equal probability (i.e. does a uniform distribution on the integers exist)? The answer, sadly, is no. A probability mass function (which is the kind of probability distribution we need in this case) is defined to be a positive function whose values sum to 1. But any positive function that assigns an equal value to each integer must have probabilities that sum to either infinity or zero, so the desired distribution is impossible to construct. As a technical side note though, people sometimes try to get around this issue in Bayesian analysis by applying what are known as improper priors. Attempting to define a uniform distribution on the full set of real numbers also fails, for much the same reason that it doesn’t work on the integers (it cannot be the case that each real number (or equally sized interval of real numbers) has the same probability and that the probability density function integrates to 1).

On to our second question: is it possible to come up with a formal mathematical procedure for sampling from infinite sets? The answer is yes, if we have an unlimited amount of time to spare. For real numbers between 0 and 1, we can use the following sampling procedure:

i.  Start with the number 0.0, and set n=1

ii. Set the nth digit of our number after the decimal point to a random number from 0 to 9.

iii. Increase n by 1 and return to step ii.

If this procedure were iterated forever, it would produce a single random number between 0 and 1, and all real numbers between 0 and 1 would be equally likely to be generated. Essentially we are just constructing a number by choosing each of its decimal digits randomly.
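A truncated version of steps i–iii is easy to run in practice: stop after k digits instead of going on forever (the function name and the cutoff are mine):

```python
import random

def sample_unit_interval(k, rng):
    """Run the digit procedure for k steps instead of forever:
    build "0.d1d2...dk" with each digit chosen uniformly from 0-9."""
    digits = "".join(str(rng.randrange(10)) for _ in range(k))
    return float("0." + digits)

rng = random.Random(3)
x = sample_unit_interval(40, rng)
print(x)  # some number in [0, 1)
```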

But what if we wanted to carry out this procedure for the set of all integers? Here things get stranger. The natural choice is the following procedure:

i.  Start with the number 0.0, and set n=1

ii. Set the nth digit of our number BEFORE the decimal point to a random number from 0 to 9.

iii. Increase n by 1 and return to step ii.

If this procedure were carried out forever, it would seem as though it would produce an integer, and that this integer would have an equal probability of being any integer. This is true, in the sense that all integers that the algorithm produces have an equal likelihood of being produced (i.e. when it DOES produce integers those integers are each equally likely). But the algorithm does not actually do what we would like. We begin to see the problem when we pick any integer X, and ask the question, “what is the probability that this procedure would produce a number that is greater than X?”. The answer is that there is a probability of 1 (i.e. a 100% chance), NO MATTER WHAT X IS. This makes sense, given that the integers stretch off to infinity, and that the number of integers close to 0 will always be dwarfed by the number of them far from it. But how can this procedure (when run forever) produce one, single integer, while at the same time having a probability of 1 of producing a number bigger than any particular number we choose? Well, each integer can be thought of as having an infinite sequence of zero digits to the left of its first non-zero digit. This algorithm will have a probability of 0 of producing a number with an infinite successive sequence of zeros, and therefore will have a zero probability of producing an integer! In other words, there is a probability of 1 that each number it produces will be (in a certain sense) “infinite”, so it does not serve the purpose that we hoped.
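You can watch this failure developing numerically. Truncate the procedure at n digits (so it draws uniformly from 0 to 10^n - 1) and ask how often the result exceeds a fixed X: the fraction climbs toward 1 as n grows, for any X. A sketch (the cutoffs and the choice of X are mine):

```python
import random

def random_n_digit_value(n, rng):
    # n random digits before the decimal point, leading zeros allowed:
    # a uniform draw from 0 .. 10**n - 1.
    return int("".join(str(rng.randrange(10)) for _ in range(n)))

rng = random.Random(4)
X = 10 ** 6
trials = 20_000

fracs = {}
for n in (7, 10):
    fracs[n] = sum(random_n_digit_value(n, rng) > X for _ in range(trials)) / trials
    print(n, fracs[n])  # the fraction above X approaches 1 as n grows
```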

This brings us to our final question, regarding whether a terminating algorithm (that can be run on a real computer) can be created that will sample uniformly at random from an infinite set of numbers. The answer to this question is no, but with the footnote that this does not matter too much in practice (for reasons to be discussed). One way to understand why this is impossible is to consider how many bits it would take to transmit the number produced by such a sampling procedure. We would have to transmit some kind of code (agreed upon in advance) that represents the number we got from sampling, but since there are an infinite number of possible outcomes and since we have no knowledge (in advance) of what number will occur, we would need to have an infinite number of codes to represent all the outcomes. Hence, if we think of the codes as numbers, and choose some number N, then some of the codes must contain more than N digits (since N digits is only enough to describe a finite number of different codes). But since this holds for all N no matter how large, this means that, on average, it would take literally forever to transmit one of these codes. But, if the number cannot be transmitted, no computer could ever make a copy of it, which implies that no computer could ever generate such a number. What this confusing, convoluted argument is getting at is the fact that a uniform distribution on an infinite set of items (if it existed) would have an infinite entropy, so the numbers sampled from such a procedure could never be transmitted or stored (as doing so, on average, would require an infinite number of bits) so there is no way that such an algorithm could be used in real life. One way to see that the entropy of such a distribution would be infinite is to note that if we define a uniform distribution on n items, then as we let n go to infinity the entropy of the distribution approaches infinity.

Despite these problems, some infinite sets have a nice property that the procedure of sampling (with equal probability) from them can be nicely approximated on a computer. For example, if we want to sample a real number between 0 and 1, we can approximate this procedure by limiting ourselves only to numbers with at most 40 digits after the decimal point, and then sampling uniformly at random from this restricted set. While this procedure is not perfect, it will produce numbers that (for most purposes) look like those we would get if we truly sampled from all real numbers between 0 and 1. On the other hand, sampling uniformly at random from the set of all integers cannot be approximated in any nice way (and hence, is in some sense an inaccessible procedure). The problem here is that, as noted, if you fix any number X that you like, 100% of integers are greater than that X, no matter what X is. Since real computers are limited in the size of the numbers they can store, any attempt to approximate the procedure of sampling from all integers will be limited to sampling from integers less than some number X, despite the fact that 100% of integers truly are above X. If we try to sample uniformly at random from the set of all integers (or the set of real numbers, for that matter) we are doomed to complete failure.


Posted in -- By the Mathematician, Math, Probability | 18 Comments