Q: How hard would it be to make a list of products of primes that could beat public key encryption?

The complete question was: I’m assuming almost anyone with sufficient computing power could generate big prime numbers (if these are not already published somewhere). Would making a table of all of the products of these prime numbers be so difficult? Published public keys could then be compared to this list and the prime factors would be known immediately with a simple database look-up. Wouldn’t this make public key cryptography useless?


Mathematician: Let’s say we are working with all prime numbers that are about d digits long or shorter. The largest such number is x = 10^d - 1. The number of prime numbers less than this (applying the prime number theorem) is about \frac{x}{\ln{(x)}}, which is approximately 0.43\frac{10^d}{d}. Making a table of all pairwise products of these primes would mean storing about \left(\frac{x}{\ln{(x)}}\right)^2 numbers, which is about

0.18\frac{10^{2d}}{d^2}

numbers. The average size of one of these primes is about

0.5 x 10^d

so a typical product of two of them will be around

0.25 x 10^{2d}

which takes about

6.64 d

bits to store. The total number of bits needed will be about the number of bits per number times the number of numbers. The number of numbers, though, is about 0.18\frac{10^{2d}}{d^2}, giving a total number of bits of about

(6.64 d)(0.18 \frac{10^{2d}}{d^2}) = 1.2 \frac{10^{2d}}{d}

converting this to gigabytes (8 x 10^9 bits each), we have about

0.15 \frac{10^{(2d-9)}}{d} gigabytes.

If we set d=100 (corresponding to storing all products of prime numbers with no more than 100 digits each) that requires about

1.5 x 10^{188} gigabytes

which is a ridiculously huge number… far, far bigger than the entire size of the internet. For comparison, the Earth only has about 10^{50} atoms.

If we set d=20 (corresponding to storing all products of prime numbers with no more than 20 digits each) that requires about

7.5 x 10^{28} gigabytes

which is still much bigger than the size of the internet!

Basically, there are far too many large prime numbers (let alone products of pairs of them) to ever store them all, even if we restrict ourselves to primes of at most 20 digits.
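To make numbers like these easy to play with, here’s a minimal Python sketch of the same back-of-envelope estimate (the function name and the exact rounding are ours):

```python
import math

def table_size_in_gigabytes(d):
    """Rough storage needed for a table of all pairwise products of primes
    that are at most d digits long."""
    x = 10**d - 1
    number_of_primes = x / math.log(x)          # prime number theorem: ~0.43 * 10^d / d
    number_of_products = number_of_primes**2    # one table entry per pair of primes
    bits_per_product = 2 * d * math.log2(10)    # a ~2d-digit number takes ~6.64*d bits
    total_bits = number_of_products * bits_per_product
    return total_bits / (8 * 10**9)             # 8 billion bits per gigabyte

for d in (20, 100):
    print(d, f"{table_size_in_gigabytes(d):.1e} gigabytes")
```

(The output matches the rounded figures above to within the sloppiness of the estimate.)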


Physicist: This isn’t an answer, just a clarification of what the question was about (for those of you who aren’t hip to the Comp Sci scene), followed by something I think is worth knowing.

Public key encryption is based on a “trap door algorithm”. Basically, you use a mathematical process that is easy to do in one direction and nigh-impossible to do in the other direction, unless you know the “secret”. RSA encryption uses multiplication as its mathematical process. (There’s a bit more to it, and if anyone wants to know more let us know.)

Multiplication is easy: 170,363 x 83,701,993 = ?

Factoring is hard (secret = knowing the factors): ? x ? = 14,259,722,633,459

Cryptographers like prime numbers because by using only two big primes you make sure that your factors are large and hard to find. If you have b factors, then, for a given product N, the average size of each factor is about N^{\frac{1}{b}} (smaller and easier to find for larger b).

If you’re ever bored and want to find a big prime you can use Fermat’s Little Theorem: if x^{P-1} mod \, P = 1 (for 1 < x < P), then P is almost certainly prime. “mod P” just means “when the numbers involved get bigger than P, just subtract P a few times until you’re below P again”. You can do this with pencil and paper, even for huge numbers.

So for example: P = 7, and x = 5 (pick x at random). 5^6 mod 7 = (5^2)^3 mod 7 = (25)^3 mod 7 = (21+4)^3 mod 7 = (4)^3 mod 7 = 64 mod 7 = (63+1) mod 7 = 1. 7 is almost definitely prime!

To be more sure you can check a couple different x’s. False positives are very rare using this test.

Another example: P = 12, x = 3. 3^11 mod 12 = 3^{8+3} mod 12 = ((3^2)^2)^2 x 3^3 mod 12 = (9^2)^2 x 27 mod 12 = (81)^2 x (24+3) mod 12 = (72+9)^2 x 3 mod 12 = (9)^2 x 3 mod 12 = 81 x 3 mod 12 = (72+9) x 3 mod 12 = 9 x 3 mod 12 = 27 mod 12 = 3 ≠ 1. 12 is not prime!

Fermat’s Little Theorem is a nearly 100% accurate test, but if you’re some kind of mathematician or something you would want to use the algorithm found in “PRIMES is in P” (by Agrawal, Kayal, and Saxena). This doesn’t have much to do with the original question, but I think it’s worth knowing.
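If you’d rather let a computer do the bookkeeping, here’s a minimal sketch of the test in Python (the function name is ours; the built-in pow(x, P-1, P) does the “mod P” arithmetic efficiently):

```python
import random

def fermat_is_probably_prime(P, trials=5):
    """Fermat test: if x^(P-1) mod P isn't 1 for some 1 < x < P, then P is
    definitely composite; if it's 1 for every x we try, P is very probably prime."""
    if P < 4:
        return P in (2, 3)
    for _ in range(trials):
        x = random.randint(2, P - 2)
        if pow(x, P - 1, P) != 1:      # modular exponentiation: the x^(P-1) mod P above
            return False
    return True

print(fermat_is_probably_prime(7))     # True:  7 passes for every x
print(fermat_is_probably_prime(12))    # False: 12 gets caught right away
```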


Q: What are complex numbers used for?

Physicist: If you’ve ever had to do square roots you’ve probably come up against the problem of taking the square root of a negative number.  If you restrict your attention only to real numbers (0, 1, -17, \pi, \sqrt{2}, …, any number you can think of), then there’s no way to take the square root of a negative.  But this makes taking square roots, cube roots, and so on, a pain.

You have to remember: 1) The square root, fourth root, sixth root, … of a positive number has two answers (positive and negative).  2) The square root, fourth root, sixth root, … of a negative number has no answers.  3) The cube root, fifth root, seventh root, … of any number has one answer (with the same sign as the original number).  These are random, frustrating, difficult-to-remember rules.  Mathematicians had to deal with them all the time before the 1700s.

Then Euler happened.  He was looking at a similar problem: finding the roots of polynomials.  Similar, because the question “x=\sqrt{-1}, what is x?” is exactly the same as “x^2+1=0, what is x?”.  He was annoyed that most polynomials have roots that don’t exist, so he said: “Sure…  But, what if the root did exist.  Like… in an imaginary sense…”.

That’s paraphrasing, but it’s pretty accurate.  He just declared that there’s a new number called “i” with the property that “i^2 = -1”.

So, to actually answer: Complex numbers make it easy to take roots, and using complex numbers, all polynomials with terms up to x^N have N roots (counting repeated roots).

Using only real numbers, the cube root of 1 is 1, and only 1. Using complex numbers you can see that the other two roots exist, they just happen to be off of the real line. In this picture the arrow to the right is "1", and the real line is the horizontal line. The other two arrows are the other two roots. The roots of a number are always equally spaced like this.

These two properties help complex numbers “complete” the real numbers.  That is to say, you don’t need to create “super complex numbers” to fix any problems with complex numbers.  One of the first “problems” that people ask about is: “Sure, \sqrt{-1} = \pm i, but what’s \sqrt{i}?”  Well, \sqrt{i} = \pm \left( \frac{1}{\sqrt{2}} + \frac{i}{\sqrt{2}} \right).  No problems!
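If you want to poke at these claims yourself, Python’s built-in cmath module will happily do the complex arithmetic (a minimal sketch):

```python
import cmath

# The square root of i is still an ordinary complex number; nothing new is needed.
print(cmath.sqrt(1j))                    # (0.7071...+0.7071...j) = 1/sqrt(2) + i/sqrt(2)

# The three cube roots of 1 sit equally spaced around the unit circle.
for k in range(3):
    root = cmath.exp(2j * cmath.pi * k / 3)
    print(root, root**3)                 # cubing each root gives back (essentially) 1
```

Those three roots are the three equally spaced arrows described in the caption above.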

Also, if you’ve ever had to do anything with trigonometry you’ve probably come up against:

\cos{(A+B)} = \cos{(A)} \cos{(B)} - \sin{(A)}\sin{(B)}

\sin{(A+B)}=\sin{(A)}\cos{(B)}+\sin{(B)}\cos{(A)}

Which looks horrible, is horrible, and is horrible to deal with.  Here’s the same thing (both equations) using complex numbers:

e^{i(A+B)} = e^{iA}e^{iB}, which is just a direct application of Euler’s equation: e^{i \theta} = \cos{\theta} + i \sin{\theta}.  Almost any time that you have to do lots of summations or multiplications involving trig functions, it’s best to bust out some complex numbers.
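To see where those identities come from, just multiply out the right side using Euler’s equation:

e^{iA}e^{iB} = (\cos{A} + i\sin{A})(\cos{B} + i\sin{B}) = (\cos{A}\cos{B} - \sin{A}\sin{B}) + i(\sin{A}\cos{B} + \cos{A}\sin{B})

Since this must equal e^{i(A+B)} = \cos{(A+B)} + i\sin{(A+B)}, matching the real parts gives the cosine identity and matching the imaginary parts gives the sine identity, both in one shot.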

In the same vein, electrical engineers use “phasors” (phasor, not phaser) to talk about sinusoidal current (like what comes out of the wall).  Again, complex numbers = easy!

If you’ve gotten stuck behind a nasty integral in calculus (and if you haven’t, ignore this), you’ll find that many of them are surprisingly hard using real numbers, but baby simple using complex numbers.  For example: \int_0^\infty \frac{\sin{x}}{x} \, dx, \int_{-\infty}^\infty \frac{1}{1+x^2} \, dx, \int_0^\pi \frac{1}{2 \cos{x} + 1} \, dx, \int_{-\infty}^\infty \frac{p(x)}{q(x)} \, dx (where p and q are polynomials).
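For example, the second integral in that list falls in one line to the residue theorem (a standard complex analysis tool): \frac{1}{1+x^2} = \frac{1}{(x-i)(x+i)} has a single pole in the upper half plane, at x = i, with residue \frac{1}{2i}.  Closing the integration path with a huge semicircle in the upper half plane (whose contribution vanishes) gives

\int_{-\infty}^\infty \frac{1}{1+x^2} \, dx = 2\pi i \cdot \frac{1}{2i} = \pi

and the same trick handles the whole \int_{-\infty}^\infty \frac{p(x)}{q(x)} \, dx family (whenever the integral converges).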

Some fields simply can’t be approached without complex numbers, particularly quantum mechanics.  In QM the probability of something happening is the square of the magnitude (absolute value) of the “probability amplitude”, which is complex-valued.  So, if the probability amplitude is \frac{i}{\sqrt{2}}, then the probability is \left| \frac{i}{\sqrt{2}} \right|^2 = \left( \frac{1}{\sqrt{2}} \right)^2 = \frac{1}{2}.  There’s really no way around this.  In fact, the Schrödinger equation, which is arguably the backbone of QM, has an “i” staring you right in the face.  Here’s the equation for a single particle

i\hbar\frac{\partial}{\partial t} \Psi(\mathbf{r},\,t) = -\frac{\hbar^2}{2m}\nabla^2\Psi(\mathbf{r},\,t) + V(\mathbf{r})\Psi(\mathbf{r},\,t)

where i is in the first term, and “\Psi” is the probability amplitude of the particle’s location.  That’s a bit much, but the point is: you can’t get rid of the complex numbers and do QM with just real numbers.
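If you want to check that amplitude arithmetic yourself, Python’s built-in complex numbers do it directly (a trivial sketch):

```python
# Probability = squared magnitude of the complex probability amplitude.
amplitude = 1j / 2**0.5       # the amplitude i/sqrt(2) from the example above
print(abs(amplitude)**2)      # 0.4999999999999999, i.e. 1/2 up to rounding
```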

The moral of the story is that “complex numbers” are misnamed.  While they are intimidating at first, they make things simple all over the place.  Notably: trigonometry, anything with waves (electricity, light, sound), finding roots, streamlining math (often, but not always), and in quantum mechanics.


Q: Can one truly create something from nothing? If matter formed from energy (as in the Big Bang expansion), where did the energy come from?

Physicist: That right there is one of the great unsolved questions.  Every experiment that’s ever been done (on this subject) verifies the conservation of mass and energy.  While the amount of mass or the amount of energy may change (they can be interchanged), the sum of the two is absolutely invariant.

This naturally leads to the question above.  There are plenty of theories bouncing around, but without a couple more big bangs to do tests on, it’s unlikely we’ll ever know for sure.  As we learn, many of the theories will be ruled out, but we’ll probably never be for sure sure.  Here are some examples that almost certainly won’t pan out:

Spectacular Uncertainty: Energy and time cannot both be exactly known.  Over any time scale there is always a little energy error, but the larger the time scale the smaller the energy error.  Generally, this takes the form of one or two extra particles that last effectively no time.  For example; gluons (the carrier of the nuclear strong force) usually don’t even exist long enough to cross an atomic nucleus at the speed of light.  But maybe, just maybe, the unimaginable amount of matter in the universe is some kind of amazing quantum clerical error.  You could tie in the anthropic principle (if there weren’t lots of matter there would be no one around to see it, so since we can see it…) if you want, but still.  That shouldn’t explain there being any more matter than the absolute minimum amount needed to have observers.

It is what it is: Maybe mass/energy conservation only works for T>0, but doesn’t mean bupkis for T=0.  In other words, there’s an unexplainable asterisk on the law.

God/Goddess/Gods/Higher Power: Sure.

All this has happened before, and all this will happen again: Maybe the big bang wasn’t the beginning, but in fact the universe just goes through expansion, collapse, and re-expansion cycles.  This has the advantage of explaining the big bang, and also eliminates the question about conservation of mass/energy, but it does leave lots of other questions.  Maybe worse questions.  Also, the universe gives every sign of wanting to expand forever, making it less likely that a previous iteration would have collapsed.

Bubbles: It could be that our universe “bubbled” off of an even larger universe, one that had plenty of mass and energy to spare.  This theory doesn’t answer the question, but it does push it back far enough that it’s hopeless to try and answer rigorously.

Keep in mind that none of these theories are technically scientific.  Scientific knowledge is nothing more than what we can learn from observation, inference, and experiment.  Beyond inference, we’ve got very little to work with as far as the big bang goes.


Q: Why does wind make you colder, but re-entry makes you hotter?

The original question was:

Why is it that, when you are outdoors and the atmosphere is moving past you at a moderate rate (wind), you get colder; but when the atmosphere is moving past a space shuttle during re-entry, it gets hotter?

Things falling from space get crazy hot. Unless they're designed specifically to survive the onslaught of air, they tend to aerosolize. That doesn't seem to be the case on the ground.

Physicist: The air right on a surface stays effectively fixed to that surface, an effect that the fluid dynamicists call “the no-slip boundary condition”.  You can imagine this like a deck of cards being spread out on a table, the bottom card barely moves, the card above that moves a little more and so on.  By the time you get to the top card there’s plenty of movement, but the first couple of cards would seem to be effectively “stuck”.

The top card moves a lot, the bottom card barely moves, and the distance each card moves with respect to its neighbors is about the same.

We experience this as a “bubble” of air around our bodies.  The air right next to our skin is almost stationary, while the air a foot or two away barely notices us at all.  The greater the air speed, the more the air near our bodies shears, and the thinner the layer of roughly stationary air.  That boundary layer (since it’s near us) is warmer than the surrounding air, and acts as insulation.  So more wind means less insulation, which makes you colder.

Unless you live somewhere terrible like Florida or Morocco.  Then the wind just blows away the air that was keeping you cool.  Terrible places to live.  If you want to know what Florida is like, get a dozen humidifiers and hairdryers, and lock yourself in a tiny, air-tight room.

When you wave a piece of cardboard, or something, through the air the resistance you feel is (mostly) a build up of pressure on one side, and a drop of pressure on the back.  Increasing the pressure on a region of air compacts it and causes it to heat up.  When you walk down the street your front should actually be a little warmer than your back, but if you really think you can feel it, then you’re probably walking with the Sun in your face.

The space shuttle does the exact same thing.  It’s just better at compressing the air in front of it, since it’s moving at about Mach 23.  The rush of air moving past the shuttle does cool it off, but the effect is completely overwhelmed by the effect of the compression.  The effect is most pronounced above the speed of sound (Mach 1).  Below Mach 1 air will just get out of the way rather than compress much, but above Mach 1 you can basically “sneak up” on the air.  It doesn’t know you’re coming until it gets hit.
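To put a rough number on the compression heating (an idealized, ideal-gas estimate, not the shuttle’s actual skin temperature), the temperature of air brought to rest against something moving at Mach M is about

T_{stagnation} \approx T_{air}\left(1 + \frac{\gamma - 1}{2}M^2\right)

where \gamma \approx 1.4 for air.  At M = 23 that factor is roughly 100, so even the very cold air of the upper atmosphere gets heated to thousands of degrees when it piles up in front of the shuttle.  (Real re-entry air dissociates and radiates, which keeps actual temperatures well below this estimate, but compression heating still completely swamps any cooling from the moving air.)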


Q: Are explosions more or less powerful in space?

Physicist: Brace yourself for a mess of guesses:

I’ve heard (anecdotally, and on Mythbusters) that explosions in water are more dangerous than explosions in the air.  I don’t know if that has more to do with the specific properties of the medium, or merely with the presence of the medium itself, but (with a reasonable guess) it seems that having material around should help transmit the energy.

For an explosion in space, the only thing that carries the energy to you is light (which you would have gotten anyway) and material from the explosive itself.  So the explosion itself should do less damage, while the shrapnel should actually be more dangerous (in air it would get slowed down, and even somewhat cushioned on impact).

Also, in a medium energy tends to travel at specific speeds (the speed of sound, generally), which means that most of the energy will hit you all at once (shock waves can travel faster than the speed of sound, but they burn up a lot of energy doing it).  An explosion in space will cause material to fly out with a broad distribution of velocities, so you’ll experience the explosion over a more drawn-out time.

That’s all guess work.  If anyone knows anything for certain, please post a comment.

Interesting aside: The famous EMP (electro-magnetic pulse) that follows soon after an atmospheric nuclear detonation would be absent in space (deep space anyway).  The bulk of the pulse is actually generated by the explosion moving the ionosphere (it moves… the ionosphere).

Deep-seated pet peeve: The kick-ass billowing explosions of Vin Diesel movies will also be missing from explosions in space.  Instead, the explosion would appear as a “star burst” of material all flying outward in straight lines.

A billowing explosion and a star burst explosion.


Q: What is infinity? (A brief introduction to infinite sets, infinite limits, and infinite numbers)

Mathematician: To mathematicians, infinity is not a single entity, but rather a label given to a variety of related mathematical objects. What unites these “infinities” is that they are all, in some sense, larger than anything that can be obtained or enumerated in the real physical world. Below, I will discuss a few of the infinities that crop up most frequently. One common feature they share is that our intuition about how things should behave often breaks down in these cases, and the math requires some subtlety. Hang onto your hat.

1. Infinite Sets

Whereas sets like {1,2,3} and {dog, cat, apple, bat} are clearly finite in size (with sizes 3 and 4 respectively), it is natural to say that the set of all integers and the set of real numbers have infinite size. What is particularly interesting though is that while both are infinite sets, the integers have an infinite size that’s smaller (in a precise sense) than that of the real numbers.

To observe this, we begin by noting that we can relabel the elements of {dog, cat, apple, bat} to be {1,2,3,4} by assigning dog=1, cat=2, apple=3, bat=4, which does not alter the size of our set (since the size of a set is independent of the names given to its items). However, {1,2,3} is obviously a subset of {1,2,3,4}, so {1,2,3} cannot be larger in size than {1,2,3,4}, and therefore {1,2,3} cannot be larger than {dog, cat, apple, bat} since this set was constructed from {1,2,3,4} just by renaming elements. More generally, if one (finite) set is larger than another, then we can always relabel the larger set so that the smaller one becomes a subset of it. Let’s now assume that this property continues to hold for infinite sets (or, if you like, we can use this very natural property as part of the foundation for the definition of the sizes of infinite sets).

Now, we apply this reasoning about relabeling to the real numbers and integers. First, we observe that the integers are a subset of the real numbers, and hence cannot have size larger than the real numbers. On the other hand though (and this is subtle, requiring proof: Cantor’s diagonal argument), it is impossible to relabel the integers in such a way that the real numbers become a subset of them. Hence, the real numbers are indeed larger than the integers in some important sense.

It may seem obvious that the size of the set of real numbers is in some sense a larger infinity than that of the size of the integers. What may come as a greater surprise however, is that the set of integers {…, -2, -1, 0, 1, 2, …}, the set of positive integers {1, 2, 3, 4, …}, and the set of all rational numbers {p/q where p and q are integers and q > 0} are all infinite but have exactly the same infinite size. The reason is simply because relabelings of these sets exist that make them all into the same set. For example, note that the assignment 1=0, 2=1, 3=-1, 4=2, 5=-2, 6=3, 7=-3, etc. will turn the positive integers into the set of all integers.
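Here is that relabeling written out as a tiny Python function (a sketch; the name is ours):

```python
def relabel(n):
    """Send the positive integer n to an integer: 1->0, 2->1, 3->-1, 4->2, 5->-2, ..."""
    return n // 2 if n % 2 == 0 else -(n // 2)

print([relabel(n) for n in range(1, 10)])   # [0, 1, -1, 2, -2, 3, -3, 4, -4]
```

Every integer shows up exactly once, which is exactly what “same size” means for infinite sets.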

As it turns out, there is an infinite “chain” of infinities that measure the size of sets: \aleph_{0}, the size of the integers, then \aleph_{1}, \aleph_{2}, and so on. The real numbers have size 2^{\aleph_{0}}, which is the size of the set of all subsets of the integers, and in general one can always form a strictly larger set from any set by taking the set of all of its subsets. A disturbing question with an even more disturbing answer can then be posed: “does there exist a set whose size is greater than that of the integers but less than that of the real numbers?” Bizarrely, this question turns out to be independent of the standard axioms of mathematics. That leaves us with just three options:

(a) Accept the fact that this mathematical question is unanswerable or “outside of math”.

(b) Reject the existence of sets that are larger than the integers but smaller than the real numbers, which amounts to adopting what is known as the Continuum Hypothesis as a new axiom of math.

(c) Accept the existence of sets with this in-between size, which amounts to adding the existence of such sets as a new mathematical axiom.

2. Limits

Infinities often arise when using “limits”, mathematical constructions which provide a rigorous backbone for calculus. When we have a function f(x) and write

\lim_{x \rightarrow \infty} f(x)

what we mean is the value (if one exists) that the function f(x) approaches as x gets larger and larger. So, for example, we have

\lim_{x \rightarrow \infty} \frac{1}{x} = 0

since by making x large enough, we can make \frac{1}{x} as close to 0 as we like. We would also write that

\lim_{x \rightarrow \infty} x^{2} = \infty

since as x gets larger and larger, x^{2} grows bigger without end (for any real number r there exists an x large enough so that x^{2} exceeds r). We note though that there is a “rate” at which x^{2} approaches infinity. To see this, we can consider taking limits of the ratio of x^{2} to other functions which also grow without bound as x grows. If these ratios “go to infinity”, then x^{2} goes to infinity faster than these other functions. For example, we have:

\lim_{x \rightarrow \infty} \frac{x^{2}}{x} = \infty

and

\lim_{x \rightarrow \infty} \frac{x^{2}}{\log(x)} = \infty

whereas

\lim_{x \rightarrow \infty} \frac{x^{2}}{x^{2}-1} = 1

so x^{2} goes to infinity faster than x and \log(x) but at the same rate as x^{2}-1.
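A quick numerical sanity check of those three ratios (just plugging in ever larger x; an illustration, not a proof):

```python
import math

for x in (10.0, 1_000.0, 100_000.0):
    print(x, x**2 / x, x**2 / math.log(x), x**2 / (x**2 - 1))
```

The first two columns keep blowing up as x grows, while the third settles down to 1.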

3. Algebra

Yet another way to think about infinity is to introduce it as a special “number” with certain properties. For example, we can define \infty so that it satisfies (for all real numbers x):

\infty > x

\infty + x = \infty

\infty \times x = \infty when x > 0

\frac{x}{\infty} = 0

Similar rules could be used to define -\infty.  Some tricky cases arise though for which no sensible definition of \infty seems possible. For example

\infty \times 0 = ?

\frac{\infty}{\infty} = ?

\infty - \infty = ?

Defining each of these latter statements is problematic. However, as long as we never need to multiply infinity by zero, or divide infinities, or do any other “undefinable” operations in whatever context we happen to be working, we can introduce \infty as if it were just a special new type of number.
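The floating point numbers in essentially every programming language bake in exactly these rules, with the “undefinable” cases coming out as nan (“not a number”); a quick illustration in Python:

```python
inf = float('inf')

print(inf > 10**100)   # True: bigger than any ordinary number
print(inf + 5)         # inf
print(inf * 2)         # inf
print(7 / inf)         # 0.0
print(inf * 0)         # nan: no sensible definition exists
print(inf / inf)       # nan
print(inf - inf)       # nan
```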

4. Topology

Topology is the study of topological spaces (which are sort of like a generalized notion of surfaces) and the properties they have that are independent of angles and distances. If two surfaces can be made the same through stretching or pulling without requiring any cutting or gluing (more technically, if they can be mapped onto each other by a continuous function with a continuous inverse) then they are considered identical from a topological perspective. Hence, a disc and rectangular surface are topologically equivalent, as are a rubber band shaped surface and a disc with one hole punched in it (imagine stretching the hole until you get a band like object). You may begin to see why it has been said that a topologist is a person who can’t tell a teacup from a doughnut.

When topologists work with the real number line (i.e. the set of real numbers together with the usual notion of distance, which induces topological structure), they sometimes introduce a “point at infinity”. This point, denoted \infty, can be thought of as the point that you would always be heading towards if you started at 0 and traveled in either direction at any speed for as long as you liked. Strangely, when this infinite point is added to the real number line, it makes it topologically equivalent to a circle (think about the two ends of the number line both joining up to this single infinite point, which closes a loop of sorts). This same procedure can also be carried out for the plane (which is the two dimensional surface consisting of points (x,y) where x and y are any real numbers). By adding a point at infinity we compactify the plane, turning it into something topologically equivalent to a sphere (imagine, if you can, the edges of the infinite plane being folded up until they all join together at a single infinity point).

How to get a circle from a line in four easy steps.

In some sense these “points at infinity” that are introduced are not special in any way (they behave just like all other points from a topological perspective). However, if measures of distance are thrown back into the mix, it seems fair to say that these points at infinity are infinitely far away from all the others.
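For the curious, one explicit way to wrap the real number line around a circle is the (inverse) stereographic projection, a standard construction:

t \mapsto \left( \frac{2t}{1+t^2}, \, \frac{t^2-1}{1+t^2} \right)

As t runs over all real numbers this hits every point of the unit circle except the “north pole” (0,1), and both ends of the line (t \rightarrow \infty and t \rightarrow -\infty) head toward that same missing point. Declaring that the point at infinity lands on (0,1) closes the loop, which is exactly the circle picture described above.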

Conclusion

Ultimately, the question “what is infinity?” is not one that really can be answered, as it assumes that infinite things have a unique identity. It makes more sense to ask “how do infinite things arise in mathematics?”, and the answer is that they arise in many, very important ways.
