Q: How do we know that atomic clocks are accurate?

Physicist: It turns out that there is no way, whatsoever, to look at a single clock and tell whether or not it's accurate.  A good clock isn't so much accurate as it is consistent.  It takes two clocks to demonstrate consistency, and three or more to find a bad clock.

Left: Might have the right time?  Right: Does have the right time.

Just to define the term: a clock is a device that does some predictable physical process, and counts how many times that process is repeated.  For a grandfather clock the process is the swinging of a pendulum, and the counting is done by the escapement.  For an hourglass the process is running out of sand and being flipped over, with the counting done by hand.

A good hourglass can keep time accurate to within a few minutes a day, so sunrise and sunset won't sneak up on you.  This is more or less the accuracy of the "human clock".  Balance wheel clocks are capable of accuracies to within minutes per year, which doesn't sound exciting, but really is.

Minutes per year is accurate enough to do a bunch of historically important stuff.  For example, you can detect that the speed of light isn't infinite.  It takes 16 minutes for light to cross Earth's orbit, and we can predict the eclipsing of Jupiter's moons to within less than 16 minutes.  In fact, telescopes, Jupiter's moons, and big books full of look-up tables were once used to tell time (mostly on ships at sea).
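
As a sanity check, the numbers hold up.  Here's a back-of-the-envelope sketch using modern round values (not the figures 17th-century astronomers had to work with):

```python
# Back-of-the-envelope: how long does light take to cross Earth's orbit?
AU = 1.496e11   # meters, the average Earth-Sun distance
c = 2.998e8     # meters per second, the speed of light

crossing_time = 2 * AU / c   # light crossing the full diameter of the orbit
print(crossing_time / 60)    # ~16.6 minutes
```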

Minutes per year is enough to determine longitude, which is a big deal.  You can use things like the angle between the north star and the horizon to figure out your latitude (north/south measure), but since the Earth spins there's no way to know your longitude (east/west measure) without first knowing what time it is.  Alternatively, if you know your longitude and can see the Sun in the sky, then you can determine the time.  It just depends on which of the two you're trying to establish.
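
The arithmetic behind this is simple: the Earth turns 360° in 24 hours, so every hour of clock difference is 15° of longitude.  A minimal sketch (a toy model that ignores the equation of time; the function name is just for illustration):

```python
def longitude_from_time(local_noon_on_greenwich_clock):
    """Degrees of longitude (positive = east) from when local solar noon
    occurs according to a clock keeping Greenwich time.  The Earth turns
    15 degrees per hour, so each hour of offset is 15 degrees."""
    return 15.0 * (12.0 - local_noon_on_greenwich_clock)

# If the Sun is highest when your Greenwich-set clock reads 16:00,
# your local noon lags Greenwich noon by 4 hours: you're 60 degrees west.
print(longitude_from_time(16.0))   # -60.0
```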

Trouble crops up when someone clever starts asking things like "what is a second?" or "how do we know when a clock is measuring a second?".  Turns out: if your clock is consistent enough, then you can define what a second is in terms of the clock, and suddenly your clock is both consistent and "correct".

An atomic clock uses a quantum process.  The word “quantum” comes from “quanta” which basically means the smallest unit.  The advantage of an atomic clock is that it makes use of the consistency of the universe, and these quanta, to keep consistent time.  Every proton has exactly the same mass, size, charge, etc. as every other proton, but no two pendulums are ever quite the same.  Build an atomic clock anywhere in the universe, and it will always “tick” at the same rate as all of the other atomic clocks.

So, how do we know atomic clocks are consistent?  Get a bunch of different people to build the same kind of clock several times, and then see if they agree with each other.  If they all agree very closely for a very long time, then they’re really consistent.  For example, if you started up a bunch of modern cesium atomic clocks just after the Big Bang, they’d all agree to within about a minute and a half today.  And that’s… well that’s really consistent.
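
To put "a minute and a half since the Big Bang" in perspective, that drift works out to a fractional error of roughly two parts in ten quadrillion.  Rough arithmetic, with round numbers:

```python
# Rough arithmetic: 90 seconds of disagreement over the age of the universe.
SECONDS_PER_YEAR = 3.156e7
age_of_universe = 13.8e9 * SECONDS_PER_YEAR   # ~4.4e17 seconds
drift = 90.0                                  # "about a minute and a half"

print(drift / age_of_universe)                # ~2e-16
```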

In fact, that's a lot more consistent than the clock that is the Earth.  The process the Earth repeats is spinning around, and it's counted by anyone who bothers to wake up in the morning.  It turns out that the length of the day is substantially less consistent than the groups of atomic clocks we have.  Over the lifetime of the Earth, a scant few billion years, the day has lengthened from around 10 hours to 24.  That's not just inconsistent, that's basically broken (as far as being a clock is concerned).

Atomic clocks are a far more precise way of keeping track of time than the length of the day (the turning of the Earth).

So today, one second is no longer defined as "there are 86,400 seconds in one Earth-rotation".  One second is now defined as "the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom", which is what most atomic clocks measure.
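
In other words, a cesium clock doesn't so much measure seconds as count periods; the second falls out by definition:

```python
# The second, by definition: count 9,192,631,770 periods of the cesium-133
# hyperfine transition radiation and call that one second.
CESIUM_HZ = 9_192_631_770        # periods per second, exact by definition

one_period = 1 / CESIUM_HZ       # ~1.09e-10 seconds per period
one_hour = 3600 * CESIUM_HZ      # periods in an hour: 33,093,474,372,000
print(one_period, one_hour)
```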

The best clocks today are arguably a little too good.  They’re accurate enough to detect the relativistic effects of walking speed.  I mean, what good is having a super-watch if carrying it around ruins its super-accuracy?  Scientists use them for lots of stuff, like figuring out how fast their neutrino beams are going from place to place.  But the rest of us mostly use atomic clocks for GPS, and even GPS doesn’t require that many.  Unless 64 is “many”.

That all said: If you have one clock, there’s no way to tell if it’s accurate.

If you have two, either they’re both good (which is good), or one or both of them aren’t.  However, there’s no way to know which is which.

With 3 or more clocks, as long as at least a few of them agree very closely, you can finally know which of your clocks are “right”, or at least working properly, and which are broken.
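
For what it's worth, here is a minimal sketch of that voting logic in Python (the function, the tolerance, and the simple majority rule are illustrative assumptions, not how timing laboratories actually combine clocks):

```python
# A minimal sketch of the three-or-more-clocks logic: clocks that agree
# closely with most of the others are deemed "working"; the rest are suspect.
def find_bad_clocks(readings, tolerance):
    """readings: list of times reported by each clock.
    A clock is 'good' if it agrees (within tolerance) with most of the others."""
    good, bad = [], []
    for i, t in enumerate(readings):
        agree = sum(abs(t - other) <= tolerance
                    for j, other in enumerate(readings) if j != i)
        (good if agree >= len(readings) // 2 else bad).append(i)
    return good, bad

# Clocks 0 and 1 agree; clock 2 is off by ten seconds.
print(find_bad_clocks([100.00, 100.01, 110.00], tolerance=0.1))  # ([0, 1], [2])
```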

That philosophy is at the heart of science in general.  And why “repeatable science” is so important and “anecdotal evidence” is haughtily disregarded in fancy science circles.  If it can’t be shown repeatedly, it can’t be shown to be a real, consistent, working thing.

12 Responses to Q: How do we know that atomic clocks are accurate?

  1. Larry Dale says:

    I once had a friend who bought an expensive watch (it was expensive then) that was either a spin-off or duplicate of the Astronaut's watch… so you know which watch I'm talking about. He was so excited that it was accurate to… (can't remember how much)… and that it would still work 'x' meters under water for years. My reply was simply: 'Wow, that's great! You'd be dead but your watch would still be ticking.'

  2. Kopernik says:

    Origin of the phrase “Time out!”?

  3. Dean Wood says:

    In the history of science, we see that the "second" has been constantly redefined so as to be more accurate. My question is this: how did science arrive at the modern number of "…9,192,631,770 periods…" to define the second? Why is this more accurate than 9,296,000,000 periods? Measurement is always relative to something else, right? So why is "9,192,631,770 periods" more accurate? How do we know?

  4. Norman Watson says:

    Perhaps you could measure the consistency of a clock by using a delay loop to allow previous outputs (oscillations) to be compared with current outputs. This would allow variations in output frequency to be identified. Perhaps this could be used to build up an auto-correlation function for the clock, which could be turned into a Power Spectral Density.
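
    Something like this rough sketch, with made-up tick data standing in for a real clock's output:

```python
import numpy as np

# Toy data: ~1-second tick intervals with a little random jitter.
rng = np.random.default_rng(0)
intervals = 1.0 + 1e-6 * rng.standard_normal(10_000)

# Compare the fluctuations with delayed copies of themselves (autocorrelation),
# then estimate the Power Spectral Density from the same fluctuations.
fluct = intervals - intervals.mean()
autocorr = np.correlate(fluct, fluct, mode="full")[len(fluct) - 1:]
psd = np.abs(np.fft.rfft(fluct)) ** 2   # periodogram estimate of the PSD
```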

  5. Philip Peacock says:

    “how did science arrive at the modern number of “…9,192,631,770 periods…” to define the second? Why is this more accurate than 9,296,000,000 periods? Measurement is always relative to something else, right? So why is “9,192,631,770 periods” more accurate? How do we know?”

    Nobody "knew". It was counted; i.e., they counted how many oscillations it took to be the same length as the previously defined second.

    So it went from
    1/86400 of a mean solar day
    to
    1/31,556,925.9747 of the tropical year for 1900 January 0 at 12 hours ephemeris time
    to
    the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom
    to
    the duration of 9,192,631,770 periods of the radiation corresponding to the transition between the two hyperfine levels of the ground state of the caesium-133 atom at rest at a temperature of 0 K

    All of those steps were to make seconds more consistent rather than based on something that was always changing (like the Earth's rotation or revolution slowing down). But at every step they attempted to match the original definition of 1/86400th of a solar day, simply to match how seconds were defined in the first place.

    If we went the other way and just picked a nice round number, then we’d have to replace all the watches and clocks in the world that use other approximations for a second (like vibrations of crystals or swinging pendulums).

    It’s all arbitrary. There is no “real” second to be discovered.

  6. Guest says:

    “It turns out that there is no way, whatsoever, to look at a single clock and tell whether or not it’s accurate.”

    Is this the case for weighing scales? I try to explain to family and friends the futility of when, e.g., they know their bathroom scales say they weigh x kg and, on holiday, they step onto the hotel's bathroom scales, see they weigh y kg, and assume they have lost/gained (y - x) kg.

    Obviously, there is no way to tell if they would get the same reading if both scales were present, and why give precedence to the second scale anyway?

    But I’m wondering if your general point – about the accuracy of a clock being the wrong question – is applicable to weighing scales.

    (Except in terms of accuracy to some pre-defined calibration tool.)

  7. The Physicist says:

    @Guest
    Absolutely! It’s an extremely general statement.

  8. Guest says:

    Lovely! Thank you.

  9. Thaddeus Buttmunch says:

    But how do they define the hour? just sixty times the nine billion cesium vibrations??

    And how do they ZERO the thing to Begin with??

    How do they SET it in other words

  10. wouter says:

    >But how do they define the hour? just sixty times the nine billion cesium vibrations??

    Almost. 60 times 1 second gives you one minute, not one hour. An hour is 60 × 60 = 3,600 seconds.

    >And how do they ZERO the thing to Begin with??

    Not. You don’t need to zero to define a time of 1 second.

  11. Sekhar says:

    This refers to the penultimate paragraph. (Quote “With 3 or more clocks, as long as at least a few of them agree very closely, you can finally know which of your clocks are “right”, or at least working properly, and which are broken” Unquote).

    That might lead to confusion: Consider 3 clocks. You imply "right" to mean consistent. But then, determining consistency with only three clocks may lead to error. E.g., clocks No. 1 and No. 2 may agree with each other, as compared to No. 3. But No. 3 may agree with a fourth clock, No. 4, to a much closer degree than Nos. 1 and 2.

    So, perhaps, it would be correct to say that:
    (A) Clocks that are more consistent are deemed more correct
    (B) To test clocks for consistency requires at least two clocks of each type, to establish consistency within the type, or at least four clocks if all are of the same type
    (C) if there are three clocks, and two agree amongst themselves (i.e. are consistent), and the third one differs from each of the other two, a fourth clock may have to be tested as well to see which set it agrees with. Note that a fifth clock is not required, i.e. there is no infinite progression in the number of clocks.
    (D) That means 4 or more clocks are required to test for consistency (or two or more clocks of a given type, if there is more than one type), as opposed to 3 or more clocks.

    The above ie (A), (B), (C) and (D) seem to be correct, but I am not sure. Would appreciate any comments on what I said.

    Many thanks for an interesting article.

  12. X says:

    Years later: How do we know that "if you started up a bunch of modern cesium atomic clocks just after the Big Bang, they'd all agree to within about a minute and a half today"? Have the atoms 'oscillated' at the same speed all through history? How do we know? Cosmologists speculate about all kinds of strange changes in the laws of physics in the early universe. So how do we know…
