Mathematician: Usually, in math, there are lots of ways of writing the same thing. For instance:
$1 = 1.0 = \frac{2}{2} = \frac{3}{3} = 5 - 4$
As it so happens, 0.9999… repeating is just another way of writing 1. A slick way to see this is to use:
$0.9999\ldots = \frac{9 \times 0.9999\ldots}{9} = \frac{(10-1) \times 0.9999\ldots}{9} = \frac{10 \times 0.9999\ldots - 0.9999\ldots}{9} = \frac{9.9999\ldots - 0.9999\ldots}{9} = \frac{9}{9} = 1$
Another approach, which makes it a bit clearer what is going on, is to consider limits. Let's define:
$a_1 = 0.9$
$a_2 = 0.99$
$a_3 = 0.999$
…
and so on, so that $a_n$ has exactly n nines after the decimal point.
Now, our number is bigger than $a_n$ for every n, since our number has an infinite number of 9's, whereas $a_n$ always has a finite number, so we can write:
$0.9999\ldots > a_n$ for all n.
Taking 1 and subtracting all parts of the inequality from it gives:
$1 - 0.9999\ldots < 1 - a_n$.
Then, we observe that:
$1 - a_n = 10^{-n}$,
and hence
$1 - 0.9999\ldots < 10^{-n}$.
But we can make the right-hand side, $10^{-n}$, into as small a positive number as we like by making n sufficiently large. That implies that $1 - 0.9999\ldots$ must be smaller than every positive number. At the same time, though, it must also be at least as big as zero, since 0.9999… is clearly not bigger than 1. Hence, the only possibility is that
$1 - 0.9999\ldots = 0$
and therefore that
$0.9999\ldots = 1$.
What we see here is that 0.9999… is closer to 1 than any real number that is less than 1 (since we showed that 1 − 0.9999… must be smaller than every positive number). This is intuitive given the infinitely repeating 9's. But since there aren't any numbers "between" 1 and all of the real numbers less than 1, 0.9999… can't help but be exactly 1.
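If you'd like to watch that gap shrink numerically, here is a minimal Python sketch (the name `partial_nines` is purely illustrative) that computes $1 - a_n$ exactly for a few values of n:

```python
from fractions import Fraction

def partial_nines(n):
    """Return a_n = 0.99...9 with exactly n nines, as an exact fraction."""
    return Fraction(10**n - 1, 10**n)

for n in (1, 2, 5, 10, 20):
    gap = 1 - partial_nines(n)   # equals 10^(-n) exactly
    print(f"n = {n:2d}   1 - a_n = {gap}")
```

The gap is exactly $10^{-n}$, so it can be made smaller than any positive number, which is the heart of the argument above.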
Update: As one commenter pointed out, I am assuming in this article certain things about 0.9999…. In particular, I am assuming that you already believe that it is a real number (or, if you like, that it has a few properties that we generally assume that numbers have). If you don’t believe this about 0.9999… or would like to see a discussion of these issues, you can check this out.
Which can be summed up by saying that, in the decimal system, rational numbers that aren't usually "shown" in their repeating decimal form do in fact have a repeating decimal representation:
1 / 1 = 0.9999… = 1
1 / 2 = 0.4999… = 0.5
1 / 3 = 0.3333… = (no short representation)
1 / 4 = 0.2499… = 0.25
1 / 5 = 0.1999… = 0.2
1 / 6 = 0.1666… = (no short representation)
…
2 / 1 = 1.9999… = 2
2 / 2 = 0.9999… = 1
2 / 3 = 0.6666… = (no short representation)
…
And so on and so forth. In fact, it wouldn’t surprise me to find someone out there arguing the repeating decimal representation to be the most appropriate decimal version. 🙂
By the way, are there practical circumstances (let’s say, some interesting algorithm) in which this representation, rather than the usual one, would be preferable or even easier to use?
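In case it helps the discussion, here is a small Python sketch (the function name is made up) that produces the repeating decimal expansion of a fraction by long division, detecting the repeating block by watching for a repeated remainder:

```python
def repeating_decimal(p, q):
    """Return (integer_part, non_repeating_digits, repeating_digits) for p/q."""
    integer_part, r = divmod(p, q)
    digits, seen = [], {}
    while r and r not in seen:
        seen[r] = len(digits)          # position where this remainder first appeared
        r *= 10
        d, r = divmod(r, q)
        digits.append(str(d))
    if not r:                          # division terminated: no repeating block
        return integer_part, "".join(digits), ""
    start = seen[r]
    return integer_part, "".join(digits[:start]), "".join(digits[start:])

print(repeating_decimal(1, 3))   # (0, '', '3')      -> 0.(3)
print(repeating_decimal(1, 6))   # (0, '1', '6')     -> 0.1(6)
print(repeating_decimal(2, 7))   # (0, '', '285714') -> 0.(285714)
```

Note that ordinary long division never produces the trailing-nines form, so that form really is the "extra" representation listed in the table above.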
I always liked this explanation:
1/3 = 0.33333…
2/3 = 0.66666…
3/3 = 0.999999… = 1
I hate to do it, but I'm going to have to disagree with the explanation, though I'll grant it's a very common mis-explanation. I addressed the question here: http://www.xamuel.com/why-is-0point999-1/ To summarize it here: the question is not so much why 0.999… equals 1, but rather, what ARE real numbers in the first place? One naive approach is to define real numbers to BE their decimal representations, with arithmetic DEFINED by the algorithms, and this is what anyone would implicitly think after K-12 math (since we never tell them otherwise). According to this naive definition, 0.999… is NOT 1, as these decimal representations are certainly different. But what the explanations in your post demonstrate is that this naive definition is a bad one, because it causes algebra to break (for example, using the naive definition, the distributive law breaks and a sequence becomes able to converge to two different limits simultaneously). Since we can't stomach having such badly behaved arithmetic, we basically *define* 0.9999…=1 (and 1.999…=2, and so on) using equivalence classes. TL;DR: 0.999…=1 by definition.
Xamuel, may I ask which of the two explanations you are disagreeing with, and on what level (as persuasion for the layman, as a proof to blahblah, or otherwise)? And why do you think most people implicitly take the decimal expansion as their definition of the real numbers? Is that what you gathered from your non-math friends?
Paul: Fact is, all the standard "proofs" of 0.999…=1 go about things the wrong way. The general pattern is, they present an argument which uses some Fact X, which is some well-behavedness fact about the reals. They then conclude that if 0.999… is not 1, then Fact X is wrong, which would make the reals badly behaved, so 0.999…=1. But this is backwards. However much we want Fact X to hold, it depends on WHAT the reals actually are.
(.999…)=9*(.999…)/9=(10-1)*(.999…)/9: We have used the distributive law. This law is not true of formal decimals under the arithmetic of the addition and multiplication algorithms.
Limit argument: We use the fact that a sequence can converge to at most one limit (formalized with epsilons and deltas): This fact is not true of formal decimals under the arithmetic of the addition and multiplication algorithms.
3/3=3*(1/3)=3*(.333…)=.999…: We have used some facts about division and multiplication which aren’t true of formal decimals under the algorithmic addition and multiplication.
Correct Conclusion: We should not use formal decimals as our definition of the reals. We should use Dedekind cuts or Cauchy sequences. Or we can use Stevin’s construction (google it), where we do use formal decimals, but DECLARE that 0.999…=1. By definition. In ORDER to make other desired facts work.
Incorrect Conclusion: “So 0.999…=1. Because reals gotta be well-behaved. Don’t ask what reals are! Nothing to see here!”
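To make the Cauchy-sequence/Stevin viewpoint concrete, here is a small illustrative Python check (using exact fractions; the helper names are made up) that the truncations of 0.999… and the constant sequence 1, 1, 1, … eventually differ by less than any chosen tolerance, which is exactly the equivalence being declared:

```python
from fractions import Fraction

def truncation(n):
    """n-th truncation of 0.999..., i.e. 0.9...9 with n nines, as an exact fraction."""
    return Fraction(10**n - 1, 10**n)

def eventually_within(seq_a, seq_b, eps, horizon=500):
    """Crude check: |a_n - b_n| drops below eps for some n (here the gap only shrinks afterwards)."""
    return any(abs(seq_a(n) - seq_b(n)) < eps for n in range(1, horizon))

# The truncations of 0.999... and the constant sequence 1, 1, 1, ... are equivalent
# Cauchy sequences: their difference (exactly 10^-n) drops below any tolerance.
print(eventually_within(truncation, lambda n: Fraction(1), Fraction(1, 10**100)))  # True
```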
I admit I’ve never actually polled random people about what numbers are. But in my experience, elementary math is most often taught as though reals are formal strings of decimals, with arithmetic defined by algorithms. You’re right, this might be something worth investigating.
And now that I check, Google does not correctly find Stevin’s construction -_- Here’s a link: http://en.wikipedia.org/wiki/Construction_of_the_real_numbers#Stevin.27s_construction
Hey Xamuel,
Thanks for your comment. Yes, I am implicitly assuming that 0.9999… is a real number (or, at least, has certain properties of real numbers). The proofs I used work just fine under this assumption. That does not imply this is a mis-explanation, only that it requires belief that 0.9999… has the properties of other numbers. The people I have shown these simple proofs to seem satisfied (you excluded), implying that they believe that 0.9999… has the properties that other numbers have. In light of that, I do not know of evidence for your claim that the wrong question is being answered or the wrong assumptions are being made.
When you round, it is. 0.99999999999999999 is 0.00000000000000001 away from one. So, technically, you could count it as one unless you are working on a rocket ship… Being exact is important at that point.
Yeah, Xamuel has got it: the definition of the Reals needs a bit of sprucing up. You can make the case for the non-existence of any Real Number that is NOT of the form .9999999…, because if the number does not include an unknown element, then it is "imaginary". This would butt heads with what we commonly think of as an imaginary number. Anyway, the whole question goes nowhere until there is a change in Western philosophical direction concerning ontology, and that is not happening for a while.
If .999999… is equal to one, does that mean math is incapable of describing a number which infinitely approaches, but is not quite equal to, one? What if I take one, and then subtract the smallest possible amount from it? It should be almost but not quite one, and I would figure it would be represented by .999999… forever.
Unfortunately, there’s no “smallest amount”. Necessarily, if you take away any actual amount away from one you’ll end up with something smaller than 0.999…
But what if I took that amount away, giving me something smaller than 0.9999…, then kept adding a little more, continuously, over an infinite amount of time? The number infinitely approaches one but never hits it. And if I wanted to describe this ever-changing number that approaches one without ever reaching it, how would I do so if I cannot use 0.99999…? Is there some other notation for it?
You know, there’s another number system that might be useful for analogies here. We all learn very early the difference between these “pseudo-numbers” and the actual “numbers” themselves.
I’m talking about , of course. and so forth. These are all different symbols, different pseudo-rational numbers, but they’re all representatives of the same rational number. How do we know two formal fractions are equal? Precisely when they cross-multiply to equal numbers.
As we know, something similar happens with real numbers (I'm taking Xamuel's definition of equivalence classes of pseudo-reals, BTW, because I like it; it's just the Cauchy sequence definition, but slightly pared down, I think), except in this case two decimals are equal precisely when their difference goes to zero. So since the difference between 1 and the truncations of 0.999… is $10^{-n}$, which goes to zero, $0.999\ldots = 1$.
I’ve never seen anybody compare directly to before, but it seems to me to make good sense.
(I wrote up something a little more extensive here.)
(PS I am hoping that this will take tex?)
We've got a LaTeX plug-in for this site. To write something in LaTeX, write "$latex", then your code, and end with "$".
(I added in the “latex” for the last comment already)
@christopher: You're exactly describing a "limiting sequence", or more specifically a "Cauchy sequence". It turns out that one of the properties of the number line is that a limiting sequence only approaches one point.
In this case you can definitely construct something that approaches 0.9999…, but at the same time that sequence will also approach 1.
They’re the same after all!
I say 1 = 0.99999… because when you subtract 0.99999… from 1, the number that comes out will be 0.00000…, and there can't be a digit at the end of that stream of 0s, because if there were, then the 0.9999… wouldn't really go on forever. So the number added to 0.9999… to make 1 must be 0, and since 1 − 0 can only be 1, that makes 1 − 0 and 0.99999… + 0 equal, i.e. 0.9999… = 1.
1/3 != .3333333333
1/3 = .3333333333…. + 1/ (3 *10 ^infinity)
2/3 = .666666666….. + 2/(3*10^ infinity)
3/3 = .99999999… + 3/(3*10^infinity) = .999999999… + .000…001 = 1
o.O
.9999999999….. != 1
0.9999… = (9*0.9999…) / 9 = ((10-1) 0.9999…) / 9
= (10*0.9999… – 0.9999…) / 9
= (9.9999…99990 – 0.9999…) / 9
= (9 + 0.9999…99990 – 0.9999…99999) / 9 = (9-.000…0001) / 9 = 8.9999999999/9 != 1 o.o
In another thread https://www.askamathematician.com/?p=6992 you said that because two different numbers equaled each other, the original hypothesis could not be true. Why doesn’t this apply here? (Rather, wouldn’t it prove that 0.99… isn’t a real number [a non-real hypothesis?])
That was me (a Physicist!).
In that case, the fact that 0 and 1 are definitively different numbers was used to prove another point. If, however, the fact that 0 isn’t 1 was in doubt, then that fact (0 isn’t 1) couldn’t be used to prove anything else.
It depends on the number system. If you use hyperreal numbers, they are not the same.
Someone provided this algebra to show 0.99…= 1
x = 0.999….
10x = 9.999….
10x-x= 9x = 9.99… – 0.999… = 9
–> 9x = 9 –> x = 1
But the way I see it is like this.
take a finite precision say x=0.9999
10x would then be:
10x = …
10* 0.0009 = 0.0090
+ 10* 0.0090 = 0.0900
+ 10* 0.0900 = 0.9000
+ 10* 0.9000 = 9.0000
Adding the RHS gives 0.0090 + 0.0900 + 0.9000 + 9.0000 = 9.9990
So whereas x = 0.9999, 10x = 9.9990.
Following the same algebra
10x – x = 9x = 10*(0.9999) – 0.9999 = 9.9990 – 0.9999 = 8.9991
–> 9x = 8.9991
solving for x –> x = 8.9991/9 = 0.9999.
This will be true for any finite precision, so why does the infinite limit x=0.999…. = 1?
This surely has to involve a conceptual mathematical preference, say that the difference can’t become infinitely small forever.
I can see how that might be the case in physical reality, but why does it ever have to be the case numerically?
Both of the derivations you did here amount to multiplying by 9 and then dividing by 9. When you do that you end up back where you started.
In the first case you find (and it’s not a super-rigorous proof) that 0.9999… = 1. In the second case you find that 0.9999 = 0.9999.
The second case doesn’t say much of anything. If you’re dissatisfied with a particular proof, try to find a new one! There are very few mathematical ideas that don’t have more than one proof.
Fascinating! I never could get my head around weird numbers. What I don’t understand on this topic rests upon the fact that 1/2 = 0.5 yet 1/3 = 0.333…
Whether I cut a cake into exact halves or exact thirds, each slice is finite and definite in size. Yet our decimal system comes up with a recurring number, as though our 1/3 slice can never be exactly 1/3.
To me, this says that the decimal system (if it is right to call it that) has limitations, and cannot communicate reality exactly. So, when we write 0.333… or 0.999…, we *mean* exactly 1/3 or exactly 1.
Is that right? Or have I just bumped my head again?
Just another thought… 0.999… recurring is not really infinite – as long as it retains any meaning as a number – because if you follow the train of 9s you will reach an infinitesimal point where the size of that 9 is too small to measure the smallest part of a quantum particle, right? At that point, that 9 must be rounded to have any meaning.
So 0.999… in our universe is really 1.
Whereas, if we really had a universe built up of infinitely regressive smaller parts, so that you could never run out of bits to measure, 0.999… would always be different to 1.
In the first explanation above (10 x 0.999… – 1 x 0.999… = 9.999… – 0.999…), the decimal point in the 10 x 0.999… is being shifted left, so that there is always one less 9 in its fractional series than in the 0.999…; therefore, if we cannot round the fractional parts, they will never be equal?
So I’m thinking, it is in the rounding of the fraction that we find relief, and it is legitimate to round an infinite fraction because there is no such thing as an infinitely small part?
Physical laws don’t apply when we’re talking about math, which is a shame. It would be nice to be able to use physical intuition in (more of) math.
The important thing here is that the string of 9’s, despite not adding up to anything greater than one, never ends. So, when you shift a little by multiplying by 10 for example, you don’t change the string of 9’s. That is, there’s no “last 9” that can move.
Math involving infinite quantities or infinite numbers of things is a little tricky, but nothing mysterious. Mathematicians deal with this stuff all the time.
So, it is an infinite series: .9999… can be represented by the sum, from n = 1 to infinity, of 9/10^n. Treat that as a geometric series and you'll find it sums to 1. So a limit is being taken, if you look at the proof.
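For reference, here is that geometric-series calculation written out, using the standard formula $\sum_{n=1}^{\infty} r^{n} = \frac{r}{1-r}$ for $|r| < 1$:

$0.999\ldots = \sum_{n=1}^{\infty} \frac{9}{10^{n}} = 9 \sum_{n=1}^{\infty} \left(\tfrac{1}{10}\right)^{n} = 9 \cdot \frac{1/10}{1 - 1/10} = 9 \cdot \frac{1}{9} = 1$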
I agree with a lot of what Xamuel was stating. What *is* a decimal representation?
People just like to say 1/2 = 0.5, but in binary, 1/2 is represented as 0.1, isn't it? In hexadecimal, 1/2 is written as 0.8. This may seem trivial, but it is not. 0.99… has a significant meaning in base ten: it means that you are 1 short of completing a decimal place at each power of ten. It doesn't matter how many powers of ten you go out, you're still not going to reach 1, or 10^0. This is because, in base ten, a fraction (in lowest terms) has a terminating representation only when its denominator's prime factors are 2 and/or 5.
For example:
1/1 = 1
1/2 = 0.5 = (5 x 10)/(10^2)
1/3 = 0.33… = ((3 x 10)/(10^2)) + ((3 x 10)/(10^3)) …
= we cannot completely write this in decimal…
1/4 = 0.25 = (5 x 5)/(10^2)
1/5 = 0.20 = (5 x (2^2))/(10^2)
1/6 = 0.166… = ((1 x 10) /(10^2)) + (((3 x 2) x 10)/(10^3)) + (((3 x 2) x 10)/(10^4))…
…again, this cannot be fully written out as a terminating decimal…
1/7 = 0.142857(repeating)… I’m too lazy to write it out, but again, it is an infinite series….
1/8 = 0.125 = ((1 x 10)/(10^2)) + ((2 x 10)/(10^3)) + ((5 x 10)/(10^4))
1/9 is a similar situation to that of 1/3 or 1/6..
1/10 = 0.1 = ((1 x 10)/(10^2))
I know that is a lot of text to point out obvious statements, and that I might have made a typo or two, but the point is this: a base, or in this case decimal notation, should serve a purpose.
I don’t know the exact reason we use decimal in many cultures, but I’m assuming it has to do with counting and the use of fingers. Since counting is something that we endeavor to do quickly (therefore in a finite amount of time) it makes little sense to try to represent fractions with indefinitely repeating decimals if we wish to gain exact answers. Of course it is valid for conceptualization or estimation and easier than switching between different bases. So one could validly say that 0.33 repeating conceptually limits to 1/3… or for most applications just use 0.33 not repeating (use 2 or 3 or x digits).
What really matters is the level of precision we need for the task at hand. If it becomes more arduous to represent something in x digit places than to simply change to another base, then we should switch bases. This can be a headache, though, since we're so used to counting, and thus calculating, in the base system we were brought up in (usually base 10, but not everywhere on Earth).
I prefer not to view 1 as equal to 0.99 repeating. I don't know why it bothers me so much… because infinitesimals are no fun either… I guess my hang-up is that every base should have one optimal way of representing a rational number, and yet this mindset of not allowing infinitesimals forces every system to have two representations for every rational value that terminates. This implies to me that it is the result of human tinkering, and it limits us to rational numbers as well.
How would you define 0.99… + pi or + sqrt(2)? You cannot really write this out without forcing 0.99… to equal 1. So why not leave 0.99… alone in the first place when working in a base that doesn't play nicely with thirds? Well, those are my thoughts and feelings on the topic, at least.
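Since this comment leans on how the choice of base decides which fractions terminate, here is a small illustrative Python sketch (the function name is made up) that expands a fraction in an arbitrary base and reports whether the expansion terminates:

```python
def expand_in_base(p, q, base, max_digits=30):
    """Digits of p/q (with 0 < p < q) in the given base, stopping at max_digits."""
    digits = []
    r = p
    for _ in range(max_digits):
        if r == 0:
            break
        r *= base
        d, r = divmod(r, q)
        digits.append(d)
    terminates = (r == 0)
    return digits, terminates

print(expand_in_base(1, 2, 2))    # ([1], True)   -> 0.1 in binary
print(expand_in_base(1, 2, 16))   # ([8], True)   -> 0.8 in hex
print(expand_in_base(1, 3, 10))   # ([3, 3, 3, ...], False)
print(expand_in_base(1, 3, 3))    # ([1], True)   -> 0.1 in base 3
```

A fraction p/q in lowest terms terminates in base b exactly when every prime factor of q divides b, which is why 1/3 terminates in base 3 but not in base 10.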
An infinite number of mathematicians walked into a bar.
The first mathematician ordered a beer.
The 2nd mathematician ordered half a beer.
The 3rd mathematician ordered a quarter of a beer.
The 4th mathematician ordered an eighth of a beer.
The bartender yelled “You all are idiots!”
and poured two beers.
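(The bartender's arithmetic, assuming the orders keep halving forever:

$1 + \tfrac{1}{2} + \tfrac{1}{4} + \tfrac{1}{8} + \cdots = \sum_{n=0}^{\infty} \left(\tfrac{1}{2}\right)^{n} = \frac{1}{1 - \tfrac{1}{2}} = 2$.)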
Just curious, does the problem we have with the representation of 1/3=.3333… (or .999…= 1) have anything to do with why we can not trisect an angle using only a compass and straight edge?
Nope!
That’s a whole other thing.
This is wrong; 0.999… is not 1. This can be explained by solving an equation:
0.99… + x = 1
If 0.99…. Does equal one then x should be zero. Now if you solve for x:
0.99… + x = 1
1 + x = 1 + 0.00…1
x = 0.00…1
Sophie,
There is no such number as 0.00…1. There cannot be a 1 to the right of an infinite number of 0s. That is, there is nothing after infinity.
The equality is false for so many reasons. Not a single valid proof exists in support of the argument that 0.999… = 1. However, ignorant academics since Euler cling to this nonsense.
To read about how I systematically debunked this nonsense, I recommend you visit:
http://www.spacetimeandtheuniverse.com/math/6661-0-999-really-equal-1-a.html
and for the entire debate:
http://www.spacetimeandtheuniverse.com/math/4507-0-999-equal-one-116.html#post17360
Every single so-called “proof” that 0.999… = 1 is debunked in the debate.
For all those who are writing about 0.00…1: we must remember that 0.9999… has an infinite number of 9s beyond the decimal point. Therefore, if you subtract 0.999… from one, you get 0.000… There would be an infinite number of zeros, and you would never reach the 1; it just isn't there.
Possibly the clearest proof of 0.99… = 1, for me, is this:
0.999… = x
multiply by 10
9.999… = 10x
subtract x where x is 0.999…
9 = 9x
divide by 9
1 =x
It is surprising that no one has referred to Non-Standard Analysis in this debate. The real numbers are constructed by an axiomatic approach: we obtain a set which is a MODEL of the axioms. However, logicians proved that there can always be nonstandard models in which all the axioms hold.
In the standard model of the reals, there are no infinitesimals other than ZERO. That means that the only nonnegative real smaller than every reciprocal (1/n) is ZERO. Since the difference between 0.999… and 1 must be an infinitesimal, and the only infinitesimal is zero, it follows that there is no difference.
In a non-standard model of the reals, there exist infinitesimals other than zero. These are smaller than all positive reals, but greater than ZERO. In such models the two can be different.
There is no logical way to settle the argument, because we can choose whichever model we want to use for our purposes. Both are logically consistent, and both satisfy the (first-order) axioms of the real numbers.
People using the proof of 9.9999… − 0.9999… are actually rounding, and all the other proofs depend on rounding; if you are not rounding, you'll end up with 00 − 00. I want to ask: does anyone have a physical explanation of this? Infinite functions have physical meanings, like short circuits or resonance, etc. The simplest example is cutting an apple or pizza into exactly 3 pieces: why is the function of cutting into 120 degrees (the Mercedes mark) discontinuous, while cutting into 5 pieces, for example, or in half, is exact? If anyone has thought about it, please let me know. Thanks, all.
What if I wrote 0.875… = 1. Do you think this makes things any clearer? 0.999…, like any other non-terminating representation is nonsense. 0.999… is equal to nothing but itself. You can divide a circle into exactly 3 pieces (this was discussed at the thread http://www.spacetimeandtheuniverse.com/math/4507-0-999-equal-one-317.html).
Infinity is an ill-defined concept and has no place in mathematics.
People are looking at 0.00000… wrongly.
It is 1/10ⁿ
C’mon guys. We don’t expect people to be thinking that it is “infinite zeros followed by a one”.
→ You start with 1, and begin dividing it by successive powers of 10, just as many as you do with Sum(9/10^k) k=1→n
There is a huge difference in this understanding.
One divided by any number > 1 is never "0", in theory.
The limit is 0, but the actual value never is.
I hope we all understand the difference between limit and value (outcome, result) of a function, yes?
That is wrong. 0.999999… is 999999…/1000000…, and the denominator minus the numerator is NOT 0, it's 1! Although the PROPORTION is getting smaller, it will always be missing that 1. 10 × (0.99999…) is NOT 9.9999…, it is 9.9999…0, due to the Rule of Infinity! That way, the other repeating digits will be EXTENDED by 1:
9.99999…0 (that is, 10 × 0.9999…)
− 0.99999…9 (that is, 0.999…)
= 8.99999…1
So 0.9999… is NOT 1; lim 0.9999… is 1. WHY SHOULD YOU ESTIMATE?
Go to themathpro.wordpress.com for more info.
It is impossible to write repeating nines in the form of a rational number, unlike any other repeating decimal. We can have 0.(9) = 1 − 1/inf., but infinity is not a number; it is only an abstract conception. What I ask myself is: do 0.(9), or 1.56(9), and so on, really exist?
9*0.(1)=1
7*0.(142857)=1
not 0.(9)
0.(1) is not equal to 1/9
When we divide 1 by 9, there is always a residue of one. The residue gets smaller and smaller, but never equals zero. That infinitely small part is the difference between 0.(1) and 1/9, and when multiplied it makes the difference between 0.(9) and 1.
1/9 = 0.1 + 1/90 = 0.11 + 1/900 = 0.111 + 1/9000 = … = 0.11…1 (n ones) + 1/(9·10^n)
No matter how many ones we have, there is always an amount, 1/(9·10^n), that is not included in 0.(1).
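For what it's worth, the identity being used here can be written out explicitly; the residue term is present for every finite n, and it shrinks toward zero:

$\frac{1}{9} = \underbrace{0.1\ldots1}_{n\ \text{ones}} + \frac{1}{9\cdot 10^{n}}, \qquad \frac{1}{9\cdot 10^{n}} \to 0 \text{ as } n \to \infty$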
The argument is that the "residue" gets infinitesimally small, and thus doesn't exist in the "real number system". However, it is not the infinitesimal that is the problem; the problem is with an index that increases without bound, so that you cannot identify the "position" the 1 is in. f(n) = 1/10ⁿ. If n→inf, what position is the 1 in?
The counter argument is: “what position are any of those 9’s in ‘at infinity’ ?”
They are also infinitesimals.
It all stems from the ill-formed idea ‘infinity’ .
The ellipsis in 0.999… means "and so on", but mathematicians feel obligated to define it using infinity, and that is when everything goes to crap.
0.999… equals an “infinite series” math object. How can it be a number if it can’t define a finite amount? To which someone will argue: “It is exactly 1!”
To me, this is admission that 0.999… means nothing if you require another decimal number to describe the quantity.
Also, Euler’s proof of infinite sums is flawed. It is all based on finite partial sums, and then he basically uses mathematical induction to conclude his theory. The problem though, is that is used incorrect operations on infinite series to reach his conclusion.
When used correctly, he would have concluded that 0.(9) = 0.(9).
It is true that we are not able to define how much bigger 1/9 is than 0.(1). But it is obviously bigger. There has to be a reason that 0.(9) cannot be expressed as a rational number. When we work with rational numbers there is no problem. And everything that we think equals 0.(9), when transformed into rational-number form, equals 1.
There is more. In the nonary (base-9) system, 1/9, which is 0.1111… in decimal, is 1/10, or 0.1, in nonary. Multiplied by 9 (decimal) or 10 (nonary), it equals 1 (in both systems). No 0.(9)s (decimal) or 0.(8)s (nonary). So the problem is that in the decimal system 1/9 does not equal 0.(1).
But in the nonary system we still cannot express 0.(8) as a rational number. We can write all other repeating decimals as fractions, but not this one. And the same is true for all other bases: binary, ternary, and so on.
From another mathematician:
It’s great to see that this thread is still going. Some links say that mathematicians laugh at the foolishness of those who say that 0.999… is not equal to 1. Those are the egotistical mathematicians. Of all subjects, math should be the one that teaches us best to be humble in what we have learned. As someone said, “Avoid pride, for the lowest fool can ask a question that the wisest cannot answer.”
That said, I would like to add my comments on the side of 0.999… = 1. The algebra proof about 10x = 9.999… and so forth is quite convincing, if one accepts the existence of an infinite train of 9s as representing a number. One accepts that 0.999… means "a rational number" as soon as one writes "let x = 0.999…". With that under the belt, the proof really proves that 0.999… = 1, since the proof uses the basic laws of algebra to reach the conclusion.
But what if someone says, "I can't even be sure that x = 0.999… really means anything. That is, I feel that it is about as meaningful as x = 0/0, which certainly has no meaning." Then that proof cannot continue.
Thus, let’s try a new approach. Does everyone believe in long division of whole numbers, as in 23 divided by 7? I hope everyone does. Since we don’t have a key for the long division symbol, let’s use #, so that
23/7 = 7#23 = 3.285714285714…
with that block of digits repeating forever. Now do the long division 8#8, which must equal 1. Do it again; this time, do it as 8#8.0. The first step does 8 into 8.0, which gives 9 above the 0, with the point to the left, i.e., .9. Multiply, and subtract 72, and get remainder 8. Bring down the next digit, which is 0, and divide again. A second 9 appears in the quotient. Multiply, subtract 72, and the remainder is 8, etc., etc.
Thus, we have that by long division, 8 over 8 is .99999… And the division never ends, so the train of 9s never ends. Thus, the long division shows that 8/8 (which is 1) is equal to .999… They are indeed two ways to write 1 (a third way is 8/8).
The same thing will occur if we use 7#7, or 63#63, or whatever, except 0#0.
Is this convincing enough? If not, then long division is in serious trouble, but surely we shouldn’t mess with it.
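The long-division procedure described above can be mimicked in a few lines of Python; this is only an illustrative sketch of the construction (the cap at digit 9 is what ordinary long division requires, and forcing the integer part to 0 is the move the comment describes):

```python
def divide_with_forced_zero(n, digits=10):
    """Long-divide n by n, but take 0 as the integer part, so every later
    quotient digit comes out as 9 (mimicking the 8#8.000... division above)."""
    quotient = "0."
    remainder = n                    # integer part forced to 0, so all of n carries over
    for _ in range(digits):
        remainder *= 10              # bring down a 0
        d = min(remainder // n, 9)   # long-division digits are capped at 9
        quotient += str(d)
        remainder -= d * n           # remainder returns to n every time
    return quotient

print(divide_with_forced_zero(8))    # 0.9999999999
print(divide_with_forced_zero(63))   # 0.9999999999
```

The objection in the next comment is that standard long division answers "1" at the first step; the construction only works because that first digit is deliberately underestimated.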
Actually, for long division, the first decision question is “Does 8 divide into 8?”
The answer is “Yes” and “How many times” the answer is “1”.
Your way, the answer to the first decision question is “0” which is false.
And 23/7 is 3R2, ie: 21/7 + 2/7, or any number of ways to include the remainder, which is that part at the bottom of the long division after the subtraction is done.
23/7 = 21/7 + 2/7
23/7 = 21/7 + 20/70 = 21/7 + 14/70 + 6/70
= 21/7 + 14/70 + 60/700
=21/7 + 14/70 + 56/700 + 4/700
= 3.28 R4
You can’t have the next iteration without the remainder. There is no way to represent 23/7 as a decimal number. Any attempt will be an approximation only.
Yes, 0.99999… = 1. Really. Why? I can give several reasons. (Yes, the first two have more or less been given above. But I’m giving an alternate way of phrasing it, I think.)
1) First, it is certainly not greater than 1. Now, you can take any number that is less than one and then carry out enough 9s in 0.999… to exceed that number. So it cannot be less than one. Now, if a number is not less than one, and also not greater than 1, it must be 1. Here the squeeze theorem takes on a whole new meaning — sort of.
2) If 0.999… is not 1, then 0.333… is not 1/3. I mean really, 1/3=0.333…, so 1/3*3 =3*0.333…=0.999… I see nothing wrong with this.
3) In my _Introduction to Analysis_ book by Rosenlicht, an infinite decimal is defined as the least upper bound of the sequence a0, a0.a1, a0.a1a2, a0.a1a2a3…, where each "an", with "n" an integer, is a single-digit numeral. You absolutely need this for things like 1/3 = 0.333…, 1/7 = 0.142857142857142857142857…, etc. to be true. And the only way the infinite decimals on the RHS equal the fractions on the LHS is if the value of an infinite decimal is the least upper bound of the sequence of partial sums formed by appending one digit at a time, as given above (and spelled out after this list).
This also means that you have equalities like 5.1399999… = 5.1400000…, etc., an example actually given in the book.
It is also needed to make the mathematician's answer work. You have 10*0.999… = 9.999… But you've basically added a leading 9, which, when replacing 0.999… by x, gives 10x = 9 + x, which gives x = 1, and 1 is the LUB of 0.999…
4) You could say 0.999… approaches 1, but never gets there. You can similarly say that 0.333… approaches 1/3, but never gets there. So if 0.333… is going to represent 1/3, how is 0.999… representing 1 any different?
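Spelling out the least-upper-bound definition from point 3 (notation following the comment's a0.a1a2a3…):

$a_0.a_1a_2a_3\ldots := \sup_{n \ge 1}\Bigl(a_0 + \sum_{k=1}^{n} \frac{a_k}{10^{k}}\Bigr), \qquad \text{so} \qquad 0.999\ldots = \sup_{n \ge 1}\bigl(1 - 10^{-n}\bigr) = 1$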
0.999… = 1 only if you accept the definition that was conceived to MAKE them equal.
As a stand-alone series, without the definition of "infinite series", I believe most people would agree they are not equal, because an infinite series of additions is meaningless on its own, which is why some sort of meaning was assigned to them: namely the limit, should it exist.
would the same hold true in hexadecimal or octal?
or in octal it would be 0.777… and in hex 0.FFF…
@dilip kumar
That’s exactly right. In base 8, 1=0.7777… In base 3, 1=0.22222…
The important thing is that the difference between the two numbers is zero (which means they’re the same number).
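The same calculation goes through in any base $b \ge 2$, which is why the base-8 and base-3 versions come out the same way:

$\sum_{n=1}^{\infty} \frac{b-1}{b^{n}} = (b-1)\cdot\frac{1/b}{1 - 1/b} = \frac{b-1}{b-1} = 1$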