1/3 = .333…

1/3 + 1/3 + 1/3 = 3/3 = 1

.333… + .333… + .333… = .999…

.999… = 1
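
A sketch of the standard argument, for reference (this uses the definition of a repeating decimal as a geometric series; it is the usual justification, not an extra assumption):

```latex
0.999\ldots \,=\, \sum_{n=1}^{\infty} \frac{9}{10^n}
            \,=\, \frac{9/10}{1 - 1/10}
            \,=\, 1
```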

Discuss

  • Saik0A · 11 months ago

    I don’t really think I’m “cherry picking”

    You really are.

    Virtually my whole last paragraph was ignored in my original comment. But you keep doing you.

    • myslsl@lemmy.world · edited · 11 months ago

      I’m cherry picking, yet you cherry picked the sentence “I don’t really think I’m cherry picking” over the entirety of my previous comment to you?

      Virtually my whole last paragraph was ignored in my original comment.

      Did you not read the entire last paragraph of my first comment where I directly quoted and responded to the last paragraph of your original comment? Here, let me quote it for you. I see reading is not your strong suit.

      Quote I took from your last paragraph:

      But I do think it throws a wrench in other parts of math if we assume it’s universally true. Just like in programming languages… it’s primarily in float math that these types of issues crop up a lot; we don’t just assume that the 3.999999… is accurate, but rather that it intended 4 from the get-go, primarily because of the limits of the space we put the number in.

      My response:

      It definitely doesn’t throw a wrench into things in other parts of math (at least not in the sense of there being weird murky contradictions hiding in math due to something like this). IEEE floats just aren’t comparable. With IEEE floats you always have some finite collection of bits representing some number. The arrangement is similar to how we do scientific notation, but with a few weird quirks (like offsets in the exponent, for example) that make it kinda different. But there are only finitely many different numbers that these kinds of standards can represent, since there are only finitely many bit patterns for your finite number of bits. The base 10 representation of a number does not have the same restriction on the number of digits you can use to represent numbers. When you write 0.999…, there aren’t just a lot (but finitely many) 9’s after the decimal point; there are infinitely many 9’s after the decimal point.
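
      (A concrete illustration of the finite-bits point, as a sketch in Python; the hex string is just the raw IEEE 754 double bit pattern behind the value:)

```python
import struct

# Every Python float is a 64-bit IEEE 754 double: one fixed, finite bit pattern.
# 1/3 cannot be represented exactly, so it is rounded to the nearest double.
x = 1 / 3
print(struct.pack(">d", x).hex())  # the 8-byte bit pattern behind "0.333..."
print(f"{x:.20f}")                 # 0.33333333333333331483 (finitely many digits)

# Classic rounding artifact: the stored values are approximations.
print(0.1 + 0.2 == 0.3)            # False
print(f"{0.1 + 0.2:.20f}")         # 0.30000000000000004441
```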

      In a programming context, once you start using floating point math you should avoid using direct equality at all, and instead work within some particular error bound specified by what kind of accuracy your problem needs. You might be able to get away with equating 4.000001 and 4 in some contexts, but in other contexts that extra 0.000001 might be significant. Ignoring these kinds of distinctions has historically been the cause of many weird and subtle bugs.
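
      (A minimal sketch of the error-bound comparison, using the standard library’s math.isclose; the tolerances here are arbitrary examples, not recommendations:)

```python
import math

a, b = 4.000001, 4.0

# Direct equality on floats is fragile:
print(a == b)                            # False

# Compare within an explicit tolerance chosen for the problem at hand.
print(math.isclose(a, b, rel_tol=1e-5))  # True: 1e-6 of slack is acceptable here
print(math.isclose(a, b, rel_tol=1e-9))  # False: at this tolerance it matters
```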

      Quote I took from your last paragraph:

      I have no reason to believe that this isn’t the case for our base10 numbering systems either.

      My response:

      The issue here is that you don’t understand functions, limits, base expansions of numbers or what the definition of notation like 0.999… actually is.

      But you keep doing you.

      Lmao, be sure to work on that reading comprehension problem of yours.

      What are you even expecting? How am I supposed to read your mind and respond to all the super important and deep points you think you’ve made by misunderstanding basic arithmetic and calculus? Maybe the responsibility is on you to raise those points if you want further response from me on them and not on me to somehow just magically know what you want?

        • Saik0A · 11 months ago

        I’m cherry picking, yet you cherry picked the sentence “I don’t really think I’m cherry picking” over the entirety of my previous comment to you?

        Nope. Because I’m establishing that you’re picking and choosing which parts of my original argument to respond to. That makes ALL of your responses kind of pointless.

        The issue here is that you don’t understand functions, limits, base expansions of numbers or what the definition of notation like 0.999… actually is.

        Which is taken in bad faith. Not only do you resort to ad hominem (you have no idea what I know about functions and limits, yet you go out of your way to claim that I know nothing), but you fundamentally DO NOT UNDERSTAND what I’ve said. Instead you choose to be a prick about it and act like you have some understanding of math that nobody else in the room does. You’re no Einstein, guy. You can’t even understand the premise that I put forth and address it. You stick your fingers in your ears, scream “LALALALALALA”, and DON’T ADDRESS WHAT I SAID.

        try and take the limit I mentioned

        No… I’ve presented the problematic limit… You IGNORED this and just asserted your own thing. This is tantamount to saying “the sky is blue” when I show you a picture of a sky that happens to be purple or red. Your limit (sky) isn’t the one I’m concerned about… that’s not the discussion.

        So let’s break it down Barney-style for you.

        You claim 0.999… IS 1. Fine. My claim is that it “really” isn’t (in the mildest/most literal sense), but that it’s likely sufficient enough that it doesn’t matter. I believe that it’s a fundamental “glitch” in the base 10 numbering system. This problem DOESN’T exist in base 12 or other bases divisible by whatever number we’re talking about.

        Then I proceed to back up this “idea” with a similar but easier to parse example: floating point numbers, where you can commonly get 4.0…1 or 3.9…9 (and keep in mind that this is only finite because of the space limitation of the computer’s memory… this would otherwise be 3.999…) and both can, and usually do, mean 4. The idea behind that being that the numbering system is simply unable to represent the intended value.

        Now… to address this point… can you prove that this premise doesn’t hold for your claim? Because my response is simply that base 12 doesn’t have the issue with (1/3)*3 when parsed in non-fractional notation. Now you TRIED to address this by saying that since we can assign places in decimal notation and can feasibly go as far out to the right as we want… we CAN represent it accurately. My counterargument would be that we do not have a sufficient mechanism to represent it, as evidenced by the fact that changing our notation to another base doesn’t have this “flaw”.

        Now remember… I said from the get-go “I do accept that we can just presume 0.999… can just be assumed 1”, as there is no REASONABLE case where 1.9999999999999999999… isn’t just 2. The real question becomes: if we assign 1.999… = 2 definitionally, is there ANY mathematical case where this causes an issue? Since we can’t prove a negative… it’s a moot question. It works for everything… except it becomes weirdly questionable in cases of limits where there is a hole in the function.
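
        (A quick check of the base-12 claim, as a sketch in Python using exact rational arithmetic; the digit-expansion helper is hypothetical, written only for this illustration:)

```python
from fractions import Fraction

def expand(x: Fraction, base: int, max_digits: int) -> str:
    """Expand the fractional part of x into digits of the given base."""
    out = []
    for _ in range(max_digits):
        x *= base
        d = int(x)                        # next digit in this base
        out.append(f"{d:X}" if d > 9 else str(d))
        x -= d
        if x == 0:                        # expansion terminates exactly
            break
    return "0." + "".join(out)

third = Fraction(1, 3)
print(expand(third, 10, 12))  # 0.333333333333 -> repeats forever in base 10
print(expand(third, 12, 12))  # 0.4            -> terminates in base 12
```

        The same phenomenon just moves, though: 1/5 repeats forever in base 12 (0.2497 2497…), and the base-12 analogue of 0.999… = 1 still holds, since 0.BBB… = Σ 11/12^n = 1.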

        Further… Any definition of limits you use to evaluate 0.999… = 1, I can also apply to the previous examples of functions where there are holes. That was the point of my “I know the proof… The only thing I’ve never had anyone clean up appropriately is that limits disprove that this is the case” comment, and funny enough… even you haven’t done it. You’ve not sufficiently explained it.
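
        (For reference, a standard textbook example of the kind of “hole” limit at issue; the function g is just an illustrative choice, not one from the thread:)

```latex
g(x) = \frac{x^2 - 1}{x - 1} \quad (x \neq 1),
\qquad \lim_{x \to 1} g(x) = \lim_{x \to 1} (x + 1) = 2
```

        The limit exists and equals 2 even though g(1) itself is undefined. In the same sense, the notation 0.999… is defined as the limit of the partial sums 0.9, 0.99, 0.999, …, and that limit is exactly 1.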

        Instead of being a dick… Learn to actually talk to people. I have no interest in continuing discourse with you. You’ve shown to argue in bad faith from the get-go.

        It’s funny: even in the wiki article you linked, people admit that it’s not foolproof.

        and Timothy Gowers argues in Mathematics: A Very Short Introduction that the resulting identity 0.999… = 1 is a convention as well: “However, it is by no means an arbitrary convention, because not adopting it forces one either to invent strange new objects or to abandon some of the familiar rules of arithmetic.”[49]

        So the reason for acceptance is that some rules go silly otherwise? I can agree with that… but that doesn’t mean it makes fully logical sense. The funny part is… this wiki page even covers why I believe what I do…

        https://en.wikipedia.org/wiki/0.999…#In_alternative_number_systems

        “Revisiting subtraction” in the above link is where I sit logically on the matter (http://math.fau.edu/richman/docs/999.pdf is an actual source/copy of the original article… yes, I actually do understand math, and yes, I read it a long time ago, regardless of your attempts to denigrate me). Note that the term used in the paper is “skeptic”, not because I don’t believe that it works, but because I don’t believe that it is a law of math that is necessarily provable in the universe as we know it.

        Any calculable 0.999… value you place into the limits that I gave as examples in my original post does NOT come out to 1 unless you make the jump to using some other form of math to “prove” that 0.999… = 1. So you can tell me that 1/3 can be properly evaluated in decimal… I’m not the only one who thinks otherwise.

        But yeah… I’m just some fucking moron on the internet in your mind. So I’m sure you’ll tell me how I’m all wrong, even though there is legitimate discourse on the matter still in the mathematics community and I’m not the newest person on the block to think the way I do.
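
        (On the “any calculable 0.999… value” point: a sketch in Python with exact rationals, showing that every finite truncation falls short of 1 by exactly 1/10^n, while the notation 0.999… refers to the limit of these truncations; the loop bound is arbitrary:)

```python
from fractions import Fraction

# Each finite truncation 0.9, 0.99, 0.999, ... is a partial sum, not the limit.
partial = Fraction(0)
for n in range(1, 8):
    partial += Fraction(9, 10**n)
    print(f"n={n}: partial sum = {partial}, gap to 1 = {1 - partial}")

# The gap 1/10**n shrinks below any positive bound, so the limit of the
# partial sums (which is what the notation denotes) is exactly 1.
```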