For me, and I suspect for programmers in general (as the author presumably anticipated), the concept of floating point numbers as a pure construct, without having to cram them into n bits, was easy to grasp, even relaxing.
Perhaps that was the intention there.
As for choosing polynomials in general as a starting point, a possible answer is found in the text:
> Polynomials occur with stunning ubiquity across mathematics.
That is, beyond the basic arithmetic that readers surely already know, it's a logical place to begin.
If you read my argument... I'm not against polynomials. I'm against using "real" numbers for programming-related examples.
Floating point numbers with arbitrary precision are not the same thing as "real" numbers; they are still rational numbers. See, even you made this mistake, and not surprisingly: it will confuse programmers reading such examples, or, even worse, give them a false sense of understanding.
Arbitrary precision floating point numbers exist and are quite tangible, if perhaps not very common, and that is the concept that's easy to grasp. They behave like almost any other number you know: you can add them, multiply them, take their natural logarithm, and so on.
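
For instance (my own sketch, not from the original post), Python's standard `decimal` module gives you arbitrary precision decimal floats: the precision is whatever you set, but every value is still a rational number with finitely many digits.

```python
from decimal import Decimal, getcontext

getcontext().prec = 50          # 50 significant digits, chosen arbitrarily here

x = Decimal(2).sqrt()           # a 50-digit approximation of sqrt(2), still rational
y = Decimal("3.5")

print(x + y)                    # addition works as usual
print(x * y)                    # so does multiplication
print(y.ln())                   # natural logarithm, to 50 digits
```

Note that even `Decimal(2).sqrt()` here is a rational approximation, which is exactly the distinction from "real" numbers above.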
You cannot do any of those operations on "real" numbers in a computer because of the way arithmetic algorithms work: to add two numbers you'd have to start with the least significant digit (to know whether there's a carry), but a real number has infinitely many digits, so there is no least significant digit to start from.
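
Here's a toy sketch of that carry problem (my own illustration, not from the thread), modelling a real in [0, 1) as an infinite stream of decimal digits and trying to decide the integer part of a sum by scanning from the most significant end:

```python
def third():       # digits of 1/3 = 0.333...
    while True:
        yield 3

def two_thirds():  # digits of 2/3 = 0.666...
    while True:
        yield 6

def integer_part_of_sum(xs, ys, max_digits=1000):
    """Try to decide the integer part of x + y (both in [0, 1))
    by reading digit pairs from the most significant end."""
    for _, x, y in zip(range(max_digits), xs, ys):
        s = x + y
        if s <= 8:    # even a carry from below can't reach the integer part
            return 0
        if s >= 10:   # a carry is generated and propagates up through the 9s
            return 1
        # s == 9: the decision is deferred, keep reading digits
    return None       # still undecided after max_digits digits

print(integer_part_of_sum(third(), two_thirds()))  # None: 1/3 + 2/3 never settles
```

No finite number of digits ever rules a carry in or out, so the very first digit of 1/3 + 2/3 can never be emitted from the digit streams alone.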