The standard format for computers to store real numbers is called binary floating point, which is different from the format most people are used to. For example, if you type 0.1 into Python's interpreter, you see that Python's internal representation of the number is different from what you typed:
>>> 0.1
0.10000000000000001
TECHNICAL STUFF This happens because a computer can't accurately represent some decimal numbers as floats. For more information than you really need to know about why, see Appendix B of the Python Tutorial at http://www.python.org. (For detail nuts only: The decimal numbers that can be represented exactly are those whose fractional part is a sum of powers of one-half, that is, negative powers of two.)
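A short sketch of the point above: fractions built from halves, quarters, eighths, and so on are stored exactly, while a number like 0.1 is silently rounded to the nearest binary fraction, and the tiny error shows up when you add such values together.

```python
# 0.875 = 1/2 + 1/4 + 1/8 is a sum of powers of one-half,
# so it is stored exactly and the comparison is True.
exact = 0.5 + 0.25 + 0.125
print(exact == 0.875)   # True: no rounding occurred

# 0.1 has no exact binary representation, so each copy
# carries a tiny rounding error that accumulates.
total = 0.1 + 0.1 + 0.1
print(total == 0.3)     # False
print(total)            # 0.30000000000000004
```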
True division, in which / always returns a float, became the default in Python 3.0; for integer division, use the // operator instead.
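A minimal illustration of the two division operators in Python 3:

```python
# / always performs true (floating point) division in Python 3.
print(7 / 2)    # 3.5

# // performs floor division, rounding toward negative infinity.
print(7 // 2)   # 3
print(-7 // 2)  # -4, not -3: the result is floored, not truncated
```

Note that // floors rather than truncates, which matters for negative operands.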