I recently wrote a short Python program to calculate the factorial of a number as a test to see how much faster integer multiplication is compared to floating point multiplication. Imagine my surprise when I observed that it was the floating point multiplication that was faster! I'm puzzled by this and am hoping someone can enlighten me. I'm using exactly the same function for the factorial calculation and simply passing it a float versus an integer. Here is the code:

```
import time

def fact(n):
    n_fact = n
    while n > 2:
        n_fact *= n - 1
        n -= 1
    print(n_fact)
    return n_fact

n = int(input("Enter an integer for factorial calculation: "))
n_float = float(n)

# integer factorial
start = time.time()
fact(n)
end = time.time()
print("Time for integer factorial calculation: ", end - start, "seconds.")

# float factorial
start = time.time()
fact(n_float)
end = time.time()
print("Time for float factorial calculation: ", end - start, "seconds.")
```

When I run this program the results vary, but by and large the integer calculation comes out faster, which is counter to everything I thought I knew (keep in mind, I'm no expert). Is there something wrong with my method of timing the calculation? Do I need to run the calculation thousands of times to get a more accurate measure of the time? Any insight would be appreciated.
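On the question of repeating the calculation: a single `time.time()` measurement is dominated by noise (and here it also times the `print` inside the function). One way to get a steadier number is to repeat the call many times and average. A minimal sketch, where the helper name and repeat count are my own choices:

```python
import time

def fact(n):
    # same function as in the question, minus the print so that
    # console I/O is not included in the measurement
    n_fact = n
    while n > 2:
        n_fact *= n - 1
        n -= 1
    return n_fact

def average_seconds(func, arg, repeats=10000):
    # call func(arg) `repeats` times and return the mean seconds per call
    start = time.perf_counter()
    for _ in range(repeats):
        func(arg)
    return (time.perf_counter() - start) / repeats

n = 30
print("int:  ", average_seconds(fact, n))
print("float:", average_seconds(fact, float(n)))
```

`time.perf_counter` is preferred over `time.time` for intervals because it uses the highest-resolution monotonic clock available.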

You should use `timeit` to benchmark running times; it is possible that your results are wrong using this method.

In your first paragraph you say you found `float` operations to be faster, whereas in the last paragraph you said the `int` operations were faster. Which case did you observe? When I time your function with `timeit` I see integers being faster up to about `n = 50`, above which there is a small edge in favour of floating point operations (which I qualitatively would expect given the fixed-size nature of `float`s vs the unlimited-size `int`s in Python). (NB anything above `n = 170` exceeds the range of `float` values.)

A Python `float` is a C `double` under the hood; depending on whether you are working with large numbers, this could affect your results.