8.5 Monte Carlo Integration

One of the earliest applications of random numbers was numerical computation of integrals, that is, a non-random (deterministic) problem. Here we shall address two related methods for computing $\int_a^b f(x)\,dx$.

8.5.1 Standard Monte Carlo Integration

Let $x_1, x_2, \ldots, x_n$ be uniformly distributed random numbers between $a$ and $b$. Then

$$
(b - a)\,\frac{1}{n}\sum_{j=1}^{n} f(x_j) \qquad (8.7)
$$

is an approximation to the integral $\int_a^b f(x)\,dx$. This method is usually referred to as Monte Carlo integration. It is easy to interpret (8.7). A well-known result from calculus is that the integral of a function $f$ over $[a, b]$ equals the mean value of $f$ over $[a, b]$ multiplied by the length of the interval, $b - a$. If we approximate the mean value of $f(x)$ by the mean of $n$ randomly distributed function evaluations $f(x_j)$, we get the method (8.7).

We can implement (8.7) in a small function:

import random as random_number

def MCint(f, a, b, n):
    s = 0
    for i in range(n):
        x = random_number.uniform(a, b)
        s += f(x)
    I = (float(b-a)/n)*s
    return I

One normally needs a large n to obtain good results with this method, so a faster vectorized version of the MCint function is handy:

from numpy import *

def MCint_vec(f, a, b, n):
    x = random.uniform(a, b, n)
    s = sum(f(x))
    I = (float(b-a)/n)*s
    return I
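As a quick sanity check (not part of the original text), both versions can be called on an integral with a known value, $\int_0^\pi \sin x\,dx = 2$; the function name g and the sample size n below are illustrative choices:

# usage sketch for the two functions above; g and n are illustrative
from numpy import sin, pi

def g(x):
    return sin(x)             # works for scalars and numpy arrays

n = 100000
print MCint(g, 0, pi, n)      # both estimates should be close to 2
print MCint_vec(g, 0, pi, n)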

Let us try the Monte Carlo integration method on a simple linear function f(x) = 2 + 3x, integrated from 1 to 2. Most other numerical integration methods will integrate such a linear function exactly, regardless of the number of function evaluations. This is not the case with Monte Carlo integration. It would be interesting to see how the quality of the Monte Carlo approximation increases with n. To plot the evolution of the integral approximation we must store intermediate I values. This requires a slightly modified MCint method:

# store the intermediate integral approximations in an
# array I, where I[k-1] corresponds to k function evals.
I = zeros(n)
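A complete version of this modified function might look as follows. This is a sketch: the function name MCint2 and the loop details are filled in here to match the description and may differ from the original.

import random as random_number
from numpy import zeros

def MCint2(f, a, b, n):
    s = 0
    # store the intermediate integral approximations in an
    # array I, where I[k-1] corresponds to k function evals.
    I = zeros(n)
    for k in range(1, n+1):
        x = random_number.uniform(a, b)
        s += f(x)
        I[k-1] = (float(b-a)/k)*s
    return I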

Note that we let k go from 1 to n while the indices in I, as usual, go from 0 to n-1. Since n can be very large, the I array may consume more memory than what we have on the computer. Therefore, we decide to store only every N-th value of the approximation. Whether a value is to be stored or not can then be determined with the mod function (see page 423 or Exercise 2.45):
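As a small, self-contained illustration (not from the original text) of how the mod operation singles out every N-th value of k:

# k % N is zero exactly when k is divisible by N
N = 4
for k in range(1, 13):
    if k % N == 0:
        print k               # prints 4, 8, 12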

That is, every time k can be divided by N without any remainder, we store the value. The complete function takes the following form:

def MCint3(f, a, b, n, N):
    s = 0
    # store every N-th intermediate integral approximation in a
    # list I_values and record the corresponding k value
    I_values = []
    k_values = []
    for k in range(1, n+1):
        x = random_number.uniform(a, b)
        s += f(x)
        if k % N == 0:
            I = (float(b-a)/k)*s
            I_values.append(I)
            k_values.append(k)
    return k_values, I_values

Now we have the tools to plot the error in the Monte Carlo approximation as a function of n:

def f1(x):
    return 2 + 3*x

k, I = MCint3(f1, 1, 2, 1000000, N=10000)

from scitools.std import plot
error = 6.5 - array(I)
plot(k, error, title='Monte Carlo integration',
     xlabel='n', ylabel='error')

Figure 8.4 shows the resulting plot.

Fig. 8.4 The convergence of Monte Carlo integration applied to $\int_1^2 (2 + 3x)\,dx$.

For functions of one variable, method (8.7) requires many points and is inefficient compared to other integration rules. Most integration rules have an error that decreases with increasing n, typically like $n^{-r}$ for some $r > 0$. For the Trapezoidal rule, $r = 2$, while $r = 1/2$ for Monte Carlo integration, which means that this method converges quite slowly compared to the Trapezoidal rule. However, for functions of many variables, Monte Carlo integration in high space dimension completely outperforms methods like the Trapezoidal rule and Simpson's rule. There are also many ways to improve the performance of (8.7), basically by being "smart" in drawing the random numbers (these are called variance reduction techniques).
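As a rough numerical check (not part of the original text), one can estimate the empirical rate r in error ≈ Cn^{-r} by fitting a straight line to log(error) versus log(n). The averaging over 20 repetitions and the redefinitions of f1 and MCint_vec below are illustrative choices made to keep the snippet self-contained; the observed rate fluctuates somewhat from run to run.

from numpy import random, sum, log, polyfit, array, mean

def f1(x):                    # test integrand, exact integral on [1,2] is 6.5
    return 2 + 3*x

def MCint_vec(f, a, b, n):    # redefined here for a self-contained snippet
    x = random.uniform(a, b, n)
    return (float(b-a)/n)*sum(f(x))

n_values = [10**k for k in range(2, 7)]
mean_errors = []
for n in n_values:
    # average the error over 20 independent runs to reduce the noise
    errors = [abs(MCint_vec(f1, 1, 2, n) - 6.5) for repeat in range(20)]
    mean_errors.append(mean(errors))
# the slope of log(error) vs log(n) approximates -r
slope, intercept = polyfit(log(array(n_values)), log(array(mean_errors)), 1)
print 'estimated convergence rate r:', -slope   # expect a value around 0.5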

8.5.2 Computing Areas by Throwing Random Points

Think of some geometric region G in the plane and a surrounding bounding box B with geometry $[x_L, x_H] \times [y_L, y_H]$. One way of computing the area of G is to draw N random points inside B and count how many of them, M, lie inside G. The area of G is then estimated as the fraction M/N (G's fraction of B's area) times the area of B, $(x_H - x_L)(y_H - y_L)$. Phrased differently, this method is a kind of dart game where you record how many hits there are inside G if every throw hits uniformly within B.
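As a concrete illustration (not taken from the original text), the same idea can estimate the area of the unit disc, whose exact value is $\pi \approx 3.14159$, using the bounding box $[-1, 1] \times [-1, 1]$; the function name disc_area_MC is hypothetical:

import random as random_number

def disc_area_MC(N):
    # B is the box [-1,1] x [-1,1]; G is the unit disc x**2 + y**2 <= 1
    M = 0   # number of random points that hit G
    for i in range(N):
        x = random_number.uniform(-1, 1)
        y = random_number.uniform(-1, 1)
        if x*x + y*y <= 1:
            M += 1
    return float(M)/N*4.0     # (M/N) times the area of B, which is 4

print disc_area_MC(1000000)   # should be close to pi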

Let us formulate this method for computing the integral $\int_a^b f(x)\,dx$. The important observation is that this integral is the area under the curve y = f(x) and above the x axis, between x = a and x = b. We introduce a rectangle B,

$$
B = \{(x, y) \,|\, a < x < b,\ 0 < y < m\},
$$

where $m \ge \max_{x\in[a,b]} f(x)$. The algorithm for computing the area under the curve is to draw N random points inside B and count how many of them, M, lie above the x axis and below the y = f(x) curve, see Figure 8.5. The area or integral is then estimated by

$$
\frac{M}{N}\, m\, (b - a).
$$

First we implement the "dart method" by a simple loop over points:

def MCint_area_loop(f, a, b, n, m):
    below = 0   # counter for no of points below the curve
    for i in range(n):
        x = random_number.uniform(a, b)
        y = random_number.uniform(0, m)
        if y <= f(x):
            below += 1
    area = below/float(n)*m*(b-a)
    return area

Note that this method draws twice as many random numbers as the previous method.

A vectorized implementation reads

Fig. 8.5 The "dart" method for computing integrals. When M out of N random points in the rectangle [0, 2] x [0, 2.4] lie under the curve, the area under the curve is estimated as the M/N fraction of the area of the rectangle, i.e., (M/N) · 2 · 2.4.

def MCint_area_vec(f, a, b, n, m):
    x = random.uniform(a, b, n)
    y = random.uniform(0, m, n)
    below = y[y < f(x)].size
    area = below/float(n)*m*(b-a)
    return area
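A possible call (an illustrative sketch, not from the original text) uses the test function f1(x) = 2 + 3x on [1, 2], where m must be at least the maximum of f on the interval, here 8; the argument values are assumptions:

def f1(x):                    # redefined for a self-contained snippet
    return 2 + 3*x

# assumed parameters: a=1, b=2, m=8 (>= max f on [1,2]), n=1000000
print MCint_area_vec(f1, 1, 2, 1000000, 8)   # should be close to 6.5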

Even for 2 million random numbers the plain loop version is not that slow, as it executes within a few seconds on a slow laptop. Nevertheless, if the integration is repeated many times inside another calculation, the superior efficiency of the vectorized version may be important. We can quantify the efficiency gain with the aid of the timer time.clock() in the following way (see Appendix E.6.1):

import time
t0 = time.clock()
MCint_area_loop(f1, a, b, n, m)   # a, b, n, m as in the test problem above
t1 = time.clock()                 # time of MCint_area_loop is t1-t0
MCint_area_vec(f1, a, b, n, m)
t2 = time.clock()                 # time of MCint_area_vec is t2-t1

print 'loop/vectorized fraction:', (t1-t0)/(t2-t1)

With $n = 10^6$ I achieved a factor of about 16 in favor of the vectorized version on an IBM laptop.
