Uses of Polynomials

So why is all this polystuff important? The main reason is that you can use polynomials to approximate functions, both from gathered data and from analytical functions. And since evaluating a polynomial requires only multiplications and additions, implementing one in an embedded system, for example, is straightforward.

Fitting polynomials to data is done using the function polyfit(x, y, n). Given a vector of x points and a vector of y points, polyfit(x, y, n) returns the coefficients of the polynomial of degree n (the highest power of x) that best fits the data points, in decreasing order of power. Another useful function is polyval(p, x); given a coefficient vector p, it returns the value of the polynomial at x (x can be a vector).
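For instance, here's a minimal sketch of the two functions working together; the sample points and the chosen degree are arbitrary values picked purely for illustration:

from pylab import *

# a handful of sample data points (arbitrary values for illustration)
x = array([0.0, 1.0, 2.0, 3.0, 4.0])
y = array([1.0, 0.5, 2.0, 4.5, 8.0])

# fit a second-degree polynomial; p holds the coefficients, highest power first
p = polyfit(x, y, 2)

# evaluate the fitted polynomial on a finer grid
xi = linspace(0, 4, 50)
yi = polyval(p, xi)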

Example: Linear Regression

A well-known curve-fitting algorithm is linear regression. The idea is to draw a straight line through the data in such a way that the sum of the squared vertical distances of all the points from the line is minimal.

For the purpose of this example, we'll create a straight line and then add "measurement noise" to the values. Confronted with the new "noisy" data, we'll fit a first-degree polynomial to it and compare the results with the known true values (see Listing 8-1).

Listing 8-1. Linear Regression with polyfit()

from pylab import *

# example parameters; A and B are chosen at random
N, start, end = 100, 0, 1
A, B = rand(), rand()

# a straight line plus "measurement noise"
x = linspace(start, end, N)
y = A*x + B
y += randn(N)/10

# fit a first-degree polynomial (a straight line) to the noisy data
p = polyfit(x, y, 1)

figure()
title('Linear regression with polyfit()')
plot(x, y, 'o', label='Measured data; A=%.2f, B=%.2f' % (A, B))
plot(x, polyval(p, x), label='Linear regression; A=%.2f, B=%.2f' % tuple(p))
legend(loc='best')

I've randomly selected two values for A and B, and constructed a straight line with noise using randn(). Then, I used polyfit() to fit the data to a first-degree polynomial, a straight line. Lastly, I plotted the data along with the fitted line. Figure 8-4 shows the results of this linear regression.
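As a quick cross-check on what polyfit() computes here, the slope and intercept of a least-squares line can also be obtained from the closed-form formulas. This is a minimal sketch that assumes the x and y arrays from Listing 8-1 are already defined:

# closed-form least-squares estimates for the line y = A*x + B
A_est = ((x - x.mean()) * (y - y.mean())).sum() / ((x - x.mean())**2).sum()
B_est = y.mean() - A_est*x.mean()

# these should agree with the coefficients returned by polyfit(x, y, 1)
print(A_est, B_est)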

Figure 8-4. Linear regression with polyfit() (measured data: A=0.59, B=0.42; linear regression: A=0.57, B=0.45)

Example: Linear Regression of Nonlinear Functions

In cases where the function you're trying to fit isn't linear, it's sometimes still possible to perform linear regression by transforming the data first.

The following is an example of fitting exponential data:

from pylab import *

# example parameters; the exponential's coefficients are chosen at random
N, start, end = 100, 0, 1
A, B = rand(), rand()

# exponential data plus "measurement noise"
x = linspace(start, end, N)
y = exp(A*x + B)
y += randn(N)/5

# linear regression on log(y), i.e., an exponential regression on y
p = polyfit(x, log(y), 1)

figure()
title(r'Linear regression with polyfit(), $y=Be^{Ax}$')
plot(x, y, 'o', label='Measured data; A=%.2f, B=%.2f' % (A, exp(B)))
plot(x, exp(polyval(p, x)), label='Linear regression; A=%.2f, B=%.2f' % (p[0], exp(p[1])))
legend(loc='best')

The regression is performed in the call to the function polyfit(). This time, I've passed x and log(y), so the linear regression on log(y) is effectively an exponential regression on y. You can see the results of this regression in Figure 8-5.

Figure 8-5. Fitting exponential data (measured data: A=0.93, B=1.49; linear regression: A=0.91, B=1.51)

Example: Approximating Functions with Polynomials

Another set of problems solvable with polyfit() is the approximation of functions using interpolation. The motivation is a simple implementation of known functions. For the purpose of this example, we'll approximate the function sin(x).

The idea is to create a polynomial that passes through known interpolation points—that is, calculate the value of sin(x) for n known values of x, and then create a polynomial of degree n - 1 that passes through all these points.

We start by selecting a set of points from 0 to pi/2; these will be our interpolation points. Values outside this range can be computed using trigonometric identities and the interpolation function. We select five points for interpolation, which fixes the degree of the interpolation polynomial at 4. Once the points are selected, we calculate the sine of these points.

For the purpose of this example, I've chosen sine values that can be easily computed using the sqrt() function. You might argue that I'm cheating here, because I'm using a nonlinear function (square root) to calculate sin(x) and not purely polynomials. But you've already seen how to calculate the square root of a number using Newton's method in Chapter 7.

■Tip The selection of interpolation points is an interesting topic, and work by the mathematician Pafnuty Chebyshev has contributed much to it. See http://en.wikipedia.org/wiki/Pafnuty_Chebyshev and http://en.wikipedia.org/wiki/Chebyshev_nodes.

The values I'll select for interpolation are 0, 30, 45, 60, and 90 degrees. The reason I chose these values is that I know their exact sine values: 0, 1/2, sqrt(2)/2, sqrt(3)/2, and 1, respectively. Or, in vector form:

>>> values = [0, pi/6, pi/4, pi/3, pi/2]
>>> sines = sqrt(arange(5))/2
>>> sines
array([ 0.        ,  0.5       ,  0.70710678,  0.8660254 ,  1.        ])

Given these, interpolation is straightforward:

>>> p = polyfit(values, sines, len(values)-1)
>>> p
array([  2.87971125e-02,  -2.04340696e-01,   2.13730075e-02,
         9.95626184e-01,   2.22044605e-16])

So if you were to implement sin(x), all you need to do is store the values of p given previously and then write a simple routine that calculates the value of sin(x) using the polynomial. If you're using NumPy, simply call polyval().
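Here is one way such a routine might look. It's a minimal sketch with some assumptions on my part: the coefficient vector p is the one computed above, the function name my_sin() is made up, and arguments outside [0, pi/2] are reduced using the identities sin(-x) = -sin(x) and sin(pi - x) = sin(x):

from pylab import *

# coefficients computed above with polyfit(values, sines, 4)
p = [2.87971125e-02, -2.04340696e-01, 2.13730075e-02,
     9.95626184e-01, 2.22044605e-16]

def my_sin(x):
    """Approximate sin(x) with the interpolation polynomial p."""
    # reduce x to [0, 2*pi)
    x = fmod(x, 2*pi)
    if x < 0:
        x += 2*pi
    # sin(x) = -sin(x - pi) maps [pi, 2*pi) onto [0, pi)
    sign = 1.0
    if x >= pi:
        sign, x = -1.0, x - pi
    # sin(x) = sin(pi - x) maps (pi/2, pi) onto (0, pi/2)
    if x > pi/2:
        x = pi - x
    # evaluate the polynomial -- multiplications and additions only
    return sign*polyval(p, x)

For example, my_sin(5*pi/6) should come out very close to 0.5.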

Let's plot the difference between our implementation of sin(x) and Python's built-in sin(x) function:

>>> x = linspace(0, pi/2, 100)  # evaluation points (any dense grid will do)
>>> plot(x, polyval(p, x)-sin(x), label='error', lw=3)
>>> grid()
>>> ylabel('polyval(p, x)-sin(x)')
>>> xlabel('x')
>>> title('Error approximating sin(x) using polyfit()')
>>> xlim(0, pi/2)

Figure 8-6 illustrates this difference.

Figure 8-6. Interpolation accuracy: error approximating sin(x) using polyfit()

The results are quite accurate: the error is less than 0.0003 at worst.
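To check that figure numerically, you can evaluate the maximum absolute error directly. A small sketch, assuming p and the x grid from the plotting session above are still defined:

# maximum absolute error of the polynomial approximation over [0, pi/2]
print(abs(polyval(p, x) - sin(x)).max())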
