Exercise 8.1. Flip a coin N times.

Make a program that simulates flipping a coin N times. Print out "tail" or "head" for each flip and let the program count the number of heads. (Hint: Use r = random.random() and define head as r <= 0.5, or draw an integer among {1, 2} with r = random.randint(1, 2) and define head when r is 1.) Name of program file: flip_coin.py.
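A minimal sketch of such a program, following the first hint (the function name and printed strings are illustrative choices, not prescribed by the exercise):

```python
import random

def flip_coins(N):
    """Flip a coin N times; print the outcomes and return the number of heads."""
    heads = 0
    for _ in range(N):
        r = random.random()
        if r <= 0.5:
            print('head')
            heads += 1
        else:
            print('tail')
    return heads

heads = flip_coins(10)
print('number of heads:', heads)
```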

What is the probability of getting a number between 0.5 and 0.6 when drawing uniformly distributed random numbers from the interval [0, 1)? To answer this question empirically, let a program draw N such random numbers using Python's standard random module, count how many of them, M, fall in the interval (0.5, 0.6), and compute the probability as M/N. Run the program with the four values N = 10^i for i = 1, 2, 3, 6. Name of program file: compute_prob.py.
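A possible sketch of the estimation (the loop over the four N values mirrors the exercise text; the function name is an assumption):

```python
import random

def compute_prob(N):
    """Estimate the probability that a uniform number in [0, 1)
    falls in the interval (0.5, 0.6)."""
    M = sum(1 for _ in range(N) if 0.5 < random.random() < 0.6)
    return M / float(N)

for i in 1, 2, 3, 6:
    N = 10**i
    print('N=%7d: probability approx %g' % (N, compute_prob(N)))
```

The exact answer is 0.1, and the estimates should approach it as N grows.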

Suppose we have eight different colors. Make a program that chooses one of these colors at random and writes out the color. Hint: Use a list of color names and use the choice function in the random module to pick a list element. Name of program file: choose_color.py.

Suppose there are 40 balls in a hat, of which 10 are red, 10 are blue, 10 are yellow, and 10 are purple. What is the probability of getting two blue and two purple balls when drawing 10 balls at random from the hat? Name of program file: 4balls_from10.py.

Exercise 8.5. Probabilities of rolling dice.

1. You throw a die. What is the probability of getting a 6?

2. You throw a die four times in a row. What is the probability of getting a 6 every time?

3. Suppose you have thrown the die three times with 6 coming up all times. What is the probability of getting a 6 in the fourth throw?

4. Suppose you have thrown the die 100 times and experienced a 6 in every throw. What do you think about the probability of getting a 6 in the next throw?

First try to solve the questions from a theoretical or common sense point of view. Thereafter, make functions for simulating cases 1, 2, and 3. Name of program file: rolling_dice.py.

Exercise 8.6. Estimate the probability in a dice game.

Make a program for estimating the probability of getting at least one 6 when throwing n dice. Read n and the number of experiments from the command line. (To verify the program, you can compare the estimated probability with the exact result 11/36 when n = 2.) Name of program file: one6_2dice.py.
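The core of such a program could look as follows (command-line reading via sys.argv is omitted here for brevity; the function name is an illustrative choice):

```python
import random

def estimate_at_least_one_six(ndice, nexperiments):
    """Estimate the probability of getting at least one 6
    when throwing ndice dice."""
    hits = 0
    for _ in range(nexperiments):
        if any(random.randint(1, 6) == 6 for _ in range(ndice)):
            hits += 1
    return hits / float(nexperiments)

p = estimate_at_least_one_six(2, 100000)
print('estimated:', p, 'exact:', 11/36.0)
```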

Somebody suggests the following game. You pay 1 unit of money and are allowed to throw four dice. If the sum of the eyes on the dice is less than 9, you win 10 units of money, otherwise you lose your investment. Should you play this game? Answer the question by making a program that simulates the game. Name of program file: sum9_4dice.py.

It turns out that the game in Exercise 8.7 is not fair, since you lose money in the long run. The purpose of this exercise is to adjust the winning award so that the game becomes fair, i.e., that you neither lose nor win money in the long run.

Make a program that computes the probability p of getting a sum less than s when rolling n dice. Name of program file: sum_s_ndice_fair.py.

If the cost of each game is q units of money, the game is fair if the payment in case you win is r = q/p. Run the program you made for s = 9 and n = 4, which corresponds to the game in Exercise 8.7, and compute the corresponding p. Modify the program from Exercise 8.7 so that the award is r = 1/p, and run that program to see that now the game is fair, i.e., you neither win nor lose money in a large number of games.

Explanation. The formula for a fair game can be developed as follows. Let p = M/N be the probability of winning, which means that you in the long run win M out of N games. The cost is Nq and the income is Mr. To make the net income Mr - Nq zero, which is the requirement of a fair game, we get r = qN/M = q/p. (This reasoning is based on common sense and an intuitive interpretation of probability. More precise reasoning from probability theory will introduce the game as an experiment with two outcomes: either you win with probability p or lose with probability 1 - p. The expected payment is then the sum of probabilities times the corresponding net incomes: -q(1 - p) + (r - q)p (recall that the net income in a winning game is r - q). A fair game has zero expected payment, i.e., r = q/p.)

Make a computer program for throwing two dice a large number of times. Record the sum of the eyes each time and count how many times each of the possibilities for the sum (2, 3, ..., 12) appears. A dictionary with the sum as key and count as value is convenient here. Divide the counts by the total number of trials such that you get the frequency of each possible sum. Write out the frequencies and compare them with exact probabilities. (To find the exact probabilities, set up all the 6 x 6 possible outcomes of throwing two dice, and then count how many of them have a sum s for s = 2, 3, ..., 12.) Name of program file: freq_2dice.py.
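A sketch with both the dictionary-based frequency count and the exact probabilities from the 6 x 6 outcome table (the number of trials is an arbitrary choice):

```python
import random

def sum_frequencies(ntrials):
    """Frequencies of the sum of two dice over ntrials throws."""
    counts = {s: 0 for s in range(2, 13)}
    for _ in range(ntrials):
        s = random.randint(1, 6) + random.randint(1, 6)
        counts[s] += 1
    return {s: counts[s] / float(ntrials) for s in counts}

# exact probabilities from the 36 equally likely outcomes
exact = {s: sum(1 for d1 in range(1, 7) for d2 in range(1, 7)
                if d1 + d2 == s) / 36.0
         for s in range(2, 13)}

freq = sum_frequencies(100000)
for s in range(2, 13):
    print('%2d: computed %.4f  exact %.4f' % (s, freq[s], exact[s]))
```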

Exercise 8.10. Compute the probability of drawing balls.

A hat has 20 balls, 5 red, 5 yellow, 5 green, and 5 brown. We draw n > 3 balls at random. What is the probability of getting

• at least one red and one brown ball?

• exactly one red ball? exactly two red balls?

• at least three green balls?

Use Monte Carlo simulation to compute the probabilities and write out the answers to the four questions for n = 3, 5, 7, 10, 15. Name of program file: draw_balls.py.
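One possible Monte Carlo sketch (the helper names and the default number of experiments are illustrative assumptions):

```python
import random

def draw(hat, n):
    """Draw n balls at random, without replacement, from a copy of hat."""
    hat = list(hat)
    return [hat.pop(random.randrange(len(hat))) for _ in range(n)]

def estimate(n, N=10000):
    """Estimate the four probabilities for drawing n balls, N experiments."""
    hat = ['red']*5 + ['yellow']*5 + ['green']*5 + ['brown']*5
    hits = [0, 0, 0, 0]   # one counter per question
    for _ in range(N):
        balls = draw(hat, n)
        if 'red' in balls and 'brown' in balls:
            hits[0] += 1
        if balls.count('red') == 1:
            hits[1] += 1
        if balls.count('red') == 2:
            hits[2] += 1
        if balls.count('green') >= 3:
            hits[3] += 1
    return [h / float(N) for h in hits]

for n in 3, 5, 7, 10, 15:
    print(n, estimate(n))
```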

Exercise 8.11. Compute the probability of hands of cards.

Use the Deck.py module (in src/random) and the same_rank and same_suit functions from the cards module to compute the following probabilities by Monte Carlo simulation:

• exactly two pairs among five cards,

• four or five cards of the same suit among five cards,

• four-of-a-kind among five cards.

Name of program file: card_hands.py.

Exercise 8.12. Play with vectorized boolean expressions.

Using the numpy.random module, make an array r containing N uniformly distributed random numbers between 0 and 1. Print out the arrays r <= 0.5, r[r <= 0.5], where(r <= 0.5, 1, 0) and convince yourself that you understand what these arrays express. We want to compute how many of the elements in r are less than or equal to 0.5. How can this be done in a vectorized way, i.e., without explicit loops in the program, but solely with operations on complete arrays? Name of program file: bool_vec.py.
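A small sketch of the three arrays and one vectorized way to do the counting (summing a boolean array counts its True elements):

```python
import numpy as np

N = 20
r = np.random.random(N)          # N uniform numbers in [0, 1)

print(r <= 0.5)                  # boolean array, element-wise test
print(r[r <= 0.5])               # the elements satisfying the condition
print(np.where(r <= 0.5, 1, 0))  # 1 where the condition holds, else 0

# vectorized count: True counts as 1 when summed
count = np.sum(r <= 0.5)
print(count)
```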

Exercise 8.13. Vectorize the program from Exer. 8.1.

Simulate flipping a coin N times and write out the number of tails. The code should be vectorized, i.e., there must be no loops in Python. Hint: Use ideas from Exercise 8.12. Name of program file: flip_coin_vec.py.

The purpose of this exercise is to speed up the code in Exercise 8.2 by vectorization. Hint: First draw an array r with a large number of random numbers in [0, 1). The simplest way to count how many elements in r lie between 0.5 and 0.6 is to first extract the elements larger than 0.5: r1 = r[r > 0.5], and then extract the elements in r1 that are less than 0.6 and get the size of this array: r1[r1 <= 0.6].size. Name of program file: compute_prob_vec.py.

Remark. An alternative and more complicated method is to use the where function. The condition (the first argument to where) is now a compound boolean expression 0.5 <= r <= 0.6, but this cannot be used with NumPy arrays. Instead one must test for 0.5 <= r and r <= 0.6. The needed boolean construction in the where call is operator.and_(0.5 <= r, r <= 0.6). See also the discussion of the same topic in Chapter 4.4.1.

Exercise 8.15. Throw dice and compute a small probability.

Compute the probability of getting 6 eyes on all dice when rolling 7 dice. Since you need a large number of experiments in this case (see the first paragraph of Chapter 8.3), you can save quite some simulation time by using a vectorized implementation. Name of program file: roll_7dice.py. o

Exercise 8.16. Difference equation for random numbers.

Simple random number generators are based on simulating difference equations. Here is a typical set of two equations:

x_n = (a*x_{n-1} + c) mod m,   (8.14)
y_n = x_n/m,                   (8.15)

for n = 1, 2, .... A seed x_0 must be given to start the sequence. The numbers y_1, y_2, ... represent the random numbers, and x_0, x_1, ... are "help" numbers. Although y_n is completely deterministic from (8.14)-(8.15), the sequence y_n appears random. The mathematical expression p mod q is coded as p % q in Python.

Use a = 8121, c = 28411, and m = 134456. Solve the system (8.14)-(8.15) in a function that generates and returns N random numbers. Make a histogram to examine the distribution of the numbers (the y_n numbers are randomly distributed if the histogram is approximately flat). Name of program file: diffeq_random.py.
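A sketch of the generator with the given parameters, assuming the standard linear congruential form x_n = (a*x_{n-1} + c) mod m, y_n = x_n/m (the seed value and the number of histogram bins are arbitrary choices):

```python
import numpy as np

def random_numbers(N, x0=1, a=8121, c=28411, m=134456):
    """Generate N numbers y_n = x_n/m from x_n = (a*x_{n-1} + c) % m."""
    x = x0
    y = np.zeros(N)
    for n in range(N):
        x = (a*x + c) % m
        y[n] = x / float(m)
    return y

y = random_numbers(10000)
counts, bin_edges = np.histogram(y, bins=20)
print(counts)   # roughly equal counts indicate a flat histogram
```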

Exercise 8.17. Make a class for drawing balls from a hat.

Consider the example about drawing colored balls from a hat in Chapter 8.3.3. It could be handy to have an object that acts as a hat:

# make a hat with balls of 3 colors, each color appearing in 4 balls

hat = Hat(colors=('red', 'black', 'blue'), number_of_each_color=4)

# draw 3 balls at random
balls = hat.draw(number_of_balls=3)

Realize such code with a class Hat. You can borrow useful code from the balls_in_hat.py program and ideas from Chapter 8.2.5. Use the Hat class to solve the probability problem from Exercise 8.4. Name of program file: Hat.py.
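A minimal sketch of such a class, supporting exactly the usage shown above (the internal representation as a flat list of color strings is one possible design, not prescribed by the exercise):

```python
import random

class Hat:
    """A hat with colored balls that can be drawn at random."""
    def __init__(self, colors=('red', 'black', 'blue'),
                 number_of_each_color=4):
        self.balls = [color for color in colors
                      for _ in range(number_of_each_color)]

    def draw(self, number_of_balls):
        """Draw balls at random, without replacement."""
        balls = random.sample(self.balls, number_of_balls)
        for ball in balls:
            self.balls.remove(ball)   # drawn balls leave the hat
        return balls

hat = Hat(colors=('red', 'black', 'blue'), number_of_each_color=4)
balls = hat.draw(number_of_balls=3)
print(balls)
```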

Exercise 8.18. Independent vs. dependent random numbers.

Generate a sequence of N independent random variables with values 0 or 1 and print out this sequence without space between the numbers (i.e., as 001011010110111010).

The next task is to generate random zeros and ones that are dependent. If the last generated number was 0, the probability of generating a new 0 is p and a new 1 is 1 - p. Conversely, if the last generated was 1, the probability of generating a new 1 is p and a new 0 is 1 - p. Since the new value depends on the last one, we say the variables are dependent. Implement this algorithm in a function returning an array of N zeros and ones. Print out this array in the condensed format as described above.

Choose N = 80 and try p = 0.5, p = 0.8, and p = 0.9. Can you describe the differences between sequences of independent and dependent random variables? Name of program file: dependent_random_variables.py.
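A sketch of both generators and the condensed printing (function names are illustrative):

```python
import random
import numpy as np

def independent(N):
    """N independent 0/1 values."""
    return np.array([random.randint(0, 1) for _ in range(N)])

def dependent(N, p):
    """Each new value repeats the previous one with probability p."""
    x = np.zeros(N, dtype=int)
    x[0] = random.randint(0, 1)
    for i in range(1, N):
        if random.random() <= p:
            x[i] = x[i-1]        # repeat the last value
        else:
            x[i] = 1 - x[i-1]    # switch
    return x

def condensed(x):
    """Format an array of 0/1 values as a string without spaces."""
    return ''.join(str(v) for v in x)

print(condensed(independent(80)))
for p in 0.5, 0.8, 0.9:
    print(condensed(dependent(80, p)))
```

For large p the dependent sequences show long runs of equal digits, while the independent sequence switches more often.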

Exercise 8.19. Compute the probability of flipping a coin.

Modify the program from either Exercise 8.1 or 8.13 to incorporate the following extension: look at a subset N1 <= N of the experiments and compute the probability of getting a head (M1/N1, where M1 is the number of heads in the N1 experiments). Choose N = 1000 and print out the probability for N1 = 10, 100, 500, 1000. (Generate just N numbers once in the program.) How do you think the accuracy of the computed probability varies with N1? Is the output compatible with this expectation? Name of program file: flip_coin_prob.py.

We address the same problem as in Exercise 8.19, but now we want to study the probability of getting a head, p, as a function of N1, i.e., for N1 = 1, ..., N. We also want to vectorize all operations in the code. A first try to compute the probability array for p is h = where(r <= 0.5, 1, 0).

An array q where q[i] = sum(h[:i+1]) reflects a cumulative sum and can be efficiently generated by the cumsum function in numpy: q = cumsum(h). Thereafter we can compute p by q/I, where I[i] = i+1, and I can be computed by arange(1, N+1) or r_[1:N+1]. Implement both the loop over i and the vectorized version based on cumsum, and check in the program that the resulting p array has the same elements (for this purpose you have to compare float elements, and you can use the float_eq function from SciTools, see Exercise 2.51, or the allclose function in numpy (float_eq actually uses allclose for array arguments)). Plot p against I for the case where N = 10000. Annotate the axes and the plot with relevant text. Name of program file: flip_coin_prob_developm.py.
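The loop and cumsum versions can be sketched as follows (plotting is left out here):

```python
import numpy as np

N = 10000
r = np.random.random(N)
h = np.where(r <= 0.5, 1, 0)

# loop version: p_loop[i] = (heads among first i+1 flips)/(i+1)
p_loop = np.zeros(N)
heads = 0
for i in range(N):
    heads += h[i]
    p_loop[i] = heads / float(i + 1)

# vectorized version based on cumsum
I = np.arange(1, N + 1)
p_vec = np.cumsum(h) / I

print(np.allclose(p_loop, p_vec))
```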

Exercise 3.26 describes some problems that can be solved exactly using the formula (3.8), but we can also simulate these problems and find approximate numbers for the probabilities. That is the task of this exercise.

Make a general function simulate_binomial(p, n, x) for running n experiments, where each experiment has two outcomes, with probabilities p and 1 - p. The n experiments constitute a "success" if the outcome with probability p occurs exactly x times. The simulate_binomial function must repeat the n experiments N times. If M is the number of "successes" in the N experiments, the probability estimate is M/N. Let the function return this probability estimate together with the error (the exact result is given by (3.8)). Simulate the three cases in Exercise 3.26 using this function. Name of program file: simulate_binomial_problems.py.

Make a program for simulating the development of a poker (or simplified poker) game among n players. Use ideas from Chapter 8.2.4. Name of program file: poker.py.

Exercise 8.23. Write a non-vectorized version of a code.

Read the file birth_policy.py containing the code from Chapter 8.3.4. To prove that you understand what is going on in this simulation, replace all the vectorized code by explicit loops over the random arrays. For such code it is natural to use Python's standard random module instead of numpy.random. However, to verify your alternative implementation it is convenient to have the same sequence of random numbers in the two programs. Therefore, use numpy's random module, but use it like the standard Python random module, i.e., draw real numbers one at a time instead of a whole array at once. Name of program file: birth_policy2.py.

Exercise 8.24. Estimate growth in a simulation model.

The simulation model in Chapter 8.3.4 predicts the number of individuals from generation to generation. Make a simulation of the "one son" policy with 10 generations, a male portion of 0.51 among newborn babies, fertility 0.92, and assume that 6% of the population will break the law and want 6 children in a family. These parameters imply a significant growth of the population. See if you can find a factor r such that the number of individuals in generation n fulfills the difference equation x_n = (1 + r)x_{n-1}.

Hint: Compute r for two consecutive generations x_{n-1} and x_n (r = x_n/x_{n-1} - 1) and see if r is approximately constant through the evolution of the generations. Name of program file: growth_birth_policy.py.

Exercise 8.25. Investigate guessing strategies for Ch. 8.4.1.

In the game from Chapter 8.4.1 it is smart to use the feedback from the program to track an interval [p, q] that must contain the secret number. Start with p = 1 and q = 100. If the user guesses at some number n, update p to n + 1 if n is less than the secret number (no need to care about numbers smaller than n + 1), or update q to n - 1 if n is larger than the secret number (no need to care about numbers larger than n - 1).

Are there any smart strategies to pick a new guess s in [p, q]? To answer this question, investigate two possible strategies: s as the midpoint in the interval [p, q], or s as a uniformly distributed random integer in [p, q]. Make a program that implements both strategies, i.e., the player is not prompted for a guess but the computer computes the guess based on the chosen strategy. Let the program run a large number of games and see if either of the strategies can be considered as superior in the long run. Name of program file: strategies4guess.py.

Exercise 8.26. Make a vectorized solution to Exer. 8.7.

Vectorize the simulation program from Exercise 8.7 with the aid of the numpy.random module and the numpy.sum function. Name of program file: sum9_4dice_vec.py.

Suggest a player strategy for the game in Chapter 8.4.2. Remove the question in the player_guess function in the file src/random/ndice2.py, and implement the chosen strategy instead. Let the program play a large number of games, and record the number of times the computer wins. Which strategy is best in the long run: the computer's or yours? Name of program file: simulate_strategies1.py.

Exercise 8.28. Solve Exercise 8.27 with different no. of dice.

Solve Exercise 8.27 for two other cases with 3 and 50 dice, respectively. Name of program file: simulate_strategies2.py.

Extend the program from Exercise 8.28 such that the computer and the player can use a different number of dice. Let the computer choose a random number of dice between 2 and 20. Experiment to find out if there is a favorable number of dice for the player. Name of program file: simulate_strategies3.py.

Use the method in Chapter 8.5.2 to compute pi by computing the area of a circle. Choose G as the circle with its center at the origin and with unit radius, and choose B as the rectangle [-1, 1] x [-1, 1]. A point (x, y) lies within G if x^2 + y^2 < 1. Compare the approximate pi with math.pi. Name of program file: MC_pi.py.
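A vectorized sketch of this area-based Monte Carlo estimate (the number of points is an arbitrary choice):

```python
import math
import numpy as np

def estimate_pi(N):
    """Monte Carlo estimate of pi: the fraction of N random points in
    the rectangle [-1,1] x [-1,1] that fall inside the unit circle,
    multiplied by the rectangle area 4."""
    x = np.random.uniform(-1, 1, N)
    y = np.random.uniform(-1, 1, N)
    inside = (x**2 + y**2) < 1
    return 4.0 * inside.sum() / N

approx = estimate_pi(10**6)
print(approx, math.pi, abs(approx - math.pi))
```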

This exercise has the same purpose of computing pi as in Exercise 8.30, but this time you should choose G as a circle with center at (2, 1) and radius 4. Select an appropriate rectangle B. A point (x, y) lies within a circle with center at (x_c, y_c) and with radius R if (x - x_c)^2 + (y - y_c)^2 < R^2. Name of program file: MC_pi2.py.

Let x_0, ..., x_N be N + 1 uniformly distributed random numbers between 0 and 1. Explain why the random sum S_N = (N + 1)^(-1) sum_{i=0}^{N} 2(1 - x_i^2)^(-1/2) is an approximation to pi. (Hint: Interpret the sum as Monte Carlo integration and compute the corresponding integral exactly by hand.) Make a program for plotting S_N versus N for N = 10^k, k = 0, 1/2, 1, 3/2, 2, 5/2, ..., 6. Write out the difference between S_{10^6} and pi from the math module. Name of program file: MC_pi_plot.py.

Modify the walk1D.py program such that the probability of going to the right is r and the probability of going to the left is 1 - r (draw numbers in [0, 1) rather than integers in {1, 2}). Compute the average position of np particles after 100 steps, where np is read from the command line. Mathematically one can show that the average position approaches r*ns - (1 - r)*ns as np goes to infinity, where ns is the number of steps (here 100). Write out this exact result together with the computed mean position with a finite number of particles. Name of program file: walk1D_drift.py.

Set np=1 in the walk1Dv.py program and modify the program to measure how many steps it takes for one particle to reach a given point x = xp. Give xp on the command line. Report results for xp = 5, 50, 5000, 50000. Name of program file: walk1Dv_hit_point.py.

The purpose of this exercise is to reimplement the walk2D.py program from Chapter 8.7.1 with the aid of classes. Make a class Particle with the coordinates (x, y) and the time step number of a particle as attributes. A method move moves the particle in one of the four directions and updates the (x, y) coordinates. Another class, Particles, holds a list of Particle objects and a plotstep parameter (as in walk2D.py). A method move moves all the particles one step, a method plot can make a plot of all particles, while a method moves performs a loop over time steps and calls move and plot in each step.

Equip the Particle and Particles classes with print functionality such that one can print out all particles in a nice way by saying print p (for a Particles instance p) or print self (inside a method). Hint: In __str__, apply the pformat function from the pprint module to the list of particles, and make sure that __repr__ just reuses __str__ in both classes.

To verify the implementation, print the first three positions of four particles in the walk2D.py program and compare with the corresponding results produced by the class-based implementation (the seed of the random number generator must of course be fixed identically in the two programs). You can just perform p.move() and print p three times in a verify function to do this verification task.

Organize the complete code as a module such that the classes Particle and Particles can be reused in other programs. The test block should call a run(N) method to run the walk for N steps, where N is given on the command line.

Compare the efficiency of the class version against the vectorized version in walk2Dv.py, using the techniques of Appendix E.6.1. Name of program file: walk2Dc.py.

Exercise 8.36. Vectorize the class code from Exer. 8.35.

The program developed in Exercise 8.35 cannot be vectorized as long as we base the implementation on class Particle. However, if we remove that class and focus on class Particles, the latter can employ arrays for holding the positions of all particles and vectorized updates of these positions in the moves method. Use ideas from the walk2Dv.py program to vectorize class Particles. Verify the code against walk2Dv.py as explained in Exercise 8.35, and measure the efficiency gain over the version with class Particle. Name of program file: walk2Dcv.py.

Exercise 8.37. 2D random walk with walls; scalar version.

Modify the walk2D.py program or the walk2Dc.py program from Exercise 8.35 so that the walkers cannot walk outside a rectangular area A = [x_L, x_H] x [y_L, y_H]. Do not move a particle if its new position is outside A. Name of program file: walk2D_barrier.py.

Exercise 8.38. 2D random walk with walls; vectorized version.

Modify the walk2Dv.py program so that the walkers cannot walk outside a rectangular area A = [x_L, x_H] x [y_L, y_H]. Hint: First perform the moves of one direction. Then test if new positions are outside A. Such a test returns a boolean array that can be used as index in the position arrays to pick out the particles that have moved outside A. With this array index, one can move all particles outside A back to the relevant boundary of A. Name of program file: walk2Dv_barrier.py.
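The hint can be sketched as follows (the direction encoding, particle count, and wall coordinates are illustrative choices; walk2Dv.py itself is not reproduced here):

```python
import numpy as np

def move_with_walls(x, y, xL, xH, yL, yH):
    """One random step (N/S/E/W) for all particles; any particle that
    ends up outside [xL,xH] x [yL,yH] is moved back to the boundary."""
    direction = np.random.randint(1, 5, x.size)  # 1=N, 2=S, 3=E, 4=W
    y[direction == 1] += 1
    y[direction == 2] -= 1
    x[direction == 3] += 1
    x[direction == 4] -= 1
    # boolean arrays as indices pick out the particles outside A
    x[x < xL] = xL; x[x > xH] = xH
    y[y < yL] = yL; y[y > yH] = yH
    return x, y

x = np.zeros(1000)
y = np.zeros(1000)
for step in range(200):
    x, y = move_with_walls(x, y, -10, 10, -10, 10)
```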

Exercise 8.39. Simulate the mixture of gas molecules.

Suppose we have a box with a wall dividing the box into two equally sized parts. In one part we have a gas where the molecules are uniformly distributed in a random fashion. At t = 0 we remove the wall. The gas molecules will now move around and eventually fill the whole box.

This physical process can be simulated by a 2D random walk inside a fixed area A as introduced in Exercises 8.37 and 8.38 (in reality the motion is three-dimensional, but we only simulate the two-dimensional part of it since we already have programs for doing this). Use the program from either Exercise 8.37 or 8.38 to simulate the process for A = [0,1] x [0,1]. Initially, place 10000 particles at uniformly distributed random positions in [0,1/2] x [0,1]. Then start the random walk and visualize what happens. Simulate for a long time and make a hardcopy of the animation (an animated GIF file, for instance). Is the end result what you would expect? Name of program file: disorder1.py.

Molecules tend to move randomly because of collisions and forces between molecules. We do not model collisions between particles in the random walk, but the nature of this walk, with random movements, simulates the effect of collisions. Therefore, the random walk can be used to model molecular motion in many simple cases. In particular, the random walk can be used to investigate how a quite ordered system, where one gas fills one half of a box, evolves through time to a more disordered system.

Exercise 8.40. Simulate the mixture of gas molecules.

Solve Exercise 8.39 when the wall dividing the box is not completely removed, but instead we make a small hole in the wall initially. Name of program file: disorder2.py.

You are presented n glasses of beer, each containing a different brand. You are informed that there are m > n possible brands in total, and the names of all brands are given. For each glass, you can pay p euros to taste the beer, and if you guess the right brand, you get q > p euros back. Suppose you have done this before and experienced that you typically manage to guess the right brand T times out of 100, so that your probability of guessing the right brand is b = T/100.

Make a function simulate(m, n, p, q, b) for simulating the beer tasting process. Let the function return the amount of money earned and how many correct guesses (< n) you made. Call simulate a large number of times and compute the average earnings and the probability of getting full score in the case m = n = 4, p = 3, q = 6, and b = 1/m (i.e., four glasses with four brands, completely random guessing, and a payback of twice as much as the cost). How much more can you earn from this game if your ability to guess the right brand is better, say b = 1/2? Name of program file: simulate_beer_tasting.py.

A common mathematical model for the evolution of stock prices can be formulated as the difference equation

x_n = x_{n-1} + dt*mu*x_{n-1} + sigma*x_{n-1}*sqrt(dt)*r_{n-1},   (8.16)

where x_n is the stock price at time t_n, dt is the time interval between two time levels (dt = t_n - t_{n-1}), mu is the growth rate of the stock price, sigma is the volatility of the stock price, and r_0, ..., r_{n-1} are normally distributed random numbers with mean zero and unit standard deviation. An initial stock price x_0 must be prescribed together with the input data mu, sigma, and dt.

We remark that Equation (8.16) is a Forward Euler discretization of a stochastic differential equation for x(t):

dx/dt = mu*x + sigma*x*N(t),

where N(t) is a so-called white noise random time series signal. Such equations play a central role in the modeling of stock prices.

Make R realizations of (8.16) for n = 0, ..., N with N = 5000 steps over a time period of T = 180 days with a step size dt = T/N. Name of program file: stock_prices.py.
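A vectorized-over-realizations sketch of (8.16) (the mu and sigma values below are illustrative placeholders, and plotting of the paths is left out):

```python
import numpy as np

def stock_prices(x0, mu, sigma, T=180, N=5000, R=5):
    """Simulate R realizations of the difference equation (8.16)
    over T days with N steps."""
    dt = T / float(N)
    x = np.zeros((R, N + 1))
    x[:, 0] = x0
    for n in range(1, N + 1):
        r = np.random.normal(0, 1, R)   # one normal draw per realization
        x[:, n] = x[:, n-1] + dt*mu*x[:, n-1] + \
                  sigma*x[:, n-1]*np.sqrt(dt)*r
    return x

paths = stock_prices(x0=100, mu=0.001, sigma=0.015)
```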

Exercise 8.43. Compute with option prices in finance.

In this exercise we are going to consider the pricing of so-called Asian options. An Asian option is a financial contract where the owner earns money when certain market conditions are satisfied.

The contract is specified by a strike price K and a maturity time T. It is written on the average price of the underlying stock, and if this average is bigger than the strike K, the owner of the option will earn the difference. If, on the other hand, the average is smaller, the owner receives nothing, and the option matures at the value zero. The average is calculated from the last trading price of the stock for each day.

From the theory of options in finance, the price of the Asian option will be the expected present value of the payoff. We assume the stock price dynamics given as

S(t) = S(t-1) exp(r - sigma^2/2 + sigma*eps(t)),   (8.17)

where r is the interest rate and sigma is the volatility of the stock price. The time t is supposed to be measured in days, t = 0, 1, 2, ..., while eps(t) are independent, identically distributed normal random variables with mean zero and unit standard deviation. To find the option price, we must calculate the expectation

p = (1 + r)^(-T) E[max((1/T) sum_{t=1}^{T} S(t) - K, 0)].

The price is thus given as the expected discounted payoff. We will use Monte Carlo simulations to estimate the expectation. Typically, r and sigma can be set to r = 0.0002 and sigma = 0.015. Assume further S(0) = 100.

a) Make a function that simulates a path of S(t), that is, the function computes S(t) for t = 1,..., T for a given T based on the recursive definition in (8.17). The function should return the path as an array.

b) Create a function that finds the average of S(t) from t = 1 to t = T. Make another function that calculates the price of the Asian option based on N simulated averages. You may choose T = 100 days and K = 102.

c) Plot the price p as a function of N. You may start with N = 1000.

d) Plot the error in the price estimation as a function of N (assume that the p value corresponding to the largest N value is the "right" price). Try to fit a curve of the form c/sqrt(N) for some c to this error plot. The purpose is to show that the error is reduced as 1/sqrt(N).

Name of program file: option_price.py.

If you wonder where the values for r and sigma come from, you will find the explanation in the following. A reasonable level for the yearly interest rate is around 5%, which corresponds to a daily rate 0.05/250 = 0.0002. The number 250 is chosen because a stock exchange is on average open this amount of days for trading. The value for sigma is calculated as the volatility of the stock price, corresponding to the standard deviation of the daily returns of the stock, defined as (S(t + 1) - S(t))/S(t). "Normally", the volatility is around 1.5% a day. Finally, there are theoretical reasons why we assume that the stock price dynamics is driven by r, meaning that we consider the risk-neutral dynamics of the stock price when pricing options. There is an exciting theory explaining the appearance of r in the dynamics of the stock price. If we want to simulate a stock price dynamics mimicking what we see in the market, r in Equation (8.17) must be substituted with the expected return of the stock, which is usually higher than r.

In a laboratory experiment, waves are generated through the impact of a model slide into a wave tank. (The intention of the experiment is to model a future tsunami event in a fjord, generated by loose rocks that fall into the fjord.) At a certain location, the elevation of the surface, denoted by eta, is measured at discrete points in time using an ultrasound wave gauge. The result is a time series of vertical positions of the water surface elevations in meters: eta(t_0), eta(t_1), eta(t_2), ..., eta(t_n). There are 300 observations per second, meaning that the time difference between two neighboring measurement values eta(t_i) and eta(t_{i+1}) is h = 1/300 seconds.

Write a Python program that accomplishes the following tasks:

1. Read h from the command line.

2. Read the eta values in the file src/random/gauge.dat into an array eta.

3. Plot eta versus the time values.

4. Compute the velocity v of the surface by the formula

   v_i = (eta_{i+1} - eta_{i-1})/(2h),   i = 1, ..., n-1.

Plot v versus the time values in a separate plot.

5. Compute the acceleration a of the surface by the formula

   a_i = (eta_{i+1} - 2*eta_i + eta_{i-1})/h^2,   i = 1, ..., n-1.

Plot a versus the time values in a separate plot. Name of program file: labstunami1.py.
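The two centered-difference formulas above can be vectorized with array slicing. The sketch below uses a synthetic sin signal as a stand-in for the gauge data (the real program would read gauge.dat instead):

```python
import numpy as np

def velocity_and_acceleration(eta, h):
    """Centered differences for v and a at interior points i = 1..n-1."""
    v = (eta[2:] - eta[:-2]) / (2*h)
    a = (eta[2:] - 2*eta[1:-1] + eta[:-2]) / h**2
    return v, a

h = 1/300.0
t = np.arange(0, 1, h)
eta = np.sin(t)            # stand-in for the measured surface elevation
v, a = velocity_and_acceleration(eta, h)
```

For the smooth test signal, v should be close to cos(t) and a close to -sin(t) at the interior points.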

Exercise 8.45. Numerical differentiation of noisy signals.

The purpose of this exercise is to look into numerical differentiation of time series signals that contain measurement errors. This insight might be helpful when analyzing the noise in real data from a laboratory experiment in Exercises 8.44 and 8.46.

1. Compute a signal

   etabar_i = A*sin((2*pi/T)*t_i),   i = 0, ..., n.

Display etabar_i versus time t_i in a plot. Choose A = 1 and T = 2*pi. Store the etabar values in an array etabar.

2. Compute a signal with random noise E_i,

   eta_i = etabar_i + E_i,

where E_i is drawn from the normal distribution with mean zero and standard deviation sigma = 0.04*A. Plot this eta signal as circles in the same plot as etabar_i. Store the E_i values in an array E for later use.

3. Compute the first derivative of etabar_i by the formula

   (etabar_{i+1} - etabar_{i-1})/(2h),   i = 1, ..., n-1,

and store the values in an array detabar. Display the graph.

4. Compute the first derivative of the error term by the formula

   (E_{i+1} - E_{i-1})/(2h),   i = 1, ..., n-1,

and store the values in an array dE. Calculate the mean and the standard deviation of dE.

5. Plot detabar and detabar + dE. Use the result of the standard deviation calculations to explain the qualitative features of the graphs.

6. The second derivative of a time signal eta_i can be computed by

   (eta_{i+1} - 2*eta_i + eta_{i-1})/h^2,   i = 1, ..., n-1.

Use this formula on the etabar data and save the result in d2etabar. Also apply the formula to the E data and save the result in d2E. Plot d2etabar and d2etabar + d2E. Compute the standard deviation of d2E and compare with the standard deviation of dE and E. Discuss the plot in light of these standard deviations.

Name of program file: sine_noise.py.

Exercise 8.46. Model the noise in the data in Exer. 8.44.

We assume that the measured data can be modeled as a smooth time signal etabar(t) plus a random variation E(t). Computing the velocity of eta = etabar + E results in a smooth velocity from the etabar term and a noisy signal from the E term. We can estimate the level of noise in the first derivative of E as follows. The random numbers E(t_i) are assumed to be independent and normally distributed with mean zero and standard deviation sigma. It can then be shown that

   (E_{i+1} - E_{i-1})/(2h)

produces numbers that come from a normal distribution with mean zero and standard deviation (1/2)*sqrt(2)*sigma/h. How much is the original noise, reflected by sigma, magnified when we use this numerical approximation of the velocity?

The fraction

   (E_{i+1} - 2*E_i + E_{i-1})/h^2

will also generate numbers from a normal distribution with mean zero, but this time with standard deviation sqrt(6)*sigma/h^2. Find out how much the noise is magnified in the computed acceleration signal.

The numbers in the gauge.dat file are given with 5 digits. This is no certain indication of the accuracy of the measurements, but as a test we may assume sigma is of the order 10^-4. Check if the visual results for the velocity and acceleration are consistent with the standard deviation of the noise in these signals as modeled above.

If we have a noisy signal eta_i, where i = 0, ..., n counts time levels, the noise can be reduced by computing a new signal where the value at a point is a weighted average of the values at that point and the neighboring points at each side. More precisely, given the signal eta_i, i = 0, ..., n, we compute a filtered (averaged) signal with values eta^(1)_i by the formula

   eta^(1)_i = (1/4)*(eta_{i+1} + 2*eta_i + eta_{i-1}),   i = 1, ..., n-1,
   eta^(1)_0 = eta_0,   eta^(1)_n = eta_n.

Make a function filter that takes the eta values in an array eta as input and returns the filtered eta^(1) values in an array. Let eta^(k) be the signal that arises from applying the filter function k times to the same signal.

Make a plot with curves eta and the filtered eta^(k) values for k = 1, 10, 100. Make similar plots for the velocity and acceleration, where these are computed from both the original eta data and the filtered data. Discuss the results. Name of program file: labstunami2.py.
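The filter and its repeated application can be sketched as follows, again with a synthetic noisy sin signal standing in for the gauge data (plotting is omitted):

```python
import numpy as np

def filter(eta):
    """One pass of the 1/4, 1/2, 1/4 smoothing filter; endpoints kept."""
    eta_filtered = eta.copy()
    eta_filtered[1:-1] = 0.25*(eta[2:] + 2*eta[1:-1] + eta[:-2])
    return eta_filtered

def filter_k(eta, k):
    """Apply filter k times to the same signal."""
    for _ in range(k):
        eta = filter(eta)
    return eta

t = np.linspace(0, 2*np.pi, 200)
noisy = np.sin(t) + np.random.normal(0, 0.04, t.size)
smooth = filter_k(noisy, 10)
```

Note that a constant signal passes through the filter unchanged, since the weights 1/4, 1/2, 1/4 sum to one.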

Exercise 8.48. Find the expected waiting time in traffic lights.

A driver must pass 10 traffic lights on a certain route. Each light has a period red-yellow-green-yellow of two minutes, of which the green and yellow lights last for 70 seconds. Suppose the driver arrives at a traffic light at some uniformly distributed random point of time during the period of two minutes. Compute the corresponding waiting time. Repeat this for 10 traffic lights. Run a large number of routes (i.e., repetitions of passing 10 traffic lights) and let the program write out the average waiting time. Does the computed time coincide with what you would expect? Name of program file: waiting_time.py.
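One possible interpretation, sketched below, assumes the 50 seconds of red come first in each 120-second cycle (the exercise does not fix the phase); a driver arriving during red waits until the red phase ends, otherwise not at all:

```python
import random

def waiting_time():
    """Waiting time at one light: 120 s period, 50 s of red assumed
    to occupy the start of the period (an assumption, not given)."""
    arrival = random.uniform(0, 120)
    red = 50.0
    return red - arrival if arrival < red else 0.0

def route(nlights=10):
    """Total waiting time along a route with nlights lights."""
    return sum(waiting_time() for _ in range(nlights))

N = 10000
average = sum(route() for _ in range(N)) / float(N)
print('average waiting time per route: %.1f s' % average)
```

Under this assumption the expected wait per light is 50^2/(2*120) ≈ 10.4 s, so about 104 s per route.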
