STA 410/2102 - Topics and Study Questions for Test 1
A) Arithmetic on the computer.
The idea of floating-point arithmetic - representation in terms
of mantissa and exponent. Round-off error, underflow and overflow.
The problems of saturation and of cancellation. See Chapter 2 of
the textbook.
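These effects are easy to see in ordinary double-precision arithmetic. A quick sketch in Python (any language with IEEE doubles behaves the same way):

```python
# Round-off: 0.1 and 0.2 are not exactly representable in binary,
# so their sum differs slightly from 0.3.
print(0.1 + 0.2 == 0.3)        # False

# Overflow: a result too large to represent becomes infinity.
print(1e308 * 10)              # inf

# Underflow: a result too small to represent is flushed to zero.
print(1e-320 / 1e10)           # 0.0

# Cancellation: subtracting nearly equal numbers leaves few
# significant digits in the difference.
x = 1.0 + 1e-15
print((x - 1.0) * 1e15)        # close to, but not exactly, 1.0
```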
Exercise 1: Suppose arithmetic is done with decimal floating-point
numbers with two decimal digits in the mantissa and a
two-digit exponent. What will be the result of the
following operations? (Here "e" means "times ten to the
power".)
a) 0.46e10 + 0.36e9
b) (0.10e-60 * 0.10e-50) / (0.10e-40)
c) (0.99e0 + 0.30e-2) + 0.30e-2
d) 0.99e0 + (0.30e-2 + 0.30e-2)
e) (0.10e-2 + 0.16e-3) - (0.10e-2 + 0.14e-3)
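Parts (c) and (d) add the same three numbers in different orders. The same non-associativity shows up in binary double precision, as this Python check illustrates:

```python
# Grouping matters: floating-point addition is not associative.
left  = (0.1 + 0.2) + 0.3
right = 0.1 + (0.2 + 0.3)
print(left == right)    # False
print(left, right)      # 0.6000000000000001 0.6
```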
B) Simulation and permutation tests.
Generation of random variates. Simulation to determine properties
of statistical procedures (e.g., correctness of the distribution of
p-values under the null hypothesis, or the power of a test against an alternative).
Permutation tests.
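A minimal sketch of a two-sample permutation test in Python (the function name and details are illustrative, not a prescribed implementation):

```python
import random

def perm_test_mean_diff(x, y, n_perm=9999, seed=1):
    """Two-sample permutation test for a difference in means.

    Repeatedly shuffles the pooled data, splits it into groups of the
    original sizes, and counts how often the shuffled |mean difference|
    is at least as large as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(sum(x)/len(x) - sum(y)/len(y))
    pooled = list(x) + list(y)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        gx, gy = pooled[:len(x)], pooled[len(x):]
        if abs(sum(gx)/len(gx) - sum(gy)/len(gy)) >= observed:
            count += 1
    # Count the observed labelling itself, so the p-value is never 0.
    return (count + 1) / (n_perm + 1)
```

Under the null hypothesis the p-value should be approximately uniform on (0,1) — exactly the kind of property a simulation study can check.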
Exercise 2: Write an S/R function called my.rnorm that generates
normal random variates, taking the same arguments as
the built-in rnorm (sample size, mean, standard deviation).
This function should use the built-in runif and qnorm
functions. The arguments of qnorm are a vector of
numbers between 0 and 1, a mean, and a standard deviation.
It returns a vector of quantiles of the normal distribution
with that mean and standard deviation, with the quantile
positions being given by the first argument.
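The exercise asks for S/R, but the same inverse-CDF idea can be sketched in Python, where the standard library's NormalDist.inv_cdf plays the role of qnorm and the random module the role of runif (my_rnorm is just an illustrative name):

```python
import random
from statistics import NormalDist

def my_rnorm(n, mean=0.0, sd=1.0, seed=None):
    """Generate n normal variates by feeding uniforms on (0,1)
    through the normal quantile (inverse-CDF) function."""
    rng = random.Random(seed)
    dist = NormalDist(mean, sd)
    # rng.random() is in [0, 1); a value of exactly 0 would make
    # inv_cdf fail, but in practice it essentially never occurs.
    return [dist.inv_cdf(rng.random()) for _ in range(n)]
```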
C) Least-squares regression by solving the normal equations.
The Cholesky decomposition, forward substitution, and backward
substitution. See sections 3.1 to 3.3 of the text for this and
for (D).
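The three steps for solving A z = b with A symmetric positive definite can be sketched directly in Python (function names are illustrative; plain lists of lists stand in for matrices):

```python
from math import sqrt

def cholesky(A):
    """Lower-triangular L with A = L L', for symmetric positive definite A."""
    n = len(A)
    L = [[0.0]*n for _ in range(n)]
    for j in range(n):
        L[j][j] = sqrt(A[j][j] - sum(L[j][k]**2 for k in range(j)))
        for i in range(j+1, n):
            L[i][j] = (A[i][j] - sum(L[i][k]*L[j][k] for k in range(j))) / L[j][j]
    return L

def forward_sub(L, b):
    """Solve L y = b for lower-triangular L."""
    y = []
    for i in range(len(b)):
        y.append((b[i] - sum(L[i][k]*y[k] for k in range(i))) / L[i][i])
    return y

def back_sub(L, y):
    """Solve L' z = y (an upper-triangular system with U = L')."""
    n = len(y)
    z = [0.0]*n
    for i in reversed(range(n)):
        z[i] = (y[i] - sum(L[k][i]*z[k] for k in range(i+1, n))) / L[i][i]
    return z
```

Solving A z = b is then the chain L = cholesky(A); y = forward_sub(L, b); z = back_sub(L, y).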
Exercise 3: Solve the following system of equations for z by hand,
using the Cholesky decomposition and forward and
backward substitution:
[ 4 2 2 ] [ 10 ]
[ ] [ ]
[ 2 10 1 ] z = [ 2 ]
[ ] [ ]
[ 2 1 5 ] [ 1 ]
D) Least-squares regression using orthogonal transformations.
Definition of an orthogonal transformation. Effect of an orthogonal
transformation on the regression problem. Using orthogonal
transformations to produce an upper-triangular X matrix, and why this
helps. Accomplishing this with Givens rotations and Householder
reflections.
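A Householder reflection H = I - 2 v v'/(v'v) that zeros all but the first component of a vector can be sketched as follows (helper names are illustrative):

```python
from math import sqrt

def householder_vector(x):
    """Return v such that H = I - 2 v v'/(v'v) maps x to
    (-sign(x[0]) * ||x||, 0, ..., 0)'."""
    norm = sqrt(sum(t*t for t in x))
    v = list(x)
    # Shift the first component away from zero, with the same sign
    # as x[0], to avoid cancellation when forming v.
    v[0] += norm if x[0] >= 0 else -norm
    return v

def reflect(v, x):
    """Apply H = I - 2 v v'/(v'v) to the vector x."""
    vv = sum(t*t for t in v)
    vx = sum(a*b for a, b in zip(v, x))
    return [xi - 2.0*vx/vv*vi for xi, vi in zip(x, v)]
```

For Exercise 4 below, note that a reflection acting only on the trailing coordinates of a vector leaves the leading coordinate unchanged.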
Exercise 4: Find a Householder transformation which, when applied to
the vector [ 5 3 0 4 ]', will produce a vector in which
the last two coordinates are zero, and the first coordinate
is the same as before. What is the full orthogonal matrix
corresponding to this transformation?