In all of these examples I divide the integrand by the density/probability function of the noted importance sampler.

#4.5.1
--------
It is natural to generate from the N(0,1) here as the importance sampler. Note that the constant 1/sqrt(2*pi) needs to be accounted for.

#generate a sample of n from the N(0,1)
n=10000
x=rnorm(n,0,1)
est=0
s2=0
for(i in 1:n)
{
est=est+(cos(x[i]))**2
s2=s2+(cos(x[i]))**4
}
est=(sqrt(2*pi))*est/n
s2=2*pi*s2/n-est**2
cat("estimate +- 3 standard errors","\n")
cat(n,est-3*sqrt(s2/n),est,est+3*sqrt(s2/n),"\n")

output
10000 1.386782 1.412824 1.438866
------------------------------------------------------------------

#4.5.2
------
So we will use the binomial(m,2/3) as an importance sampler, as this sum is the expected value of X**6 with respect to this distribution. Here I've chosen m=10 as an example, with MC sample sizes n=10**4 and n=10**5.

m=10
#generate a sample of n from the binomial(m,2/3)
n=100000
x=rbinom(n,m,2/3)
est=0
s2=0
for(i in 1:n)
{
est=est+(x[i]**6)
s2=s2+(x[i]**12)
}
est=est/n
s2=s2/n-est**2
cat("estimate +- 3 standard errors","\n")
cat(n,est-3*sqrt(s2/n),est,est+3*sqrt(s2/n),"\n")

output
> cat(n,est-3*sqrt(s2/n),est,est+3*sqrt(s2/n),"\n")
10000 155384.1 161051.9 166719.6
> cat(n,est-3*sqrt(s2/n),est,est+3*sqrt(s2/n),"\n")
1e+05 156374.4 158145.4 159916.5
------------------------------------------------------------------------

#4.5.5
------
We use the Poisson(5) distribution as an importance sampler. Note that the constant exp(-5) needs to be accounted for.

#generate a sample of n from the Poisson(5)
n=100000
x=rpois(n,5)
est=0
s2=0
for(i in 1:n)
{
est=est+sin(x[i]**2)
s2=s2+(sin(x[i]**2))**2
}
est=exp(5)*est/n
s2=exp(5)*exp(5)*s2/n-est**2
cat("estimate +- 3 standard errors","\n")
cat(n,est-3*sqrt(s2/n),est,est+3*sqrt(s2/n),"\n")

output
10000 -42.25446 -39.57449 -36.89451
1e+05 -39.7936 -38.94073 -38.08785
---------------------------------------------------------------------

#4.5.13
-------
(a) Put g(x,y)=exp(-y); then h(x,y)/g(x,y)=exp(y)h(x,y).
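As a sanity check on the target value before running the sampler in (b), the integral of h, namely the double integral of cos(sqrt(x*y))*exp(-y**2) over x in (0,1) and y in (0,infinity), can be approximated by a crude midpoint rule. This check is not part of the original solution, and the truncation point and grid sizes below are my own choices:

```python
import math

# Midpoint-rule approximation of
#   I = int_0^1 int_0^inf cos(sqrt(x*y)) * exp(-y^2) dy dx.
# The y-range is truncated at 6, since exp(-36) is negligible.
nx, ny, ymax = 400, 1200, 6.0
hx, hy = 1.0 / nx, ymax / ny
total = 0.0
for i in range(nx):
    x = (i + 0.5) * hx
    for j in range(ny):
        y = (j + 0.5) * hy
        total += math.cos(math.sqrt(x * y)) * math.exp(-y * y)
approx = total * hx * hy
print(approx)  # roughly 0.77
```

The value is consistent with the confidence intervals reported in (b) and (d).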
(b) The following algorithm uses g as an importance sampler.

#generate a sample of n from X~U(0,1) independent of Y~exponential(1)
n=10000
x=runif(n,0,1)
y=rexp(n,1)
est=0
s2=0
for(i in 1:n)
{
h=cos(sqrt(x[i]*y[i]))*exp(-y[i]**2)
g=exp(-y[i])
fn=h/g
est=est+fn
s2=s2+fn**2
}
est=est/n
s2=s2/n-est**2
cat("estimate +- 3 standard errors","\n")
cat(n,est-3*sqrt(s2/n),est,est+3*sqrt(s2/n),"\n")

output
1000 0.737877 0.7780043 0.8181315
10000 0.7545874 0.7673984 0.7802094

(c) Put g(x,y)=5exp(-5y); then h(x,y)/g(x,y)=exp(5y)h(x,y)/5.

(d)
#generate a sample of n from X~U(0,1) independent of Y~exponential(5)
n=10000
x=runif(n,0,1)
y=rexp(n,5)
est=0
s2=0
for(i in 1:n)
{
h=cos(sqrt(x[i]*y[i]))*exp(-y[i]**2)
g=5*exp(-5*y[i])
fn=h/g
est=est+fn
s2=s2+fn**2
}
est=est/n
s2=s2/n-est**2
cat("estimate +- 3 standard errors","\n")
cat(n,est-3*sqrt(s2/n),est,est+3*sqrt(s2/n),"\n")

output
1000 0.6277745 0.7439872 0.8601998
10000 0.7192546 0.7647017 0.8101488

(e) Comparing the lengths of the intervals that are supposed to contain the true value, algorithm (b) appears to be more accurate.
-----------------------------------------------------------------------------------------

#4.5.14
-------
It is natural to generate from the U(0,1) here as the importance sampler.

#generate a sample of n from the U(0,1)
n=100000
x=runif(n,0,1)
est=0
s2=0
for(i in 1:n)
{
fn=cos(x[i]**3)*sin(x[i]**4)
est=est+fn
s2=s2+fn**2
}
est=est/n
s2=s2/n-est**2
cat("estimate +- 3 standard errors","\n")
cat(n,est-3*sqrt(s2/n),est,est+3*sqrt(s2/n),"\n")

output
1000 0.1404497 0.1567201 0.1729905
10000 0.1419291 0.1470346 0.1521401
1e+05 0.1464545 0.1480653 0.149676
----------------------------------------------------------------------------

#4.5.16
-------
Here it makes sense to use a geometric(4/5) distribution as the importance sampler.
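Since the terms of this sum decay at least geometrically with ratio 1/5, it can also be cross-checked by direct partial summation; a short Python sketch (not part of the original solution):

```python
# Direct partial sum of  S = sum_{x>=0} (3 + x^2)^(-5) * (1/5)^x.
# Twenty terms are far more than enough: the tail after x terms is
# bounded by a geometric series with ratio 1/5.
S = sum((3 + x * x) ** -5 * 5.0 ** -x for x in range(20))
print(S)  # approx 0.0043130
```

This agrees closely with the importance-sampling estimates below.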
#generate a sample of n from the geometric(4/5)
n=100000
x=rgeom(n,4/5)
est=0
s2=0
for(i in 1:n)
{
fn=((3+x[i]**2)**(-5))*((1/5)**x[i])
fn=fn/dgeom(x[i],4/5)
est=est+fn
s2=s2+fn**2
}
est=est/n
s2=s2/n-est**2
cat("estimate +- 3 standard errors","\n")
cat(n,est-3*sqrt(s2/n),est,est+3*sqrt(s2/n),"\n")

output
1000 0.004172345 0.004330897 0.004489449
10000 0.004257632 0.004308029 0.004358426
1e+05 0.004296691 0.004312597 0.004328503
--------------------------------------------------------------------------------

#4.5.17
-------
#generate a sample of n from the N(0,1)
n=100000
x=rnorm(n,0,1)
y=x*x-3*x+2
est=0
s2=0
for(i in 1:n)
{
# test to see if y>=0
if (y[i] >= 0){est=est+1}
}
est=est/n
s2=est*(1-est)
cat("estimate +- 3 standard errors","\n")
cat(n,est-3*sqrt(s2/n),est,est+3*sqrt(s2/n),"\n")

output
1000 0.8414114 0.873 0.9045886
10000 0.8550579 0.8653 0.8755421
1e+05 0.8601016 0.86336 0.8666184
------------------------------------------------------------------------------

#I.8
----
If X~N_k(mu,Sigma), let T be the Cholesky factor of Sigma (upper triangular with nonnegative diagonal), so that Sigma=T'T. Now let Z~N_k(0,I). Then mu+T'Z~N_k(mu,T'T)=N_k(mu,Sigma). For example, suppose mu is given by

> mu
[1] -1  0  1  2

and Sigma is given by

> Sigma
     [,1] [,2] [,3] [,4]
[1,]    1    2    3    4
[2,]    2    5    8   11
[3,]    3    8   14   20
[4,]    4   11   20   30

The following code estimates the expected value of X'(Sigmainv)X where Sigmainv is the inverse of Sigma.
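For reference, this expectation has a known closed form, E[X'(Sigmainv)X] = k + mu'(Sigmainv)mu for X~N_k(mu,Sigma), which gives an exact target to compare the simulation against. A small pure-Python check (not part of the original solution; the exact-arithmetic solver is my own helper):

```python
# Closed form: E[X' Sigma^{-1} X] = k + mu' Sigma^{-1} mu for X ~ N_k(mu, Sigma).
from fractions import Fraction

mu = [-1, 0, 1, 2]
Sigma = [[1, 2, 3, 4],
         [2, 5, 8, 11],
         [3, 8, 14, 20],
         [4, 11, 20, 30]]

def solve(A, b):
    # Gauss-Jordan elimination in exact rational arithmetic: returns x with A x = b.
    n = len(b)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])] for i in range(n)]
    for c in range(n):
        p = next(r for r in range(c, n) if M[r][c] != 0)  # pivot row
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * v for a, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

x = solve(Sigma, mu)  # x = Sigma^{-1} mu
expect = len(mu) + sum(m * xi for m, xi in zip(mu, x))
print(expect)  # 9
```

So the true value here is 9, and the Monte Carlo estimates below hover around it.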
#generate a sample of n from the N_4(0,I); hold it in the 4 x n matrix Z (each column is a sampled vector)
n=100000
z1=rnorm(n,0,1)
z2=rnorm(n,0,1)
z3=rnorm(n,0,1)
z4=rnorm(n,0,1)
Z=rbind(z1,z2,z3,z4)
mu=c(-1,0,1,2)
mu
r1=c(1,2,3,4)
r2=c(2,5,8,11)
r3=c(3,8,14,20)
r4=c(4,11,20,30)
Sigma=rbind(r1,r2,r3,r4)
Sigma
Sigmainv=solve(Sigma)
Sigmainv
# get the Cholesky factor of Sigma
T=chol(Sigma)
T
#X contains the sample of n from N_k(mu, Sigma)
X=mu+t(T)%*%Z
est=0
s2=0
for (i in 1:n){
fn=t(X[,i])%*%Sigmainv%*%X[,i]
est=est+fn
s2=s2+fn**2
}
est=est/n
s2=s2/n-est**2
cat("estimate +- 3 standard errors","\n")
cat(n,est-3*sqrt(s2/n),est,est+3*sqrt(s2/n),"\n")

output
1000 8.18984 9.200078 10.21032
10000 8.649707 8.961286 9.272866
1e+05 8.933 9.032486 9.131972
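As a cross-check on the factorization used above (not part of the original solution), the Cholesky factor of this particular Sigma happens to have integer entries, so it is easy to verify with a few lines of Python:

```python
import math

Sigma = [[1, 2, 3, 4],
         [2, 5, 8, 11],
         [3, 8, 14, 20],
         [4, 11, 20, 30]]
n = len(Sigma)

# Upper-triangular Cholesky factor T with Sigma = T' T
# (same convention as R's chol()).
T = [[0.0] * n for _ in range(n)]
for i in range(n):
    for j in range(i, n):
        s = Sigma[i][j] - sum(T[k][i] * T[k][j] for k in range(i))
        T[i][j] = math.sqrt(s) if i == j else s / T[i][i]

for row in T:
    print(row)  # rows work out to [1,2,3,4], [0,1,2,3], [0,0,1,2], [0,0,0,1]
```

With this T, mu + t(T) %*% Z in the R code above has exactly the covariance Sigma, as claimed.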