Study 2.1 Results: F-tests, numerator df=1, Poisson(86) heterogeneity in sample size only

> ##################################################################################
> # AnalyzeStudy2.txt
> ##################################################################################
> # Analysis of small sample size data from Study 2, Poisson heterogeneity in
> # sample size only, F-tests
>
> rm(list=ls())
> options(scipen=999) # To avoid scientific notation!
> # read.table chokes on large data sets
> simdata = scan("Study2.1.Data.txt") # .1=1df, .3=3df, .5=5df
Read 1600000 items
> nrows = 20*10000; ncols = 8
> dim(simdata) = c(ncols,nrows) # Reversed: R fills matrices by columns, not rows
> simdata = t(simdata)
> colnames(simdata) = c("Simulation", "Nstudies", "PopMeanPower", "SampleMeanPower",
+                       "Pcurve", "Puniform", "MaxLike", "Zcurve")
> simdata = data.frame(simdata)
> head(simdata)
  Simulation Nstudies PopMeanPower SampleMeanPower     Pcurve   Puniform    MaxLike      Zcurve
1          1       15         0.05            0.05 0.05062563 0.05062563 0.05000000 0.004744805
2          2       15         0.05            0.05 0.05060441 0.05060441 0.05000000 0.005038262
3          3       15         0.05            0.05 0.05066354 0.05066354 0.05000000 0.065213110
4          4       15         0.05            0.05 0.06168337 0.06695439 0.07253706 0.126112700
5          5       15         0.05            0.05 0.05062917 0.05062917 0.05000000 0.023963060
6          6       15         0.05            0.05 0.05075913 0.06794078 0.06828645 0.144637300
> # tail(simdata)
> # attach(simdata)
> # Assembling a data frame in univariate format for nice easy tables.
> # It's much faster to bind columns together.
> nlines = dim(simdata)[1] # Will be bigger for full data set.
> est = as.matrix(simdata[,5:8]) # Estimates
> unidata = cbind(rep(simdata$Nstudies,each=4), rep(simdata$PopMeanPower,each=4),
+                 rep(simdata$SampleMeanPower,each=4), rep(1:4,times=nlines),
+                 as.numeric(t(est)) )
> colnames(unidata) = c("Nstudies", "PopMeanPower", "SampleMeanPower", "Method",
+                       "Estimate")
> unidata = data.frame(unidata)
> Methods = c("Pcurve","Puniform","MaxLike","Zcurve")
> unidata$Method = factor(unidata$Method, labels = Methods)
> attach(unidata)
> AbsError = abs(Estimate-SampleMeanPower) # Sample mean power is the target for a meta-analysis
> Bias = Estimate - PopMeanPower
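
# Aside (illustrative sketch, not part of the original session): the dim()/t()
# reshape above works because scan() returns a plain vector and R fills matrices
# column by column.  The same 200000 x 8 layout could be produced in one call:
# simdata = matrix(scan("Study2.1.Data.txt"), nrow = 20*10000, ncol = 8, byrow = TRUE)
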
> # Call the estimate close enough if it's within 0.05 of the target.
> CloseEnough = numeric(length(AbsError)); CloseEnough[AbsError<0.05] = 1
>
>
> ######################### Sample sizes #############################
> kounts = table(Method,Nstudies, PopMeanPower)
> cat("\n Sample sizes \n\n"); print(kounts)

 Sample sizes 

, , PopMeanPower = 0.05

          Nstudies
Method        15    25    50   100   250
  Pcurve   10000 10000 10000 10000 10000
  Puniform 10000 10000 10000 10000 10000
  MaxLike  10000 10000 10000 10000 10000
  Zcurve   10000 10000 10000 10000 10000

, , PopMeanPower = 0.25

          Nstudies
Method        15    25    50   100   250
  Pcurve   10000 10000 10000 10000 10000
  Puniform 10000 10000 10000 10000 10000
  MaxLike  10000 10000 10000 10000 10000
  Zcurve   10000 10000 10000 10000 10000

, , PopMeanPower = 0.5

          Nstudies
Method        15    25    50   100   250
  Pcurve   10000 10000 10000 10000 10000
  Puniform 10000 10000 10000 10000 10000
  MaxLike  10000 10000 10000 10000 10000
  Zcurve   10000 10000 10000 10000 10000

, , PopMeanPower = 0.75

          Nstudies
Method        15    25    50   100   250
  Pcurve   10000 10000 10000 10000 10000
  Puniform 10000 10000 10000 10000 10000
  MaxLike  10000 10000 10000 10000 10000
  Zcurve   10000 10000 10000 10000 10000

>
> ###################### Average Performance ######################
> estframe = aggregate(Estimate,by=list(Method,Nstudies, PopMeanPower),FUN=mean, na.rm=T)
> meanest = estframe$x; dim(meanest) = c(4,5,4) # Goes down col1 first
> MeanEstimatedPower = kounts
> for(j in 1:4) MeanEstimatedPower[,,j] = meanest[,,j]
> cat("\n Mean Estimated Power \n\n")

 Mean Estimated Power 

> print(round(MeanEstimatedPower,3))
, , PopMeanPower = 0.05

          Nstudies
Method        15    25    50   100   250
  Pcurve   0.083 0.073 0.064 0.059 0.055
  Puniform 0.076 0.067 0.061 0.058 0.054
  MaxLike  0.076 0.067 0.061 0.057 0.054
  Zcurve   0.086 0.071 0.058 0.049 0.040

, , PopMeanPower = 0.25

          Nstudies
Method        15    25    50   100   250
  Pcurve   0.269 0.261 0.256 0.253 0.251
  Puniform 0.256 0.253 0.252 0.251 0.251
  MaxLike  0.260 0.255 0.253 0.251 0.251
  Zcurve   0.314 0.305 0.293 0.280 0.268

, , PopMeanPower = 0.5

          Nstudies
Method        15    25    50   100   250
  Pcurve   0.484 0.491 0.496 0.497 0.499
  Puniform 0.473 0.485 0.493 0.496 0.499
  MaxLike  0.479 0.489 0.495 0.497 0.499
  Zcurve   0.513 0.516 0.513 0.508 0.502

, , PopMeanPower = 0.75

          Nstudies
Method        15    25    50   100   250
  Pcurve   0.728 0.736 0.742 0.747 0.749
  Puniform 0.721 0.732 0.740 0.746 0.748
  MaxLike  0.728 0.736 0.742 0.747 0.749
  Zcurve   0.704 0.712 0.717 0.723 0.728

>
> ########################## Standard deviations #########################
> vframe = aggregate(Estimate,by=list(Method,Nstudies, PopMeanPower),FUN=var, na.rm=T)
> vest = vframe$x; dim(vest) = c(4,5,4)
> SDestimatedPower = kounts
> for(j in 1:4) SDestimatedPower[,,j] = sqrt(vest[,,j])
> cat("\n Standard deviations of estimated power \n\n")

 Standard deviations of estimated power 

> print(round(SDestimatedPower,3))
, , PopMeanPower = 0.05

          Nstudies
Method        15    25    50   100   250
  Pcurve   0.059 0.039 0.024 0.015 0.007
  Puniform 0.050 0.032 0.019 0.012 0.006
  MaxLike  0.050 0.033 0.020 0.012 0.006
  Zcurve   0.088 0.065 0.044 0.031 0.019

, , PopMeanPower = 0.25

          Nstudies
Method        15    25    50   100   250
  Pcurve   0.156 0.128 0.095 0.069 0.046
  Puniform 0.147 0.121 0.089 0.065 0.042
  MaxLike  0.146 0.120 0.087 0.064 0.042
  Zcurve   0.155 0.127 0.093 0.068 0.045

, , PopMeanPower = 0.5

          Nstudies
Method        15    25    50   100   250
  Pcurve   0.175 0.139 0.102 0.073 0.046
  Puniform 0.170 0.132 0.097 0.070 0.044
  MaxLike  0.166 0.130 0.095 0.068 0.043
  Zcurve   0.151 0.121 0.091 0.068 0.045

, , PopMeanPower = 0.75

          Nstudies
Method        15    25    50   100   250
  Pcurve   0.128 0.098 0.069 0.048 0.030
  Puniform 0.126 0.097 0.067 0.047 0.029
  MaxLike  0.121 0.093 0.065 0.045 0.028
  Zcurve   0.105 0.084 0.064 0.048 0.033
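
# Aside (illustrative sketch, not part of the original session): each of these
# aggregate()/dim() blocks builds a Method by Nstudies by PopMeanPower array.
# tapply() would give the same 4 x 5 x 4 array in one step, e.g. for the means:
# tapply(Estimate, list(Method, Nstudies, PopMeanPower), mean, na.rm=TRUE)
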
>
> ########################## Sample mean true power #########################
> TPframe = aggregate(SampleMeanPower,by=list(Method,Nstudies, PopMeanPower),FUN=mean, na.rm=T)
> tp = TPframe$x; dim(tp) = c(4,5,4)
> TPOW = kounts
> for(j in 1:4) TPOW[,,j] = tp[,,j]
> cat("\n Sample mean true power \n\n")

 Sample mean true power 

> print(round(TPOW,3))
, , PopMeanPower = 0.05

          Nstudies
Method       15   25   50  100  250
  Pcurve   0.05 0.05 0.05 0.05 0.05
  Puniform 0.05 0.05 0.05 0.05 0.05
  MaxLike  0.05 0.05 0.05 0.05 0.05
  Zcurve   0.05 0.05 0.05 0.05 0.05

, , PopMeanPower = 0.25

          Nstudies
Method       15   25   50  100  250
  Pcurve   0.25 0.25 0.25 0.25 0.25
  Puniform 0.25 0.25 0.25 0.25 0.25
  MaxLike  0.25 0.25 0.25 0.25 0.25
  Zcurve   0.25 0.25 0.25 0.25 0.25

, , PopMeanPower = 0.5

          Nstudies
Method       15   25   50  100  250
  Pcurve   0.50 0.50 0.50 0.50 0.50
  Puniform 0.50 0.50 0.50 0.50 0.50
  MaxLike  0.50 0.50 0.50 0.50 0.50
  Zcurve   0.50 0.50 0.50 0.50 0.50

, , PopMeanPower = 0.75

          Nstudies
Method       15   25   50  100  250
  Pcurve   0.75 0.75 0.75 0.75 0.75
  Puniform 0.75 0.75 0.75 0.75 0.75
  MaxLike  0.75 0.75 0.75 0.75 0.75
  Zcurve   0.75 0.75 0.75 0.75 0.75

>
> ###################### Bias ######################
> biasframe = aggregate(Bias,by=list(Method,Nstudies, PopMeanPower),FUN=mean, na.rm=T)
> meanbias = biasframe$x; dim(meanbias) = c(4,5,4) # Goes down col1 first
> MeanBias = kounts
> for(j in 1:4) MeanBias[,,j] = meanbias[,,j]
> cat("\n Mean Bias \n\n")

 Mean Bias 

> print(round(MeanBias,3))
, , PopMeanPower = 0.05

          Nstudies
Method        15    25    50    100    250
  Pcurve   0.033 0.023 0.014  0.009  0.005
  Puniform 0.026 0.017 0.011  0.008  0.004
  MaxLike  0.026 0.017 0.011  0.007  0.004
  Zcurve   0.036 0.021 0.008 -0.001 -0.010

, , PopMeanPower = 0.25

          Nstudies
Method        15    25    50   100   250
  Pcurve   0.019 0.011 0.006 0.003 0.001
  Puniform 0.006 0.003 0.002 0.001 0.001
  MaxLike  0.010 0.005 0.003 0.001 0.001
  Zcurve   0.064 0.055 0.043 0.030 0.018

, , PopMeanPower = 0.5

          Nstudies
Method         15     25     50    100    250
  Pcurve   -0.016 -0.009 -0.004 -0.003 -0.001
  Puniform -0.027 -0.015 -0.007 -0.004 -0.001
  MaxLike  -0.021 -0.011 -0.005 -0.003 -0.001
  Zcurve    0.013  0.016  0.013  0.008  0.002

, , PopMeanPower = 0.75

          Nstudies
Method         15     25     50    100    250
  Pcurve   -0.022 -0.014 -0.008 -0.003 -0.001
  Puniform -0.029 -0.018 -0.010 -0.004 -0.002
  MaxLike  -0.022 -0.014 -0.008 -0.003 -0.001
  Zcurve   -0.046 -0.038 -0.033 -0.027 -0.022
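
# Aside (worked example, not part of the original session): each entry in the
# Z-test table below is a one-sample z statistic for H0: E(estimated power) = true power,
# Z = sqrt(10000) * (mean estimate - true power) / SD, with 10000 replications per cell.
# For Pcurve at Nstudies = 15 and PopMeanPower = 0.25, using the rounded values above:
# 100 * 0.019 / 0.156   # = 12.18, close to the 12.13 computed from unrounded values
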
>
> ########################## Z-tests for bias #########################
> # Calculate Z
> ZforBias = kounts
> for(j in 1:4) ZforBias[,,j] =
+     sqrt(kounts[1,1,1])*(MeanEstimatedPower[,,j] - TPOW[,,j])/SDestimatedPower[,,j]
> cat("\n Z-tests for bias \n",
+     " Any Z with |Z|>4.37 is significant.\n\n")

 Z-tests for bias 
  Any Z with |Z|>4.37 is significant.

> print(round(ZforBias,2))
, , PopMeanPower = 0.05

          Nstudies
Method        15    25    50    100    250
  Pcurve   56.09 57.19 59.46  63.43  69.66
  Puniform 51.76 54.13 57.61  63.33  69.37
  MaxLike  52.11 53.48 55.69  59.77  61.85
  Zcurve   40.88 32.87 18.41  -3.87 -56.26

, , PopMeanPower = 0.25

          Nstudies
Method        15    25    50   100   250
  Pcurve   12.13  8.33  6.55  3.89  3.24
  Puniform  3.76  2.29  2.32  0.92  1.77
  MaxLike   6.57  4.35  3.81  2.02  2.51
  Zcurve   41.56 43.18 45.95 43.69 38.59

, , PopMeanPower = 0.5

          Nstudies
Method         15     25    50    100   250
  Pcurve    -9.35  -6.20 -3.70  -3.82 -1.65
  Puniform -15.81 -11.26 -7.23  -6.25 -3.35
  MaxLike  -12.32  -8.28 -5.24  -4.88 -2.50
  Zcurve     8.60  13.64 14.18  12.00  4.95

, , PopMeanPower = 0.75

          Nstudies
Method         15     25     50    100    250
  Pcurve   -17.38 -14.05 -11.56  -7.09  -3.54
  Puniform -23.00 -18.73 -14.59  -9.21  -5.33
  MaxLike  -18.61 -15.12 -12.07  -7.38  -3.79
  Zcurve   -43.32 -45.35 -51.22 -56.05 -67.36

>
> # Protect the 80 tests in this table with Bonferroni
> # Two-tailed at the 0.001 significance level
> # 0.0005/80 = 0.00000625, and qnorm(0.00000625) = -4.36868
> # So any Z with |Z|>4.37 is significant.
>
> ###################### Mean Absolute Error ######################
> # Take a look at marginals to see roughly who won overall
> cat(" Marginal means for method \n")
 Marginal means for method 
> print(aggregate(round(100*AbsError,2),by=list(Method),FUN=mean, na.rm=T))
   Group.1        x
1   Pcurve 6.033297
2 Puniform 5.728344
3  MaxLike 5.584147
4   Zcurve 6.442513
>
> errframe = aggregate(AbsError,by=list(Method,Nstudies, PopMeanPower),FUN=mean, na.rm=T)
> meanerr = 100*errframe$x; dim(meanerr) = c(4,5,4)
> MeanAbsoluteError = kounts
> for(j in 1:4) MeanAbsoluteError[,,j] = meanerr[,,j]
> cat("\n Mean Absolute Error of Prediction \n\n")

 Mean Absolute Error of Prediction 

> print(round(MeanAbsoluteError,2))
, , PopMeanPower = 0.05

          Nstudies
Method       15   25   50  100  250
  Pcurve   3.32 2.25 1.41 0.93 0.52
  Puniform 2.57 1.75 1.11 0.76 0.43
  MaxLike  2.59 1.74 1.09 0.73 0.39
  Zcurve   6.53 4.90 3.38 2.44 1.79

, , PopMeanPower = 0.25

          Nstudies
Method        15    25   50  100  250
  Pcurve   12.94 10.49 7.69 5.53 3.64
  Puniform 12.11  9.87 7.17 5.18 3.38
  MaxLike  12.07  9.76 7.05 5.10 3.32
  Zcurve   13.55 11.09 8.21 5.96 3.87

, , PopMeanPower = 0.5

          Nstudies
Method        15    25   50  100  250
  Pcurve   14.32 11.20 8.14 5.80 3.67
  Puniform 13.93 10.68 7.80 5.56 3.51
  MaxLike  13.61 10.41 7.60 5.39 3.41
  Zcurve   12.42  9.91 7.44 5.48 3.59

, , PopMeanPower = 0.75

          Nstudies
Method       15   25   50  100  250
  Pcurve   9.77 7.59 5.38 3.72 2.35
  Puniform 9.79 7.59 5.34 3.71 2.32
  MaxLike  9.33 7.23 5.11 3.53 2.21
  Zcurve   8.34 6.96 5.56 4.30 3.13
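
# Aside (reading note, not part of the original session): the absolute errors
# above were multiplied by 100, so the entries are percentage points of power.
# Each proportion in the next table is based on 10000 replications per cell,
# so its standard error is at most
# sqrt(0.5*0.5/10000)   # = 0.005
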
> ###################### Is estimate close enough? ######################
>
> closeframe =
+     aggregate(CloseEnough,by=list(Method,Nstudies, PopMeanPower),FUN=mean, na.rm=T)
> klose = closeframe$x; dim(klose) = c(4,5,4)
> ProportionWithin.05 = kounts
> for(j in 1:4) ProportionWithin.05[,,j] = klose[,,j]
> cat("\n Proportion of estimates within 0.05 of target \n\n")

 Proportion of estimates within 0.05 of target 

> print(round(ProportionWithin.05,3))
, , PopMeanPower = 0.05

          Nstudies
Method        15    25    50   100   250
  Pcurve   0.780 0.843 0.918 0.972 0.999
  Puniform 0.827 0.882 0.948 0.985 1.000
  MaxLike  0.821 0.879 0.948 0.986 1.000
  Zcurve   0.670 0.736 0.838 0.930 0.993

, , PopMeanPower = 0.25

          Nstudies
Method        15    25    50   100   250
  Pcurve   0.214 0.272 0.381 0.526 0.728
  Puniform 0.227 0.288 0.412 0.549 0.761
  MaxLike  0.225 0.294 0.420 0.556 0.773
  Zcurve   0.223 0.279 0.369 0.495 0.702

, , PopMeanPower = 0.5

          Nstudies
Method        15    25    50   100   250
  Pcurve   0.209 0.269 0.376 0.514 0.724
  Puniform 0.222 0.286 0.387 0.525 0.746
  MaxLike  0.220 0.297 0.397 0.540 0.758
  Zcurve   0.231 0.292 0.398 0.529 0.734

, , PopMeanPower = 0.75

          Nstudies
Method        15    25    50   100   250
  Pcurve   0.335 0.409 0.553 0.723 0.914
  Puniform 0.332 0.412 0.552 0.720 0.916
  MaxLike  0.344 0.429 0.571 0.744 0.929
  Zcurve   0.422 0.464 0.550 0.655 0.802

>
>
>
> ##################################################################################
> # Need to do all 6 pairwise matched t-tests for mean absolute error, for each
> # combination of Nstudies and PopMeanPower,
> # with a Bonferroni correction for 120 tests in the table.
> # alpha = 0.001; a = alpha/120; critZ = qnorm(1-a/2); critZ # 4.456436
> # Start with simdata, multivariate format. It's best to re-start R.
> ##################################################################################
>
> rm(list=ls())
> options(scipen=999) # To avoid scientific notation
> alpha = 0.001; a = alpha/120; critZ = qnorm(1-a/2)
> # read.table chokes on large data sets
> simdata = scan("Study2.1.Data.txt") # .1=1df, .3=3df, .5=5df
Read 1600000 items
> nrows = 20*10000; ncols = 8
> dim(simdata) = c(ncols,nrows) # Reversed: R fills matrices by columns, not rows
> simdata = t(simdata)
> colnames(simdata) = c("Simulation", "Nstudies", "PopMeanPower", "SampleMeanPower",
+                       "Pcurve", "Puniform", "MaxLike", "Zcurve")
> simdata = data.frame(simdata)
> attach(simdata)
> ae = as.matrix(simdata[,5:8]) # The estimates
> for(kol in 1:4) ae[,kol] = abs(ae[,kol]-SampleMeanPower) # Now they are absolute errors.
>
> # table(Nstudies, SampleMeanPower) # 5 by 4
> Methods = c("Pcurve","Puniform","MaxLike","Zcurve")
> nstudies = unique(Nstudies); truepower = unique(PopMeanPower)
> nlines = dim(simdata)[1]; index = 1:nlines
>
> pairwise = matrix(0,4,4); rownames(pairwise) = colnames(pairwise) = Methods
> keepscore = pairwise # Count winners in this matrix
>
> # This does not have to be efficient, and it isn't.
>
> cat("\n\n Z-tests for pairwise differences in mean absolute error \n",
+     "Positive value means the row method is less accurate on average. \n",
+     "Critical value Bonferroni protected at the 0.001 level is",critZ,".\n\n")


 Z-tests for pairwise differences in mean absolute error 
 Positive value means the row method is less accurate on average. 
 Critical value Bonferroni protected at the 0.001 level is 4.456436 .

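
# Aside (illustrative sketch, not part of the original session): each Z below is
# a matched-pairs t statistic comparing two methods' absolute errors on the same
# 10000 simulated meta-analyses, so with this many pairs it is effectively a z.
# For one cell of the first block, the test reduces to something like
# pick = which(Nstudies == 15 & PopMeanPower == 0.05)
# t.test(ae[pick,"Pcurve"] - ae[pick,"Puniform"])$statistic   # about 20.5
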
>
> for(i in 1:5) # Looping over Nstudies
+     {
+     for(j in 1:4) # Looping over PopMeanPower
+         {
+         cat("\n Nstudies =",nstudies[i]," and PopMeanPower =",truepower[j],"\n\n")
+         In = (Nstudies==nstudies[i]) * (PopMeanPower==truepower[j])
+         pick = index[In==1]
+         # Fill matrix of pairwise tests in a double loop
+         for(K in 1:4)
+             {
+             for(L in 1:4)
+                 {
+                 differ = (ae[,K]-ae[,L])[pick]
+                 Z = 0
+                 if(K != L) Z = t.test(differ)$statistic
+                 if(K < L) pairwise[K,L] = Z
+                 if(Z < -critZ) keepscore[K,L] = keepscore[K,L]+1
+                 } # Next L
+             } # Next K
+         print(round(pairwise[1:3,2:4],2))
+         } # Next j (PopMeanPower)
+     } # Next i (Nstudies)

 Nstudies = 15  and PopMeanPower = 0.05 

         Puniform MaxLike  Zcurve
Pcurve      20.54   25.44  -83.04
Puniform     0.00   -1.83 -104.32
MaxLike      0.00    0.00 -119.07

 Nstudies = 15  and PopMeanPower = 0.25 

         Puniform MaxLike Zcurve
Pcurve      11.52   16.01  -9.84
Puniform     0.00    1.34 -17.38
MaxLike      0.00    0.00 -21.70

 Nstudies = 15  and PopMeanPower = 0.5 

         Puniform MaxLike Zcurve
Pcurve       4.85   12.19  32.44
Puniform     0.00    9.00  17.69
MaxLike      0.00    0.00  17.98

 Nstudies = 15  and PopMeanPower = 0.75 

         Puniform MaxLike Zcurve
Pcurve      -0.41   10.69  29.74
Puniform     0.00   14.63  19.88
MaxLike      0.00    0.00  17.10

 Nstudies = 25  and PopMeanPower = 0.05 

         Puniform MaxLike  Zcurve
Pcurve       20.5   25.41  -83.70
Puniform      0.0    1.42 -111.44
MaxLike       0.0    0.00 -120.33

 Nstudies = 25  and PopMeanPower = 0.25 

         Puniform MaxLike Zcurve
Pcurve      10.36   16.12 -10.61
Puniform     0.00    4.98 -17.35
MaxLike      0.00    0.00 -22.51

 Nstudies = 25  and PopMeanPower = 0.5 

         Puniform MaxLike Zcurve
Pcurve       7.98   16.88  26.84
Puniform     0.00    9.35  11.01
MaxLike      0.00    0.00   9.13

 Nstudies = 25  and PopMeanPower = 0.75 

         Puniform MaxLike Zcurve
Pcurve      -0.02   11.47  15.40
Puniform     0.00   14.08  10.53
MaxLike      0.00    0.00   5.64

 Nstudies = 50  and PopMeanPower = 0.05 

         Puniform MaxLike  Zcurve
Pcurve      20.24   25.95  -81.13
Puniform     0.00    6.97 -107.19
MaxLike      0.00    0.00 -111.17

 Nstudies = 50  and PopMeanPower = 0.25 

         Puniform MaxLike Zcurve
Pcurve      11.45   18.29 -11.63
Puniform     0.00    6.87 -19.55
MaxLike      0.00    0.00 -25.13

 Nstudies = 50  and PopMeanPower = 0.5 

         Puniform MaxLike Zcurve
Pcurve       7.28   15.84  20.08
Puniform     0.00    8.85   6.69
MaxLike      0.00    0.00   4.00

 Nstudies = 50  and PopMeanPower = 0.75 

         Puniform MaxLike Zcurve
Pcurve       1.16   13.04  -5.25
Puniform     0.00   12.74  -4.66
MaxLike      0.00    0.00 -11.63

 Nstudies = 100  and PopMeanPower = 0.05 

         Puniform MaxLike  Zcurve
Pcurve       19.3   25.98  -77.51
Puniform      0.0   15.20  -99.95
MaxLike       0.0    0.00 -101.39

 Nstudies = 100  and PopMeanPower = 0.25 

         Puniform MaxLike Zcurve
Pcurve      10.77   17.12 -12.63
Puniform     0.00    6.44 -19.87
MaxLike      0.00    0.00 -25.04

 Nstudies = 100  and PopMeanPower = 0.5 

         Puniform MaxLike Zcurve
Pcurve       6.89   16.25  12.44
Puniform     0.00   10.52   2.02
MaxLike      0.00    0.00  -2.88

 Nstudies = 100  and PopMeanPower = 0.75 

         Puniform MaxLike Zcurve
Pcurve       0.55   13.23 -20.74
Puniform     0.00   13.83 -16.51
MaxLike      0.00    0.00 -25.02

 Nstudies = 250  and PopMeanPower = 0.05 

         Puniform MaxLike  Zcurve
Pcurve      19.25   29.82  -84.54
Puniform     0.00   38.10  -99.95
MaxLike      0.00    0.00 -100.99

 Nstudies = 250  and PopMeanPower = 0.25 

         Puniform MaxLike Zcurve
Pcurve      12.58   19.67 -10.17
Puniform     0.00    7.18 -19.95
MaxLike      0.00    0.00 -25.16

 Nstudies = 250  and PopMeanPower = 0.5 

         Puniform MaxLike Zcurve
Pcurve       7.35   16.71   5.25
Puniform     0.00    9.92  -3.17
MaxLike      0.00    0.00  -9.30

 Nstudies = 250  and PopMeanPower = 0.75 

         Puniform MaxLike Zcurve
Pcurve       2.04   16.54 -35.95
Puniform     0.00   14.57 -31.92
MaxLike      0.00    0.00 -40.56

>
> cat("\nTotal number of significant comparisons: \n")

Total number of significant comparisons: 
> print(sum(keepscore))
[1] 108
>
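
# Aside (reading note, not part of the original session): keepscore[K,L] counts
# the Nstudies x PopMeanPower cells in which row method K had significantly
# smaller mean absolute error than column method L (Z < -4.456436).  At most one
# member of a pair can win in a cell, so the maximum possible total is
# 20 * choose(4,2)   # = 120; 108 of the 120 pairwise comparisons were significant
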
> cat("\n\n Number of times row method is significantly more accurate than column method \n\n"); print(addmargins(keepscore,2))


 Number of times row method is significantly more accurate than column method 

         Pcurve Puniform MaxLike Zcurve Sum
Pcurve        0        0       0     13  13
Puniform     15        0       0     13  28
MaxLike      20       17       0     14  51
Zcurve        7        5       4      0  16
>

 Now just the case where H0 is false 
 Positive value means the row method is less accurate on average. 
 Critical value Bonferroni protected at the 0.001 level is still 4.456436 .

> keepscore2 = matrix(0,4,4) # Count winners in this matrix
> rownames(keepscore2) = colnames(keepscore2) = Methods
>
>
> for(i in 1:5) # Looping over Nstudies
+     {
+     for(j in 2:4) # Looping over PopMeanPower
+         {
+         # cat("\n Nstudies =",nstudies[i]," and PopMeanPower =",truepower[j],"\n\n")
+         In = (Nstudies==nstudies[i]) * (PopMeanPower==truepower[j])
+         pick = index[In==1]
+         # Fill matrix of pairwise tests in a double loop
+         for(K in 1:4)
+             {
+             for(L in 1:4)
+                 {
+                 differ = (ae[,K]-ae[,L])[pick]
+                 Z = 0
+                 if(K != L) Z = t.test(differ)$statistic
+                 if(K < L) pairwise[K,L] = Z
+                 if(Z < -critZ) keepscore2[K,L] = keepscore2[K,L]+1
+                 } # Next L
+             } # Next K
+         # print(round(pairwise[1:3,2:4],2))
+         } # Next j (PopMeanPower)
+     } # Next i (Nstudies)
>
> cat("\nTotal number of significant comparisons: \n")

Total number of significant comparisons: 
> print(sum(keepscore2))
[1] 80
>
> cat("\n\n Number of times row method is significantly more accurate than column method out of 15\n\n"); print(addmargins(keepscore2,2))


 Number of times row method is significantly more accurate than column method out of 15

         Pcurve Puniform MaxLike Zcurve Sum
Pcurve        0        0       0      8   8
Puniform     10        0       0      8  18
MaxLike      15       14       0      9  38
Zcurve        7        5       4      0  16
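
# Aside (reading note, not part of the original session): dropping the null case
# PopMeanPower = 0.05 leaves 5 x 3 = 15 cells, hence "out of 15" per pair, and a
# maximum possible total of
# 15 * choose(4,2)   # = 90; 80 of the 90 pairwise comparisons were significant
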