Using interpolation to reduce computing time for analysis of large but
simple data sets with application to design of epidemiological studies
Abstract
One way to investigate the precision of estimates likely to result from planned experiments and planned epidemiological studies is to simulate a large number of possible outcomes and analyse the sets of possible results. Such simulation appears computationally expensive for some multi-stage designs, so the choice of design is instead based on theoretical derivation of expected information. This paper shows that for some types of studies the analysis of large numbers of simulated outcomes can be achieved more rapidly by making use of interpolation.
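The idea can be illustrated with a minimal sketch (the statistic, the toy "analysis", and all numbers here are hypothetical, not the paper's actual models): when each simulated dataset enters the analysis only through a low-dimensional summary statistic, the expensive per-dataset fit can be evaluated once on a coarse grid of summary values and then interpolated for every simulated outcome.

```python
import numpy as np

rng = np.random.default_rng(42)

def expensive_estimate(s):
    """Stand-in for a costly per-dataset analysis: solve
    theta + exp(theta) = s for theta by bisection.  In a real
    study this would be an iterative model fit."""
    lo, hi = -10.0, 10.0
    for _ in range(60):                 # bisect to high precision
        mid = 0.5 * (lo + hi)
        if mid + np.exp(mid) < s:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# 100,000 simulated datasets, each summarised by one statistic
stats = rng.normal(loc=2.0, scale=0.3, size=100_000)

# Interpolation route: run the expensive analysis only on a
# coarse grid of statistic values, then interpolate linearly
# to all simulated outcomes -- ~500x fewer expensive calls here.
grid = np.linspace(stats.min(), stats.max(), 201)
grid_est = np.array([expensive_estimate(s) for s in grid])
interp_est = np.interp(stats, grid, grid_est)

# Spot-check against the exact analysis on a small subsample
exact = np.array([expensive_estimate(s) for s in stats[:200]])
max_err = np.max(np.abs(interp_est[:200] - exact))
```

Because the mapping from summary statistic to estimate is smooth, linear interpolation on a few hundred grid points reproduces the per-dataset analysis to negligible error while doing orders of magnitude less work.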
