Data-generating models under which the random forest algorithm performs badly

Examples are given of data-generating models under which some versions of the random forest algorithm may fail to be consistent, or may converge extremely slowly to the optimal predictor. The evidence for these properties rests on mostly intuitive arguments, similar to those used earlier with simpler examples, and on numerical experiments. Although one can always choose a model under which random forests perform very badly, it is shown that, when substantial improvement is possible, simple methods based on statistics of 'variable use' and 'variable importance' may indicate a better predictor built on a sort of mixture of random forests; thus, by acknowledging the difficulties posed by some models, one may improve the performance of random forests in some applications.
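As an illustration (not taken from the paper), a classic kind of data-generating model that is hard for greedy, axis-aligned trees is a pure two-way interaction: no single variable looks informative to a one-split-at-a-time search, so marginal 'variable importance' statistics are uninformative even though two variables jointly determine the response. The sketch below, with an assumed XOR-like target and a hypothetical `best_split_gain` helper, measures the best single-split gain per feature in plain NumPy:

```python
import numpy as np

# Assumed model: y is a pure two-way interaction of features 0 and 1,
# plus three irrelevant features. Greedy axis-aligned splitting sees
# almost no marginal signal in any single coordinate.
rng = np.random.default_rng(0)
n = 4000
X = rng.uniform(-1.0, 1.0, size=(n, 5))   # only features 0 and 1 matter
y = np.sign(X[:, 0] * X[:, 1])            # "XOR-like" interaction target

def best_split_gain(x, y):
    """Largest reduction in total squared error achievable by a single
    axis-aligned split on x, normalised by the sample size."""
    order = np.argsort(x)
    ys = y[order]
    m = len(ys)
    left_sums = np.cumsum(ys)[:-1]        # running sums for left child
    left_n = np.arange(1, m)
    total = ys.sum()
    right_sums = total - left_sums
    right_n = m - left_n
    # SSE improvement = sum over children of (child_sum^2 / child_n)
    # minus the unsplit term total^2 / m
    gains = left_sums**2 / left_n + right_sums**2 / right_n - total**2 / m
    return gains.max() / m

marginal = [best_split_gain(X[:, j], y) for j in range(5)]
joint = best_split_gain(X[:, 0] * X[:, 1], y)  # split on the interaction itself

print([round(g, 4) for g in marginal])  # all near 0: no single feature helps
print(round(joint, 4))                  # near 1: the interaction carries the signal
```

Under this model every marginal gain is close to zero while a split on the product coordinate recovers essentially all of the variance, which is one intuition for why greedy forests can be extremely slow to converge on such targets.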