Bayesian random forests for high-dimensional classification and regression with complete and incomplete microarray data

Bibliographic Details
Main Author: Oyebayo, Olaniran Ridwan
Format: Thesis
Language: English
Published: 2018
Online Access: http://eprints.uthm.edu.my/326/1/24p%20OLANIRAN%20RIDWAN%20OYEBAYO.pdf
http://eprints.uthm.edu.my/326/2/OLANIRAN%20RIDWAN%20OYEBAYO%20COPYRIGHT%20DECLARATION.pdf
http://eprints.uthm.edu.my/326/3/OLANIRAN%20RIDWAN%20OYEBAYO%20WATERMARK.pdf
Description
Summary: Random Forests (RF) are ensemble-of-trees methods widely used for prediction, interpretation, and variable selection. Their wide acceptance can be attributed to their robustness to the high-dimensionality problem. However, when high-dimensional data are sparse, RF procedures are inefficient. This thesis therefore aims to improve the efficiency of RF by providing a probabilistic framework based on Bayesian reasoning. The modification addresses two main modelling problems: high dimensionality and missing data. These problems were studied extensively within the scope of classification (binary and multi-class) and regression (linear and survival). The new procedure, called Bayesian Random Forest (BRF), focuses on modifying terminal-node parameter estimation and the selection of random variable subsets for splitting. The BRF algorithm combines the strengths of random-subset and greedy selection procedures to create new maximal ordered variable-relevance weights. These weights are in turn used to develop new impurity functions for selecting optimal splits for each tree in the forest. BRF works mainly because the maximal weights are computed with a data-driven procedure called the bootstrap prior, which was shown to satisfy the uniformly-minimum-variance property under mild regularity conditions. In addition, BRF ensures that important variables are selected at each subset-selection step, reducing false signals and ultimately improving model accuracy. As a further extension, the missing-covariates problem was handled by pre-imputing the variables with Multivariate Imputation by Chained Equations (MICE) before building forests. Performance was assessed using simulated data and eighteen real-life classification and regression microarray cancer datasets. Empirical results from the data analysis established appreciable superiority over RF and several other competing methods. Keywords: Random Forest, Bayesian Inference, Classification, Regression, Missing Data.
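The missing-data step described in the abstract (chained-equation pre-imputation followed by forest building) can be sketched with off-the-shelf tools. The snippet below is an illustrative pipeline only, not the BRF implementation from the thesis: it uses scikit-learn's `IterativeImputer` (a MICE-style chained-equation imputer) together with an ordinary random forest, on synthetic data with entries knocked out at random.

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic complete data: 200 samples, 10 covariates; the label
# depends only on the first two covariates.
X = rng.normal(size=(200, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Knock out ~10% of entries to simulate missing covariates.
mask = rng.random(X.shape) < 0.10
X_missing = X.copy()
X_missing[mask] = np.nan

# Step 1: MICE-style chained-equation imputation of the covariates.
imputer = IterativeImputer(max_iter=10, random_state=0)
X_complete = imputer.fit_transform(X_missing)

# Step 2: build the forest on the completed data.
rf = RandomForestClassifier(n_estimators=100, random_state=0)
rf.fit(X_complete, y)
print(rf.score(X_complete, y))
```

In the thesis the second step is the modified BRF procedure rather than a plain random forest; the point here is only the two-stage impute-then-ensemble structure.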