
The Bayes Classifier Based on the Normal Mixture Model and Its Application


ZHANG Jing 1, YUAN Min 2, LIU Yanyan 2
1. School of Statistics and Mathematics, Zhongnan University of Economics and Law, Wuhan 430073, China;
2. School of Mathematics and Statistics, Wuhan University, Wuhan 430072, China

Abstract: This paper studies the Bayes classification method based on the normal mixture model. The Bayes classifier assigns an observation to the class with the largest posterior probability, and evaluating these posteriors requires estimating the relevant class-conditional distributions. For continuous data whose observations arise from several mixed subpopulations, a single distribution is inadequate; a mixture model is a better choice, and its parameters can be obtained by the EM algorithm. Simulation studies show that the Bayes classification method based on the normal mixture model is feasible and effective. When there are many features, different features affect the classification differently: we build a base classifier on each feature with the normal-mixture Bayes method and then turn to ensemble learning, using the AdaBoost algorithm to assign a weight to each base classifier and combining them linearly into the final classifier. Validation on the real Wine Data Set from the UCI repository shows that coupling the proposed method with ensemble learning yields highly accurate and stable classification.
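For concreteness, the sketch below (a minimal illustration, not the authors' implementation) follows the pipeline the abstract describes: a one-feature Bayes classifier assigns x to the class c maximizing π_c f_c(x), where each class-conditional density f_c is a normal mixture fitted by the EM algorithm (here via scikit-learn's GaussianMixture), and a SAMME-style AdaBoost loop then weights one such base classifier per feature and combines them linearly, evaluated on the UCI Wine Data Set. GaussianMixture does not accept sample weights, so each boosting round retrains on a weighted bootstrap resample, and the mixture order n_components=2 is an illustrative assumption; neither detail is taken from the paper.

```python
# A minimal sketch, assuming scikit-learn; names like GMMBayes1D,
# adaboost_gmm and predict_ensemble are illustrative, not from the paper.
import numpy as np
from sklearn.datasets import load_wine
from sklearn.mixture import GaussianMixture
from sklearn.model_selection import train_test_split


class GMMBayes1D:
    """Bayes classifier on a single feature: each class-conditional density
    p(x | Y = c) is a normal mixture fitted by the EM algorithm."""

    def __init__(self, feature, n_components=2):
        self.feature = feature              # column index of the feature used
        self.n_components = n_components    # mixture order (assumed, not tuned)

    def fit(self, X, y):
        x = X[:, [self.feature]]
        self.classes_ = np.unique(y)
        self.priors_, self.gmms_ = [], []
        for c in self.classes_:
            xc = x[y == c]
            self.priors_.append(len(xc) / len(x))   # empirical prior P(Y = c)
            self.gmms_.append(
                GaussianMixture(n_components=self.n_components,
                                random_state=0).fit(xc))
        return self

    def predict(self, X):
        x = X[:, [self.feature]]
        # log posterior up to a constant: log prior + log mixture density
        scores = np.column_stack([np.log(p) + g.score_samples(x)
                                  for p, g in zip(self.priors_, self.gmms_)])
        return self.classes_[np.argmax(scores, axis=1)]


def adaboost_gmm(X, y, n_components=2, seed=0):
    """SAMME-style AdaBoost over one GMM-Bayes base classifier per feature.
    GaussianMixture takes no sample-weight argument, so each round trains
    on a weighted bootstrap resample instead."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    n_classes = len(np.unique(y))
    w = np.full(n, 1.0 / n)                 # uniform initial sample weights
    learners, alphas = [], []
    for j in range(p):                      # one base classifier per feature
        idx = rng.choice(n, size=n, p=w)
        clf = GMMBayes1D(j, n_components).fit(X[idx], y[idx])
        miss = clf.predict(X) != y
        err = np.clip(w[miss].sum(), 1e-10, 1 - 1e-10)  # weighted error
        alpha = np.log((1 - err) / err) + np.log(n_classes - 1)
        if alpha <= 0:                      # no better than random: drop it
            continue
        w *= np.exp(alpha * miss)           # up-weight misclassified points
        w /= w.sum()
        learners.append(clf)
        alphas.append(alpha)
    return learners, alphas


def predict_ensemble(learners, alphas, X, classes):
    """Final classifier: alpha-weighted vote, i.e. a linear combination."""
    votes = np.zeros((len(X), len(classes)))
    for clf, a in zip(learners, alphas):
        pred = clf.predict(X)
        for k, c in enumerate(classes):
            votes[:, k] += a * (pred == c)
    return classes[np.argmax(votes, axis=1)]


X, y = load_wine(return_X_y=True)           # the UCI Wine Data Set
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=1, stratify=y)
learners, alphas = adaboost_gmm(X_tr, y_tr)
pred = predict_ensemble(learners, alphas, X_te, np.unique(y_tr))
print("test accuracy:", (pred == y_te).mean())
```

Skipping any base classifier whose weighted error is no better than random guessing (alpha ≤ 0) follows the usual multi-class SAMME convention; the paper's exact weighting scheme may differ.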
Received: 2017-05-10
PACS: O212.7
Funding: Supported by the National Natural Science Foundation of China (General Program, No. 11971362; Young Scientists Fund, No. 11901581) and the Fundamental Research Funds for the Central Universities of Zhongnan University of Economics and Law (No. 2722020JCG064).

Cite this article:
ZHANG Jing, YUAN Min, LIU Yanyan. The Bayes Classifier Based on the Normal Mixture Model and Its Application. Acta Mathematicae Applicatae Sinica, 2020, 43(4): 742-755.
Article link:
http://123.57.41.99/jweb_yysxxb/CN/Y2020/V43/I4/742



Related articles:
[1] LI Mingyuan, ZHANG Yunjun, ZHOU Xiaohua. Analysis of COVID-19 Transmission Patterns Based on the EM Algorithm and Epidemiological History Data. Acta Mathematicae Applicatae Sinica, 2020, 43(2): 427-439.
[2] XIE Minyu, WU Ming, XIONG Ming, NING Jianhui. Estimation of the Population Mean and Variance under Directive Sampling and Its Application. Acta Mathematicae Applicatae Sinica (English Series), 2010, 33(2): 297-307.



Full-text PDF download (441 KB):
http://123.57.41.99/jweb_yysxxb/CN/article/downloadArticleFile.do?attachType=PDF&id=14799