

基于双向门限递归单元神经网络的维吾尔语形态切分
哈里旦木·阿布都克里木, 程勇, 刘洋, 孙茂松
清华大学 计算机科学与技术系, 智能技术与系统国家重点实验室, 清华信息科学与技术国家实验室(筹), 北京 100084
Uyghur morphological segmentation with bidirectional GRU neural networks
ABUDUKELIMU Halidanmu, CHENG Yong, LIU Yang, SUN Maosong
State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology, Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China

摘要: 以维吾尔语为代表的低资源、形态丰富语言的信息处理对于满足“一带一路”语言互通的战略需求具有重要意义。这类语言通过组合语素来表示句法和语义关系,因而给语言处理带来严重的数据稀疏问题。该文提出基于双向门限递归单元神经网络的维吾尔语形态切分方法,将维吾尔词自动切分为语素序列,从而缓解数据稀疏问题。双向门限递归单元神经网络能够充分利用双向上下文信息进行切分消歧,并通过门限递归单元有效处理长距离依赖。实验结果表明,该方法相比主流统计方法和单向门限递归单元神经网络获得了显著的性能提升。该方法具有良好的语言无关性,能够用于处理更多的形态丰富语言。
关键词 双向门限递归单元,神经网络,维吾尔语,形态切分
Abstract: Information processing of low-resource, morphologically rich languages such as Uyghur is critical for addressing the language barrier faced by the One Belt and One Road (B&R) program in China. In such languages, individual words encode rich grammatical and semantic information by concatenating morphemes to a root form, which leads to severe data sparsity in language processing. This paper introduces a Uyghur morphological segmentation approach based on bidirectional gated recurrent unit (GRU) neural networks, which divides Uyghur words into sequences of morphemes to alleviate this sparsity. The bidirectional GRU exploits context from both directions to resolve segmentation ambiguities and uses its gating mechanism to model long-distance dependencies. Experiments show that this approach significantly outperforms conditional random fields and unidirectional GRUs. The approach is language-independent and can be applied to other morphologically rich languages.
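For reference, the "gating mechanism" mentioned above is the standard gated recurrent unit update (see [9]); omitting bias terms, one common formulation computes, at each input position t,

\begin{aligned}
z_t &= \sigma(W_z x_t + U_z h_{t-1}) \\
r_t &= \sigma(W_r x_t + U_r h_{t-1}) \\
\tilde{h}_t &= \tanh\bigl(W x_t + U (r_t \odot h_{t-1})\bigr) \\
h_t &= (1 - z_t) \odot h_{t-1} + z_t \odot \tilde{h}_t
\end{aligned}

where the update gate z_t and the reset gate r_t control how much of the previous hidden state h_{t-1} is carried forward, which is what allows the network to retain information across long character sequences. A bidirectional GRU runs one such recurrence left-to-right and another right-to-left over the word and concatenates the two hidden states at each position.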
Key words: bidirectional gated recurrent unit; neural network; Uyghur; morphological segmentation
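One common way to realize such a segmenter is as character-level sequence labeling. The sketch below is illustrative only and is not the authors' implementation; the B/M/E/S boundary-tagging scheme, the use of PyTorch, and all hyperparameter values are assumptions made for the example.

# Illustrative sketch only (not the paper's released code). Assumes PyTorch and a
# B/M/E/S tag scheme: each character is tagged as Begin/Middle/End of a morpheme
# or as a Single-character morpheme, and the word is split at E/S positions.
import torch
import torch.nn as nn

class BiGRUSegmenter(nn.Module):
    def __init__(self, vocab_size, emb_dim=100, hidden_dim=128, num_tags=4):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # The bidirectional GRU reads the character sequence in both directions,
        # so every position is scored with both left and right context.
        self.gru = nn.GRU(emb_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden_dim, num_tags)  # per-character B/M/E/S scores

    def forward(self, char_ids):                    # char_ids: (batch, seq_len)
        hidden, _ = self.gru(self.embed(char_ids))  # (batch, seq_len, 2*hidden_dim)
        return self.out(hidden)                     # (batch, seq_len, num_tags)

def split_word(chars, tag_ids, tags=("B", "M", "E", "S")):
    """Recover morphemes of a word from predicted boundary tags."""
    morphemes, current = [], ""
    for ch, t in zip(chars, tag_ids):
        current += ch
        if tags[t] in ("E", "S"):  # a morpheme ends at this character
            morphemes.append(current)
            current = ""
    if current:                    # flush a trailing, unterminated morpheme
        morphemes.append(current)
    return morphemes

Training such a tagger would minimize a per-character cross-entropy loss against boundary tags derived from a segmented corpus; the model, features, and tag set actually used in the paper may differ.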
Received: 2016-07-08    Published: 2017-01-20
CLC number: TP391.2
Corresponding author: LIU Yang, associate professor, E-mail: liuyang2011@tsinghua.edu.cn
Cite this article as:
哈里旦木·阿布都克里木, 程勇, 刘洋, 孙茂松. 基于双向门限递归单元神经网络的维吾尔语形态切分[J]. 清华大学学报(自然科学版), 2017, 57(1): 1-6.
ABUDUKELIMU Halidanmu, CHENG Yong, LIU Yang, SUN Maosong. Uyghur morphological segmentation with bidirectional GRU neural networks[J]. Journal of Tsinghua University (Science and Technology), 2017, 57(1): 1-6.
Link to this article:
http://jst.tsinghuajournals.com/CN/10.16511/j.cnki.qhdxxb.2017.21.001
http://jst.tsinghuajournals.com/CN/Y2017/V57/I1/1


Figures and tables:
1 Examples of Uyghur words
1 Bidirectional GRU neural network for Uyghur morphological segmentation
2 Gated recurrent unit
2 Uyghur morphological segmentation corpus
3 Frequencies and proportions of Uyghur words
4 Frequencies and proportions of Uyghur morphemes
5 Effect of vector dimensionality on BiGRU segmentation performance
6 Comparative experimental results
7 Examples of Uyghur morphological segmentation


References:
[1] Orhun M, Tantuğ A C, Adalı E. Rule based analysis of the Uyghur nouns[J]. International Journal on Asian Language Processing, 2009, 19(1):33-43.
[2] Virpioja S, Smit P, Grönroos S A, et al. Morfessor 2.0: Python Implementation and Extensions for Morfessor Baseline, ISBN 978-952-60-5501-5[R]. Helsinki:Aalto University, 2013.
[3] Lafferty J, McCallum A, Pereira F. Conditional random fields:probabilistic models for segmenting and labeling sequence data[C]//Proceedings of the 18th International Conference on Machine Learning. Williamstown, MA, USA:Morgan Kaufmann, 2001:282-289.
[4] Ruokolainen T, Kohonen O, Virpioja S, et al. Supervised morphological segmentation in a low-resource learning setting using conditional random fields[C]//Proceedings of the Seventeenth Conference on Computational Natural Language Learning. Sofia, Bulgaria:Association for Computational Linguistics, 2013:8-9.
[5] Aisha B, SUN Maosong. A statistical method for Uyghur tokenization[C]//International Conference on Natural Language Processing and Knowledge Engineering. Dalian:IEEE, 2009:24-27.
[6] 买热哈巴·艾力, 姜文斌, 王志洋, 等. 维吾尔语词法分析的有向图模型[J]. 软件学报, 2012, 23(12):3115-3129. Aili M, JIANG Wenbin, WANG Zhiyang, et al. Directed graph model of Uyghur morphological analysis[J]. Journal of Software, 2012, 23(12):3115-3129. (in Chinese)
[7] Wumaier A, Tian S. Conditional random fields combined FSM stemming method for Uyghur[C]//International Conference on Computer Science and Information Technology. Beijing:IEEE, 2009:8-11.
[8] Ablimit M, Kawahara T, Pattar A, et al. Stem-affix based Uyghur morphological analyzer[J]. International Journal of Future Generation Communication and Networking, 2016, 9(2):59-72.
[9] Chung J, Gulcehre C, Cho K, et al. Empirical evaluation of gated recurrent neural networks on sequence modeling[Z/OL]. (2014-12-11). https://arxiv.org/abs/1412.3555.
[10] Chen X, Qiu X, Zhu C, et al. Long short-term memory neural networks for Chinese word segmentation[C]//Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Lisbon, Portugal:Association for Computational Linguistics, 2015:17-21.
[11] Yao Y, Huang Z. Bi-directional LSTM recurrent neural network for Chinese word segmentation[Z/OL]. (2016-02-16). http://arxiv.org/abs/1602.04874.
[12] Morita H, Kawahara D, Kurohashi S. Morphological analysis for unsegmented languages using recurrent neural network language model[C]//Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Lisbon, Portugal:Association for Computational Linguistics, 2015:17-21.
[13] Wang L, Cao Z, Xia Y, et al. Morphological segmentation with window LSTM neural networks[C]//Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence. Phoenix, AZ, USA:Association for the Advancement of Artificial Intelligence, 2016:2842-2848.
[14] Wang P, Qian Y, Soong F, et al. Part-of-speech tagging with bidirectional long short-term memory recurrent neural network[Z/OL]. (2015-10-21). http://arxiv.org/abs/1510.06168.
[15] Bengio Y, Simard P, Frasconi P. Learning long-term dependencies with gradient descent is difficult[J]. IEEE Transactions on Neural Networks, 1994, 5(2):157-166.
[16] Hochreiter S, Schmidhuber J. Long short-term memory[J]. Neural Computation, 1997, 9(8):1735-1780.
[17] Bahdanau D, Cho K, Bengio Y. Neural machine translation by jointly learning to align and translate[Z/OL]. (2014-09-01). https://arxiv.org/abs/1409.0473
[18] Schuster M, Paliwal K. Bidirectional recurrent neural networks[J]. IEEE Transactions on Signal Processing, 1997, 45(11):2673-2681.
[19] Graves A, Jaitly N, Mohamed A. Hybrid speech recognition with deep bidirectional LSTM[C]//2013 IEEE Workshop on Automatic Speech Recognition and Understanding. Olomouc, Czech Republic:IEEE, 2014:8-12.


Related articles:
[1] 高莹莹, 朱维彬. Description and prediction of speech emotion for emotional speech synthesis[J]. Journal of Tsinghua University (Science and Technology), 2017, 57(2): 202-207.
[2] 阿不都萨拉木·达吾提, 于斯音·于苏普, 艾斯卡尔·艾木都拉. Sentiment classification of Uyghur sentences combining category-distinguishing words with a sentiment lexicon[J]. Journal of Tsinghua University (Science and Technology), 2017, 57(2): 197-201.
[3] 热合木·马合木提, 于斯音·于苏普, 张家俊, 宗成庆, 艾斯卡尔·艾木都拉. Uyghur person name recognition based on fuzzy matching and phoneme-to-character conversion[J]. Journal of Tsinghua University (Science and Technology), 2017, 57(2): 188-196.
[4] 艾斯卡尔·肉孜, 殷实, 张之勇, 王东, 艾斯卡尔·艾木都拉, 郑方. THUYG-20: A free Uyghur speech database[J]. Journal of Tsinghua University (Science and Technology), 2017, 57(2): 182-187.
[5] 邢安昊, 张鹏远, 潘接林, 颜永红. SVD-based pruning and retraining of DNNs[J]. Journal of Tsinghua University (Science and Technology), 2016, 56(7): 772-776.
[6] 张劲松, 高迎明, 解焱陆. DNN-based detection of pronunciation error tendencies[J]. Journal of Tsinghua University (Science and Technology), 2016, 56(11): 1220-1225.
[7] 田垚, 蔡猛, 何亮, 刘加. Speaker recognition system based on deep neural networks and bottleneck features[J]. Journal of Tsinghua University (Science and Technology), 2016, 56(11): 1143-1148.
[8] 邓青, 马晔风, 刘艺, 张辉. Prediction of microblog repost volume based on BP neural networks[J]. Journal of Tsinghua University (Science and Technology), 2015, 55(12): 1342-1347.
