[1] Adam N R, Wortmann J C. Security-control methods for statistical databases: A comparative study[J]. ACM Computing Surveys (CSUR), 1989, 21(4): 515-556
[2] Peters M E, Neumann M, Iyyer M, et al. Deep contextualized word representations[C] //Proc of the 2018 Conf of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. New Orleans, Louisiana: NAACL, 2018: 2227-2237
[3] Elman J L. Finding structure in time[J]. Cognitive Science, 1990, 14(2): 179-211
[4] Dernoncourt F, Lee J Y, Uzuner O, et al. De-identification of patient notes with recurrent neural networks[J]. Journal of the American Medical Informatics Association, 2017, 24(3): 596-606
[5] Hochreiter S, Schmidhuber J. Long short-term memory[J]. Neural Computation, 1997, 9(8): 1735-1780
[6] Lafferty J, McCallum A, Pereira F C N. Conditional random fields: Probabilistic models for segmenting and labeling sequence data[C] //Proc of the 18th Int Conf on Machine Learning (ICML 2001). New York: ACM, 2001: 282-289
[7] Liu Z, Yang M, Wang X, et al. Entity recognition from clinical texts via recurrent neural network[J]. BMC Medical Informatics and Decision Making, 2017, 17: 53-61
[8] Huang Z, Xu W, Yu K. Bidirectional LSTM-CRF models for sequence tagging[J]. arXiv preprint, arXiv:1508.01991, 2015
[9] Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need[C/OL] //Proc of NIPS. 2017 [2024-01-22]. https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html
[10] Devlin J, Chang M W, Lee K, et al. BERT: Pre-training of deep bidirectional transformers for language understanding[C] //Proc of the 2019 Conf of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Minneapolis, Minnesota: NAACL, 2019: 4171-4186
[11] Khin K, Burckhardt P, Padman R. A deep learning architecture for de-identification of patient notes: Implementation and evaluation[J]. arXiv preprint, arXiv:1810.01570, 2018
[12] Strubell E, Verga P, Andor D, et al. Linguistically-informed self-attention for semantic role labeling[C] //Proc of the 2018 Conf on Empirical Methods in Natural Language Processing. Brussels, Belgium: ACL, 2018: 5027-5038
[13] Zhang Z, Wu Y, Zhou J, et al. SG-Net: Syntax-guided machine reading comprehension[C] //Proc of the AAAI Conf on Artificial Intelligence. Menlo Park, CA: AAAI, 2020: 9636-9643
[14] Bugliarello E, Okazaki N. Enhancing machine translation with dependency-aware self-attention[C] //Proc of the 58th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2020: 1618-1627
[15] Veličković P, Cucurull G, Casanova A, et al. Graph attention networks[C/OL] //Proc of Int Conf on Learning Representations. 2018 [2024-01-22]. https://openreview.net/forum?id=rJXMpikCZ
[16] Levow G A. The third international Chinese language processing bakeoff: Word segmentation and named entity recognition[C] //Proc of Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2006
[17] Stubbs A, Uzuner Ö. Annotating longitudinal clinical narratives for de-identification: The 2014 i2b2/UTHealth corpus[J]. Journal of Biomedical Informatics, 2015, 58: S20-S29
[18] Zhang Y, Yang J. Chinese NER using lattice LSTM[C] //Proc of the 56th Annual Meeting of the Association for Computational Linguistics. Stroudsburg, PA: ACL, 2018: 1554-1564
[19] Dozat T, Manning C D. Deep biaffine attention for neural dependency parsing[C/OL] //Proc of Int Conf on Learning Representations. 2017 [2024-01-22]. https://openreview.net/forum?id=Hk95PK9le
[20] Gardner M, Grus J, Neumann M, et al. AllenNLP: A deep semantic natural language processing platform[J]. arXiv preprint, arXiv:1803.07640, 2018
[21] Kingma D P, Ba J. Adam: A method for stochastic optimization[J]. arXiv preprint, arXiv:1412.6980, 2014
[22] Zheng Xuru. Research on data masking based on deep learning[D]. Harbin: Harbin Institute of Technology, 2020 (in Chinese)
[23] Kitazono J, Grozavu N, Rogovschi N, et al. t-Distributed stochastic neighbor embedding with inhomogeneous degrees of freedom[C] //Proc of the 23rd Int Conf on Neural Information Processing. Berlin: Springer, 2016: 119-128