[1] ZHANG H, CUI Y L, YU L Y, et al. Research hotspots of deep learning and its educational application trends from the perspective of artificial intelligence: a knowledge-map analysis of 20,708 papers in the WOS database, 2006-2019[J]. Modern Educational Technology, 2020, 30(1): 32-38. (in Chinese)
[2] WANG L Z. A review of research on machine learning applied to language intelligence[J]. Modern Educational Technology, 2018, 28(9): 66-72. (in Chinese)
[3] XIAO D, ZHANG H, LI Y, et al. ERNIE-GEN: an enhanced multi-flow pre-training and fine-tuning framework for natural language generation[J]. arXiv, 2020(1): 1-8.
[4] GAO Y, BING L, LI P, et al. Generating distractors for reading comprehension questions from real examinations[C]// Proceedings of the AAAI Conference on Artificial Intelligence. Hawaii: AAAI Press, 2019: 6423-6430.
[5] DU P D, SUN T, TIAN Z Q. Application of information entropy in the quality analysis of multiple-choice items[J]. Modern Educational Technology, 2006(3): 36-39. (in Chinese)
[6] KURDI G, LEO J, PARSIA B, et al. A systematic review of automatic question generation for educational purposes[J]. International Journal of Artificial Intelligence in Education, 2020, 30(1): 121-204.
[7] RAFFEL C, SHAZEER N, ROBERTS A, et al. Exploring the limits of transfer learning with a unified text-to-text transformer[J]. arXiv, 2019(10): 1-67.
[8] VASWANI A, SHAZEER N, PARMAR N, et al. Attention is all you need[C]// Advances in Neural Information Processing Systems. California: NIPS, 2017: 5998-6008.
[9] RAFFEL C, SHAZEER N, ROBERTS A, et al. Exploring the limits of transfer learning with a unified text-to-text transformer[J]. Journal of Machine Learning Research, 2020, 21(140): 1-67.
[10] SATRIA A Y, TOKUNAGA T. Automatic generation of English reference question by utilising nonrestrictive relative clause[C]// Proceedings of the 7th International Conference on Computer Supported Education. Changsha: Springer, 2017: 379-386.
[11] SATRIA A Y, TOKUNAGA T. Evaluation of automatically generated pronoun reference questions[C]// Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications. Copenhagen: ACL, 2017: 76-85.
[12] JIANG S, LEE J. Distractor generation for Chinese fill-in-the-blank items[C]// Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications. Copenhagen: ACL, 2017: 143-148.
[13] LIANG C, YANG X, WHAM D, et al. Distractor generation with generative adversarial nets for automatically creating fill-in-the-blank questions[C]// Proceedings of the Knowledge Capture Conference. Texas: Association for Computing Machinery, 2017: 1-4.
[14] LIANG C, YANG X, DAVE N, et al. Distractor generation for multiple choice questions using learning to rank[C]// Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications. New Orleans: ACL, 2018: 284-290.
[15] LIU M, RUS V, LIU L. Automatic Chinese factual question generation[J]. IEEE Transactions on Learning Technologies, 2016, 10(2): 194-204.
[16] LIU M, RUS V, LIU L. Automatic Chinese multiple choice question generation using mixed similarity strategy[J]. IEEE Transactions on Learning Technologies, 2017, 11(2): 193-202.
[17] KUMAR G, BANCHS R E, D'HARO L F. Automatic fill-the-blank question generator for student self-assessment[C]// 2015 IEEE Frontiers in Education Conference. Texas: IEEE, 2015: 1-3.
[18] PATRA R, SAHA S K. Progress in computing, analytics and networking[M]. Singapore: Springer, 2018: 511-518.
[19] PATRA R, SAHA S K. A hybrid approach for automatic generation of named entity distractors for multiple choice questions[J]. Education and Information Technologies, 2019, 24(2): 973-993.
[20] YANEVA V. Automatic distractor suggestion for multiple-choice tests using concept embeddings and information retrieval[C]// Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications. New Orleans: ACL, 2018: 389-398.
[21] SHAH R, SHAH D, KURUP L. Automatic question generation for intelligent tutoring systems[C]// 2017 2nd International Conference on Communication Systems, Computing and IT Applications. Mumbai: IEEE, 2017: 127-132.
[22] SUN X, LIU J, LYU Y, et al. Answer-focused and position-aware neural question generation[C]// Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. Brussels: EMNLP, 2018: 3930-3939.
[23] NADEAU D, SEKINE S. A survey of named entity recognition and classification[J]. Lingvisticae Investigationes, 2007, 30(1): 3-26.
[24] LIANG C, YANG X, WHAM D, et al. Distractor generation with generative adversarial nets for automatically creating fill-in-the-blank questions[C]// Proceedings of the Knowledge Capture Conference. Texas: K-CAP, 2017: 1-4.
[25] ZHOU X, LUO S, WU Y. Co-attention hierarchical network: generating coherent long distractors for reading comprehension[C]// Proceedings of the AAAI Conference on Artificial Intelligence. New York: AAAI, 2020: 9725-9732.
[26] ENRICO LOPEZ L, CRUZ D K, BLAISE CRUZ J C, et al. Transformer-based end-to-end question generation[J]. arXiv, 2020(5): 1-9.
[27] OQUAB M, BOTTOU L, LAPTEV I, et al. Learning and transferring mid-level image representations using convolutional neural networks[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. Ohio: CVPR, 2014: 1717-1724.
[28] JIA Y, SHELHAMER E, DONAHUE J, et al. Caffe: convolutional architecture for fast feature embedding[C]// Proceedings of the 22nd ACM International Conference on Multimedia. Florida: Association for Computing Machinery, 2014: 675-678.
[29] HUH M, AGRAWAL P, EFROS A A. What makes ImageNet good for transfer learning[J]. arXiv, 2016(6): 1-10.
[30] YOSINSKI J, CLUNE J, BENGIO Y, et al. How transferable are features in deep neural networks[J]. Advances in Neural Information Processing Systems, 2014(27): 3320-3328.
[31] RUSSAKOVSKY O, DENG J, SU H, et al. ImageNet large scale visual recognition challenge[J]. International Journal of Computer Vision, 2015, 115(3): 211-252.
[32] DENG J, DONG W, SOCHER R, et al. ImageNet: a large-scale hierarchical image database[C]// 2009 IEEE Conference on Computer Vision and Pattern Recognition. Florida: IEEE, 2009: 248-255.
[33] DEVLIN J, CHANG M W, LEE K, et al. BERT: pre-training of deep bidirectional transformers for language understanding[C]// Proceedings of NAACL-HLT. Minnesota: NAACL-HLT, 2019: 4171-4186.
[34] YANG Z, DAI Z, YANG Y, et al. XLNet: generalized autoregressive pretraining for language understanding[J]. Advances in Neural Information Processing Systems, 2019(32): 1-11.
[35] DONG L, YANG N, WANG W, et al. Unified language model pre-training for natural language understanding and generation[C]// Proceedings of the 33rd International Conference on Neural Information Processing Systems. Vancouver: NeurIPS, 2019: 13063-13075.
[36] LIU Y, OTT M, GOYAL N, et al. RoBERTa: a robustly optimized BERT pretraining approach[J]. arXiv, 2019(7): 1-13.
[37] RAJPURKAR P, ZHANG J, LOPYREV K, et al. SQuAD: 100,000+ questions for machine comprehension of text[C]// Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Texas: EMNLP, 2016: 2383-2392.
[38] LAI G, XIE Q, LIU H, et al. RACE: large-scale reading comprehension dataset from examinations[C]// Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Copenhagen: EMNLP, 2017: 785-794.
[39] SHARMA S, ASRI L E, SCHULZ H, et al. Relevance of unsupervised metrics in task-oriented dialogue for evaluating natural language generation[J]. arXiv, 2017(6): 1-10.
[40] ENRICO LOPEZ L, CRUZ D K, BLAISE CRUZ J C, et al. Transformer-based end-to-end question generation[J]. arXiv, 2020(5): 1-9.