Tatiana O. Shavrina
HSE University, Moscow, Russia; Artificial Intelligence Research Institute, Moscow, Russia; Kharkevich Institute for Information Transmission Problems, Russian Academy of Sciences, Moscow, Russia; SberDevices LLC, Moscow, Russia; email@example.com
The article discusses current research in the field of applied linguistics dedicated to the evaluation of artificial intelligence (AI) systems. Linguistic tests serve as the principal tool for evaluating the level of intelligence of such systems: they are the most affordable way of training AI systems and, at the same time, offer the high variability necessary for formulating intellectual tasks. This paper provides an overview of the current methodology for training and testing AI systems and describes the gold standards of textual tasks (benchmarks) in the General Language Understanding Evaluation (GLUE) methodology. We also present an overview of how the theoretical apparatus and practices of linguistics were used to create a Russian-language test for examining the abilities of AI systems, Russian SuperGLUE. Further convergence of machine learning and linguistic methods can fill gaps both in the practice of evaluating AI systems and in their effective training.
Shavrina T. O. Methods of computational linguistics in the evaluation of artificial intelligence systems. Voprosy Jazykoznanija, 2021, 6: 117–138.
The work was supported by the Ministry of Science and Higher Education of Russia, grant No. 075-15-2020-793.