Cross-Validation (k-fold Cross-Validation, Leave-p-out Cross-Validation) (Перекрёстная проверка) – A family of procedures for estimating how well the results of a predictive model will generalize to new data sets by repeatedly partitioning the available data into training and validation subsets; common variants are k-fold cross-validation and leave-p-out cross-validation.
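A minimal sketch of k-fold cross-validation in Python (NumPy only); the fit and score callables are hypothetical placeholders for any model and metric:

import numpy as np

def k_fold_indices(n_samples, k, seed=0):
    """Split sample indices into k roughly equal, shuffled folds."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(n_samples)
    return np.array_split(indices, k)

def k_fold_cv(X, y, k, fit, score):
    """Generic k-fold cross-validation.

    fit(X_train, y_train) returns a trained model;
    score(model, X_val, y_val) returns a quality metric.
    """
    folds = k_fold_indices(len(X), k)
    scores = []
    for i in range(k):
        val_idx = folds[i]
        train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
        model = fit(X[train_idx], y[train_idx])
        scores.append(score(model, X[val_idx], y[val_idx]))
    return float(np.mean(scores))  # average validation score over the k folds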
Cryogenic freezing (cryonics, human cryopreservation) is a technology for preserving a person's head or body after death in a state of deep cooling (using liquid nitrogen), with the intention of reviving them in the future.
Cyber-physical systems (Киберфизические системы) are intelligent networked systems with built-in sensors, processors and actuators that are designed to interact with the physical environment and support the operation of computer information systems in real time. Cloud computing is an information technology model for providing ubiquitous and convenient access, via the information and telecommunications network “Internet”, to a shared pool of configurable computing resources (the “cloud”), data storage devices, applications and services that can be rapidly provisioned and released with minimal operating costs or almost without the participation of the provider.
“D”
Darkforest (Программа Darkforest) – A computer Go program developed by Facebook, based on deep learning techniques using a convolutional neural network. Its updated version, Darkforest2, combines the techniques of its predecessor with Monte Carlo tree search (MCTS). MCTS effectively takes the tree search methods commonly seen in computer chess programs and randomizes them. With this update, the system is known as Darkforest3.
Dartmouth workshop (Дартмутский семинар) – The Dartmouth Summer Research Project on Artificial Intelligence was the name of a 1956 summer workshop now considered by many (though not all) to be the seminal event for artificial intelligence as a field.
Data (Данные) – Data is a collection of qualitative and quantitative variables; it contains information that is represented numerically and needs to be analyzed.
Data analysis (Анализ данных) – Obtaining an understanding of data by considering samples, measurement, and visualization. Data analysis can be particularly useful when a dataset is first received, before one builds the first model. It is also crucial in understanding experiments and debugging problems with the system [[128 - Data analysis [Электронный ресурс] // dic.academic.ru URL: https://dic.academic.ru/dic.nsf/ruwiki/1727524 (дата обращения: 16.02.2022)]].
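As an illustration, a first look at a newly received dataset often amounts to a few summary calls; the sketch below uses pandas and a made-up table:

import pandas as pd

# Hypothetical dataset; in practice this would be the newly received file.
df = pd.DataFrame({
    "age":    [34, 29, None, 41, 38],
    "income": [52000, 48000, 61000, None, 75000],
    "city":   ["Moscow", "Kazan", "Moscow", "Tver", "Kazan"],
})

print(df.describe())               # basic statistics for numeric columns
print(df.isna().sum())             # missing values per column
print(df["city"].value_counts())   # distribution of a categorical column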
Data analytics (Аналитика данных) – The science of analyzing raw data in order to draw conclusions from that information. Many of the techniques and processes of data analytics have been automated into mechanical processes and algorithms that work over raw data for human consumption. [[129 - Data analytics [Электронный ресурс] www.investopedia.com URL: https://www.investopedia.com/terms/d/data-analytics.asp (дата обращения: 07.07.2022)]]
Data augmentation (Увеличение данных в анализе данных) – Data augmentation in data analysis refers to techniques used to increase the amount of data. It helps reduce overfitting when training a machine learning model [[130 - Data augmentation [Электронный ресурс] // ibm.com URL: https://www.ibm.com/docs/ru/oala/1.3.5?topic=SSPFMY_1.3.5/com.ibm.scala.doc/config/iwa_cnf_scldc_scl_dc_ovw.html (дата обращения: 18.02.2022)]].
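A minimal sketch of image data augmentation with NumPy, assuming images are H x W x C arrays with values in [0, 1]; the flip probability and noise level are illustrative choices:

import numpy as np

def augment(image, rng):
    """Return a randomly augmented copy of an image array."""
    out = image.copy()
    if rng.random() < 0.5:
        out = out[:, ::-1, :]                       # random horizontal flip
    out = out + rng.normal(0.0, 0.02, out.shape)    # small Gaussian noise
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(42)
image = rng.random((32, 32, 3))                      # stand-in for a real training image
augmented = [augment(image, rng) for _ in range(4)]  # four extra training variants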
Data Cleaning (Очистка данных) – Data Cleaning is the process of identifying, correcting, or removing inaccurate or corrupt data records.
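A small illustrative example of data cleaning with pandas on a made-up table: duplicate records are dropped, malformed numeric strings become NaN, and rows still missing a value are removed:

import pandas as pd

raw = pd.DataFrame({
    "id":    [1, 2, 2, 3],
    "price": ["10.5", "n/a", "n/a", "12.0"],
})

clean = raw.drop_duplicates(subset="id").copy()                   # remove duplicate records
clean["price"] = pd.to_numeric(clean["price"], errors="coerce")   # bad strings -> NaN
clean = clean.dropna(subset=["price"])                            # drop rows still missing a price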
Data Curation (Курирование данных) – Data Curation includes the processes related to the organization and management of data which is collected from various sources [[131 - Data Curation [Электронный ресурс] www.geeksforgeeks.org URL: https://www.geeksforgeeks.org/data-curation-lifecycle/ (дата обращения 22.02.2022)]].
Data entry (Ввод данных) – The process of converting verbal or written responses to electronic form. [[132 - Data entry [Электронный ресурс] www.umich.edu URL: https://www.icpsr.umich.edu/web/ICPSR/cms/2042#D (дата обращения: 07.07.2022)]]
Data fusion (Слияние данных) – The process of integrating multiple data sources to produce more consistent, accurate, and useful information than that provided by any individual data source.
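As an illustration, one simple fusion rule for independent measurements of the same quantity is inverse-variance weighting; the sensor readings below are made up:

import numpy as np

def fuse(estimates, variances):
    """Combine independent measurements of one quantity by inverse-variance
    weighting; the fused variance is never larger than the best individual one."""
    w = 1.0 / np.asarray(variances, dtype=float)
    fused_value = np.sum(w * np.asarray(estimates, dtype=float)) / np.sum(w)
    fused_variance = 1.0 / np.sum(w)
    return fused_value, fused_variance

# e.g. two temperature sensors reporting 21.4 C and 22.0 C with different noise levels
value, var = fuse([21.4, 22.0], [0.25, 0.50])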
Data Integration (Интеграция данных) – The combination of data residing in different sources and the delivery of a unified view to users. Data integration is in high demand in both commercial and scientific domains, which need to merge data and research results from different repositories.
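A toy example of integration with pandas: records from two hypothetical repositories are merged into a single unified view keyed by customer:

import pandas as pd

# Hypothetical records from two repositories describing the same customers.
crm   = pd.DataFrame({"customer_id": [1, 2, 3], "name": ["Ivanov", "Petrov", "Sidorov"]})
sales = pd.DataFrame({"customer_id": [1, 1, 3], "amount": [100.0, 40.0, 250.0]})

# Unified view: one row per customer with total purchases.
totals  = sales.groupby("customer_id", as_index=False)["amount"].sum()
unified = crm.merge(totals, on="customer_id", how="left").fillna({"amount": 0.0})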
Data Lake (Озеро данных) – A type of data repository that stores data in its natural (raw) format and relies on various schemas and structures to index the data.
Data markup (Разметка данных) is the stage of processing structured and unstructured data during which data (including text documents, photo and video images) are assigned identifiers that reflect the type of data (data classification), and (or) the data are interpreted to solve a specific problem, including using machine learning methods (National Strategy for the Development of Artificial Intelligence for the period up to 2030).
Data Mining (Интеллектуальный анализ данных) – The process of data analysis and information extraction from large datasets using machine learning, statistical approaches, and many other methods. [[133 - Data Mining [Электронный ресурс] // bigdataschool.ru URL: https://www.teradata.ru/Glossary/What-is-Data-Mining (дата обращения: 17.02.2022)]]
Data parallelism (Параллелизм данных) – A way of scaling training or inference that replicates an entire model onto multiple devices and then passes a subset of the input data to each device. Data parallelism can enable training and inference on very large batch sizes; however, data parallelism requires that the model be small enough to fit on all devices. See also model parallelism.
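A conceptual sketch of one data-parallel training step in NumPy, using a linear model's gradient as a stand-in for the per-device forward/backward pass; real systems replace the Python loop with model replicas on separate devices and an all-reduce of the gradients:

import numpy as np

def device_gradient(weights, X_shard, y_shard):
    """Gradient of mean squared error for a linear model on one shard
    (stands in for a forward/backward pass on a single device)."""
    preds = X_shard @ weights
    return 2.0 * X_shard.T @ (preds - y_shard) / len(y_shard)

def data_parallel_step(weights, X, y, n_devices, lr=0.01):
    """Replicate the model, give each 'device' a slice of the batch,
    then average the per-device gradients and apply a single update."""
    X_shards = np.array_split(X, n_devices)
    y_shards = np.array_split(y, n_devices)
    grads = [device_gradient(weights, xs, ys) for xs, ys in zip(X_shards, y_shards)]
    return weights - lr * np.mean(grads, axis=0)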
Data protection (Защита данных) is the process of safeguarding data; it involves the relationship between the collection and dissemination of data and technology, the public perception and expectation of privacy, and the political and legal underpinnings surrounding that data. It aims to strike a balance between individual privacy rights and the use of data for business purposes. [[134 - Data protection [Электронный ресурс] www.techopedia.com URL: https://www.techopedia.com/definition/29406/data-protection (дата обращения: 07.07.2022)]]
Data Refinement (Уточнение данных) – Data refinement is used to convert an abstract data model (expressed, for example, in terms of sets) into implementable data structures such as arrays [[135 - Data Refinement [Электронный ресурс] www.atscale.com URL: https://www.atscale.com/blog/what-is-data-extraction/ (дата обращения 12.01.2022)]].
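An illustrative refinement in Python: the abstract "set of integers" specification is implemented by a concrete sorted array maintained with the standard bisect module:

from bisect import bisect_left, insort

class IntSet:
    """Refinement of an abstract 'set of integers' specification into a
    concrete sorted-array representation (a Python list kept in order)."""

    def __init__(self):
        self._items = []          # concrete state: a sorted array

    def add(self, x):             # abstract operation: S := S union {x}
        if not self.contains(x):
            insort(self._items, x)

    def contains(self, x):        # abstract operation: x in S
        i = bisect_left(self._items, x)
        return i < len(self._items) and self._items[i] == x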
Data Science (Наука о данных) – A broad grouping of mathematics, statistics, probability, computing and data visualization used to extract knowledge from heterogeneous sets of data (images, sound, text, genomic data, social network links, physical measurements, etc.). The methods and tools derived from artificial intelligence are part of this family.
Data set (Набор данных) – a set of data that has undergone preliminary preparation (processing) in accordance with the requirements of the legislation of the Russian Federation on information, information technology and information protection and is necessary for the development of software based on artificial intelligence (National strategy for the development of artificial intelligence for the period up to 2030).
Data Streaming Accelerator (DSA) (Ускоритель потоковой передачи данных) – An accelerator, i.e., a device dedicated to a specific task, in this case moving data in less time than the CPU would take. What makes DSA special is that it is designed around one of the capabilities that Compute Express Link adds on top of PCI Express 5.0: coherent access to RAM for all peripherals connected to a PCI Express port, meaning that they share the same memory addresses.