Bag-of-words model is a simplifying representation used in natural language processing (NLP) and information retrieval (IR). In this model, a text (such as a sentence or a document) is represented as the bag (multiset) of its words, disregarding grammar and even word order but keeping multiplicity. The bag-of-words model has also been used in computer vision130. The model is commonly used in document classification, where the (frequency of) occurrence of each word is used as a feature for training a classifier131.
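A minimal sketch of the representation in Python; the documents, tokenization rule and vocabulary below are made up for illustration:

```python
from collections import Counter

# Two toy documents; tokenization here is a simple lowercase split.
docs = ["the cat sat on the mat", "the dog sat"]
tokenized = [d.lower().split() for d in docs]

# Vocabulary: every distinct word across the corpus, in a fixed order.
vocab = sorted({w for doc in tokenized for w in doc})

# Each document becomes a vector of word counts: multiplicity is kept,
# word order is discarded.
vectors = [[Counter(doc)[w] for w in vocab] for doc in tokenized]

print(vocab)    # ['cat', 'dog', 'mat', 'on', 'sat', 'the']
print(vectors)  # [[1, 0, 1, 1, 1, 2], [0, 1, 0, 0, 1, 1]]
```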
Baldwin effect – the effect whereby skills acquired by organisms through learning during their lifetime are, after a certain number of generations, recorded in the genome132.
Baseline is a model used as a reference point for comparing how well another model (typically a more complex one) is performing. For example, a logistic regression model might serve as a good baseline for a deep model. For a particular problem, the baseline helps model developers quantify the minimal performance that a new model must achieve to be useful133.
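A hedged sketch of the idea in Python, using a majority-class predictor as the baseline; the labels, predictions and accuracy numbers are invented for the example:

```python
from collections import Counter

# Hypothetical test labels and predictions from a new, more complex model.
y_true = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
y_model = [1, 0, 1, 0, 0, 1, 1, 1, 1, 1]

# Baseline: always predict the most frequent class in the data.
majority_class = Counter(y_true).most_common(1)[0][0]
y_baseline = [majority_class] * len(y_true)

def accuracy(truth, pred):
    return sum(t == p for t, p in zip(truth, pred)) / len(truth)

print("baseline accuracy:", accuracy(y_true, y_baseline))  # 0.7
print("model accuracy:   ", accuracy(y_true, y_model))     # 0.8
# The new model is only worth its extra complexity if it clearly beats the baseline.
```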
Batch – the set of examples used in one gradient update of model training134.
Batch Normalization is a normalization step in which the data in a batch are centered around zero and, often, scaled so that the standard deviation is set to unity; in neural networks it is applied to a layer’s inputs for each training mini-batch135.
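A minimal NumPy sketch of the normalization step applied to one mini-batch; the scale/shift parameters gamma and beta and the epsilon value are assumed typical defaults, and the input batch is made up:

```python
import numpy as np

def batch_norm(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Normalize a mini-batch x of shape (batch, features) feature-wise."""
    mean = x.mean(axis=0)                    # center each feature around zero
    var = x.var(axis=0)                      # per-feature variance
    x_hat = (x - mean) / np.sqrt(var + eps)  # roughly unit standard deviation
    return gamma * x_hat + beta              # learnable scale and shift

batch = np.array([[1.0, 200.0], [2.0, 220.0], [3.0, 240.0]])
print(batch_norm(batch).round(3))
```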
Batch size – the number of examples in a batch. For example, the batch size of SGD is 1, while the batch size of a mini-batch is usually between 10 and 1000. Batch size is usually fixed during training and inference; however, TensorFlow does permit dynamic batch sizes136,137.
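A small Python sketch of how a fixed batch size partitions a training set into mini-batches; the toy data and the batch size of 4 are arbitrary:

```python
def iterate_minibatches(examples, batch_size):
    """Yield consecutive mini-batches of a fixed size (the last one may be smaller)."""
    for start in range(0, len(examples), batch_size):
        yield examples[start:start + batch_size]

examples = list(range(10))           # ten toy training examples
for batch in iterate_minibatches(examples, batch_size=4):
    print(batch)                     # [0..3], [4..7], [8, 9]
```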
Bayes’ Theorem is a famous theorem used by statisticians to describe the probability of an event based on prior knowledge of conditions that might be related to that event138.
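The theorem states that P(A|B) = P(B|A)·P(A) / P(B). A tiny worked example in Python; all of the probabilities below are invented for illustration:

```python
# Prior probability that an email is spam, and the likelihoods of a word
# appearing in spam and non-spam email (all values are illustrative).
p_spam = 0.2
p_word_given_spam = 0.6
p_word_given_ham = 0.05

# Total probability of observing the word.
p_word = p_word_given_spam * p_spam + p_word_given_ham * (1 - p_spam)

# Bayes' theorem: P(spam | word) = P(word | spam) * P(spam) / P(word)
p_spam_given_word = p_word_given_spam * p_spam / p_word
print(round(p_spam_given_word, 3))  # 0.75
```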
Bayesian classifier in machine learning is a family of simple probabilistic classifiers based on applying Bayes’ theorem with the “naive” assumption that the features of the objects being classified are independent139.
Bayesian Filter is a program that uses Bayesian logic. It evaluates the header and content of an email message to determine whether the message constitutes spam – unsolicited email, the electronic equivalent of hard-copy bulk mail or junk mail. A Bayesian filter works with the probabilities of specific words appearing in the header or content of an email. Certain words, such as “Viagra” and “refinance”, indicate a high probability that the email is spam140.
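A minimal sketch of the idea behind such a filter, combining per-word probabilities under the naive independence assumption of the Bayesian classifier above; the word lists and probabilities are made up:

```python
import math

# Assumed (made-up) statistics gathered from previously labelled mail.
p_spam = 0.3
word_given_spam = {"viagra": 0.40, "refinance": 0.30, "meeting": 0.05}
word_given_ham  = {"viagra": 0.01, "refinance": 0.02, "meeting": 0.30}

def spam_score(words):
    """Log-posterior-style score: positive means the message looks like spam."""
    log_spam = math.log(p_spam)
    log_ham = math.log(1 - p_spam)
    for w in words:
        if w in word_given_spam:          # unknown words are ignored in this sketch
            log_spam += math.log(word_given_spam[w])
            log_ham += math.log(word_given_ham[w])
    return log_spam - log_ham             # > 0 suggests spam

print(spam_score(["viagra", "refinance"]) > 0)  # True
print(spam_score(["meeting"]) > 0)              # False
```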
Bayesian Network, also called Bayes Network, belief network, or probabilistic directed acyclic graphical model, is a probabilistic graphical model (a statistical model) that represents a set of variables and their conditional dependencies via a directed acyclic graph141.
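A tiny Python illustration using the classic rain/sprinkler/wet-grass network with made-up conditional probability tables, computing a posterior by enumerating the joint distribution defined by the graph:

```python
from itertools import product

# Made-up CPTs for a three-node network:
# Rain -> Sprinkler, and both Rain and Sprinkler -> WetGrass.
p_rain = {True: 0.2, False: 0.8}
p_sprinkler = {True: {True: 0.01, False: 0.99},   # P(S | R=True)
               False: {True: 0.4, False: 0.6}}    # P(S | R=False)
p_wet = {  # P(W=True | S, R)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.0,
}

def joint(r, s, w):
    pw = p_wet[(s, r)]
    return p_rain[r] * p_sprinkler[r][s] * (pw if w else 1 - pw)

# P(Rain = True | WetGrass = True), summing out the hidden variable S.
num = sum(joint(True, s, True) for s in (True, False))
den = sum(joint(r, s, True) for r, s in product((True, False), repeat=2))
print(round(num / den, 4))  # ≈ 0.3577
```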
Bayesian optimization is a technique for optimizing computationally expensive objective functions by instead optimizing a probabilistic regression surrogate that quantifies uncertainty about the objective via Bayesian learning. Since Bayesian optimization is itself very expensive, it is usually used to optimize expensive-to-evaluate tasks that have a small number of parameters, such as selecting hyperparameters142.
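A hedged sketch using the third-party scikit-optimize package, whose gp_minimize function builds a Gaussian-process surrogate of the objective; the objective function and search range here are invented stand-ins:

```python
# Requires the third-party scikit-optimize package (`pip install scikit-optimize`).
from skopt import gp_minimize

# A stand-in for an expensive objective, e.g. validation loss as a function
# of a single hyperparameter.
def objective(params):
    x = params[0]
    return (x - 2.0) ** 2 + 0.5

result = gp_minimize(
    objective,             # expensive function to minimize
    [(-5.0, 5.0)],         # search space for the one parameter
    n_calls=20,            # total (costly) evaluations allowed
    random_state=0,
)
print(result.x, round(result.fun, 4))  # best parameter found and its objective value
```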
Bayesian programming is a formalism and a methodology for specifying probabilistic models and solving problems when less than the necessary information is available143,144.
Bees algorithm is a population-based search algorithm developed by Pham, Ghanbarzadeh et al. in 2005. It mimics the food-foraging behaviour of honey bee colonies. In its basic version the algorithm performs a kind of neighbourhood search combined with global search, and can be used for both combinatorial optimization and continuous optimization. The only condition for applying the bees algorithm is that some measure of distance between solutions is defined. The effectiveness and specific abilities of the bees algorithm have been demonstrated in a number of studies145.
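A simplified continuous-optimization sketch in Python of the basic scheme described above (scout bees plus neighbourhood search around the best sites); the objective and all parameter values are arbitrary choices for the example:

```python
import random

def sphere(x):
    """Toy objective to minimize."""
    return sum(v * v for v in x)

def bees_algorithm(f, dim=2, bounds=(-5.0, 5.0), n_scouts=20, n_best=5,
                   n_recruits=10, patch=0.5, iterations=50):
    lo, hi = bounds
    rand_point = lambda: [random.uniform(lo, hi) for _ in range(dim)]
    # Scout bees sample the search space at random.
    sites = [rand_point() for _ in range(n_scouts)]
    for _ in range(iterations):
        sites.sort(key=f)
        new_sites = []
        # Recruit bees search the neighbourhood ("patch") of the best sites.
        for site in sites[:n_best]:
            recruits = [[min(hi, max(lo, v + random.uniform(-patch, patch)))
                         for v in site] for _ in range(n_recruits)]
            new_sites.append(min(recruits + [site], key=f))
        # The remaining bees keep scouting randomly (global search).
        new_sites += [rand_point() for _ in range(n_scouts - n_best)]
        sites = new_sites
    return min(sites, key=f)

random.seed(0)
best = bees_algorithm(sphere)
print([round(v, 3) for v in best], round(sphere(best), 5))
```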
Behavior informatics (BI) – the informatics of behaviors, aimed at obtaining behavior intelligence and behavior insights146.
Behavior tree (BT) is a mathematical model of plan execution used in computer science, robotics, control systems and video games. BTs describe switching between a finite set of tasks in a modular fashion. Their strength comes from their ability to compose very complex tasks out of simple tasks, without worrying about how the simple tasks are implemented. BTs present some similarities to hierarchical state machines, with the key difference that the main building block of a behavior is a task rather than a state. Their ease of human understanding makes BTs less error-prone and very popular in the game-developer community. BTs have been shown to generalize several other control architectures147.
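A minimal Python sketch of the two classic composite nodes (Sequence and Selector) switching between simple tasks; the tasks and the game scenario are invented for illustration:

```python
SUCCESS, FAILURE = "success", "failure"

class Action:
    """Leaf node: a task wrapped in a function returning SUCCESS or FAILURE."""
    def __init__(self, func):
        self.func = func
    def tick(self):
        return self.func()

class Sequence:
    """Succeeds only if all children succeed, ticked left to right."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == FAILURE:
                return FAILURE
        return SUCCESS

class Selector:
    """Succeeds as soon as one child succeeds (a fallback between tasks)."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            if child.tick() == SUCCESS:
                return SUCCESS
        return FAILURE

# Toy tree for a game character: open the door, or break it down if that fails.
door_locked = True
open_door = Action(lambda: FAILURE if door_locked else SUCCESS)
break_door = Action(lambda: SUCCESS)
enter_room = Action(lambda: SUCCESS)

tree = Sequence(Selector(open_door, break_door), enter_room)
print(tree.tick())  # success
```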
Belief-desire-intention software model (BDI) is a software model developed for programming intelligent agents. Superficially characterized by the implementation of an agent’s beliefs, desires and intentions, it actually uses these concepts to solve a particular problem in agent programming. In essence, it provides a mechanism for separating the activity of selecting a plan (from a plan library or an external planner application) from the execution of currently active plans. Consequently, BDI agents are able to balance the time spent on deliberating about plans (choosing what to do) and executing those plans (doing it). A third activity, creating the plans in the first place (planning), is not within the scope of the model, and is left to the system designer and programmer148.
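A toy Python sketch of the separation the model describes: the agent deliberates over its desires, commits to one as an intention, and executes the corresponding plan retrieved from a plan library; all beliefs, desires and plans here are invented:

```python
# A deliberately tiny BDI-style loop for a hypothetical robot agent.
beliefs = {"battery_low": True, "at_charger": False}
desires = ["explore", "recharge"]

plan_library = {
    "recharge": ["go_to_charger", "dock", "charge"],
    "explore": ["pick_direction", "move"],
}

def deliberate(beliefs, desires):
    # Trivial deliberation rule: recharging takes priority when the battery is low.
    if beliefs["battery_low"] and "recharge" in desires:
        return "recharge"
    return desires[0]

intention = deliberate(beliefs, desires)   # choosing what to do
for step in plan_library[intention]:       # doing it: execute the active plan
    print("executing:", step)
```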
Bellman equation – named after Richard E. Bellman, is a necessary condition for optimality associated with the mathematical optimization method known as dynamic programming. It writes the “value” of a decision problem at a certain point in time in terms of the payoff from some initial choices and the “value” of the remaining decision problem that results from those initial choices. This breaks a dynamic optimization problem into a sequence of simpler subproblems, as Bellman’s “principle of optimality” prescribes149.
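For a Markov decision process the optimality form of the equation reads V(s) = max_a [R(s, a) + γ Σ_{s'} P(s'|s, a) V(s')]. A short value-iteration sketch in Python that repeatedly applies this update on a made-up two-state MDP:

```python
# Made-up two-state MDP: states 0 and 1, actions "stay" and "move".
gamma = 0.9
states = [0, 1]
actions = ["stay", "move"]
reward = {(0, "stay"): 0.0, (0, "move"): 1.0, (1, "stay"): 2.0, (1, "move"): 0.0}
# Deterministic transitions: P(s' | s, a) is 1 for the listed successor state.
next_state = {(0, "stay"): 0, (0, "move"): 1, (1, "stay"): 1, (1, "move"): 0}

V = {s: 0.0 for s in states}
for _ in range(100):  # repeatedly apply the Bellman optimality update
    V = {s: max(reward[(s, a)] + gamma * V[next_state[(s, a)]] for a in actions)
         for s in states}

print({s: round(v, 2) for s, v in V.items()})  # converges to roughly {0: 19.0, 1: 20.0}
```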
Benchmark (also benchmark program, benchmarking program, benchmark test) – a test program or package for evaluating (measuring and/or comparing) various aspects of the performance of a processor, individual devices, a computer, a system, or a specific application or piece of software; a benchmark allows products from different manufacturers to be compared against each other or against some standard. Examples include online benchmarks, standard benchmarks, and comparisons of benchmark execution times150.
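A minimal sketch of a software micro-benchmark in Python using the standard timeit module; the two implementations being compared are arbitrary placeholders:

```python
import timeit

# Two arbitrary implementations of the same task, to be compared.
def with_loop():
    total = 0
    for i in range(1000):
        total += i * i
    return total

def with_builtin():
    return sum(i * i for i in range(1000))

# Each benchmark reports the time taken for a fixed number of repetitions.
for name, func in [("loop", with_loop), ("builtin", with_builtin)]:
    seconds = timeit.timeit(func, number=1000)
    print(f"{name}: {seconds:.4f} s for 1000 runs")
```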
Benchmarking is a set of techniques that allow
130 Bag-of-words model in computer vision [Electronic resource]. URL: https://en.wikipedia.org/wiki/Bag-of-words_model_in_computer_vision (accessed: 10.05.2023)
131 Bag-of-words model [Electronic resource]. URL: https://www.machinelearningmastery.ru/gentle-introduction-bag-words-model/ (accessed: 11.03.2022)
132 Baldwin effect [Electronic resource]. URL: https://apr.moscow/content/data/6/11 Технологии искусственного интеллекта.pdf (accessed: 11.07.2023)
133 Baseline [Electronic resource]. URL: https://developers.google.com/machine-learning/glossary#baseline (accessed: 28.03.2023)
134 Batch [Electronic resource]. URL: https://www.primeclasses.in/glossary/data-science-course/machine-learning/batch (accessed: 20.06.2023)
135 Batch Normalization [Electronic resource]. URL: https://books.google.ru/books?id=Batch Normalization (accessed: 20.06.2023)
136 Batch size [Electronic resource]. URL: https://www.gabormelli.com/RKB/Batch_Size (accessed: 29.06.2023)
137 Batch size [Electronic resource]. URL: https://developers.google.com/machine-learning/glossary#batch-size (accessed: 29.06.2023)
138 Bayes’ theorem [Electronic resource]. URL: https://habr.com/ru/articles/598979/ (accessed: 03.07.2023)
139 Bayesian classifier in machine learning [Electronic resource]. URL: https://wiki.loginom.ru/articles/bayesian_classifier.html (accessed: 07.07.2022)
140 Bayesian Filter [Electronic resource]. URL: http://certsrv.ru/eset_ss.ru/pages/bayes_filter.htm (accessed: 12.02.2022)
141 Bayesian Network [Electronic resource]. URL: https://dic.academic.ru/dic.nsf/ruwiki/1738444 (accessed: 31.01.2022)
142 Bayesian optimization [Electronic resource]. URL: https://developers.google.com/machine-learning/glossary#bayesian-optimization (accessed: 28.03.2023)
143 Bayesian programming [Electronic resource]. URL: https://en.wikipedia.org/wiki/Bayesian_programming (accessed: 28.03.2023)
144 Bayesian programming [Electronic resource]. URL: https://ru.wikipedia.org/wiki/Байесовское_программирование (accessed: 28.03.2023)
145 Bees algorithm [Electronic resource]. URL: https://en.wikipedia.org/wiki/Bees_algorithm#cite_note-Pham_&_al,_2005-1 (accessed: 27.03.2023)
146 Behavior informatics (BI) [Electronic resource]. URL: https://en.wikipedia.org/wiki/Behavior_informatics (accessed: 01.05.2023)
147 Behavior tree (BT) [Electronic resource]. URL: https://habr.com/ru/company/cloud_mts/blog/306214/ (accessed: 31.01.2022)
148 Belief-desire-intention software model (BDI) [Electronic resource]. URL: https://fccland.ru/stati/22848-model-ubezhdeniy-zhelaniy-i-namereniy.html (accessed: 31.01.2022)
149 Bellman equation [Electronic resource]. URL: https://mruanova.medium.com/bellman-equation-90f2f0deaa88 (accessed: 28.02.2022)
150 Benchmark [Electronic resource]. URL: https://medium.com/@tauheedul/it-hardware-benchmarks-for-machine-learning-and-artificial-intelligence-6183ceed39b8 (accessed: 11.03.2022)