PANN: A New Artificial Intelligence Technology. Tutorial

Vladimir Matsenko

Boris Zlotin

Progress, Inc., a Michigan corporation, has developed, tested, and patented a fundamentally new type of neural network named PANN (Progress Artificial Neural Network) and Artificial Intelligence based on it. This article describes the Matrix PANN software and its functionality. The Company may provide a software distribution package for self-testing and use, as well as training materials for this software application. Demonstrations of the software are also available on request.

PANN: A New Artificial Intelligence Technology

Tutorial

Boris Zlotin

Vladimir Matsenko

Editor Anatol Guin

Progress, Inc

© Boris Zlotin, 2024

© Vladimir Matsenko, 2024

© Progress, Inc, 2024

ISBN 978-5-0064-2381-7


Keywords:

• unique properties,

• transparency of functioning,

• simple mathematical model,

• low cost of implementation and use.

From the authors

The authors, Boris Zlotin, the developer of the theoretical foundations of the PANN and software products based on it, and Vladimir Matsenko, an implementer of these products and participant in the creation and testing of the theory, express their gratitude to those who helped in this work and made a substantial creative contribution:

• Dmitry Pescianschi, the founder of the general idea of a new approach to neural network design.

• Vladimir Proseanic, Anatol Guin, Sergey Faer, Oleg Gafurov, and Alla Zusman, who actively supported the development of PANN with their experience and knowledge in the Theory of Inventive Problem Solving (TRIZ) and their talents.

• Ivan Ivanovich Negreshny – for his constructive criticism, which helped the authors recognize and correct their shortcomings.

Part 1. A New Kind of Neural Network: Progress Artificial Neural Network (PANN)

1. Introduction to the Problem

Where did neural networks come from, and why are we unsatisfied with them?

The development of artificial neural networks began with the work of Turing, McCulloch, Pitts, and Hebb. Based on their ideas, in 1958 Frank Rosenblatt created the first artificial neural network, the «Perceptron,» capable of recognizing and classifying different objects after appropriate training. Unfortunately, the very concept of the perceptron carried a critical flaw rooted in the then-prevailing biological doctrine of Dale: «…A neuron uses one and only one neurotransmitter for all synapses.» This doctrine was transferred to all artificial neural networks through a rule: «…One artificial synapse uses one and only one synaptic weight.» This rule may be called the Rosenblatt Doctrine.

In the 1970s, Dale’s doctrine was rejected by biological science. Unfortunately, Rosenblatt’s doctrine remains unchanged in all neural networks (recurrent, resonant, deep, convolutional, LSTM, generative, and forward and backward error propagation networks). Under this doctrine, networks must be trained by an iterative approach known as the gradient descent method, which requires enormous computation. And it is precisely this doctrine that is «to blame» for the inability to construct an adequate working theory of neural networks. These networks are also characterized by opacity and incomprehensibility, relatively low training speed, difficulty in completing training, and many other innate problems. For more information on the issues of classical neural networks, see Appendix 1.

Therefore, such networks are developed mainly by trial and error. This leads to complexity and low reliability, the need for costly equipment, complex power-hungry calculations, and expensive manual labor to provide training.

The critical «Rosenblatt error» was discovered by researchers (TRIZ specialists) of the deep tech company Progress, Inc. They also found a solution to eliminate this error. Thus, it became possible to create a fundamentally new type of neural network called PANN (Progress Artificial Neural Network). PANN networks and their operations are transparent, predictable, and thousands of times less costly, providing a better solution to many intelligent tasks. Eighteen patents in many countries worldwide protect PANN’s designs and operations. Several new software versions have already been created and tested based on these concepts.

2. Scientific and technical foundations of the PANN network

In this chapter, we will describe the main design features and the theoretical basics of the PANN network.

PANN differs from classical neural networks in that it has a unique design for the main element: the so-called formal neuron. A new formal neuron allows for a different way of training. As a result:

1. The network operation has become completely transparent. Establishing a simple and straightforward theory that predicts the results of actions has become possible.

2. PANN can be implemented on low-cost hardware. Its training and operation costs are much lower than those of classical neural networks.

3. PANN trains many times faster than classical neural networks.

4. PANN can be trained on additional (new) data at any time.

5. PANN does not have the harmful effect of «overfitting.»

2.1. A NEW DESIGN OF THE FORMAL NEURON

Classical neural networks are built of typical «bricks» – formal neurons of simple design, described by McCulloch and Pitts and implemented by Rosenblatt. The main problem with neural networks is the poor design of this formal neuron.

A formal Rosenblatt neuron has one synaptic weight. The PANN’s unique feature is a formal Progress neuron with two or more synaptic weights at each synapse.

Fig. 1. Comparison of formal neurons

In the Progress neuron, as in the Rosenblatt neuron, each input signal travels to the adder through a single synaptic weight. However, in the Progress neuron the distributor selects which of the several weights this will be, based on the magnitude of the input signal.

The main characteristics that describe the Progress neuron are:

• The Progress neuron operates with images presented as numerical (digital) sequences. These can be pictures, films, texts, sound recordings, tables, charts, etc.

• Each Progress neuron is connected to all network inputs. The number of inputs equals the number of digits in the digital sequence (image). For images in raster graphics, this is the number of pixels. For example, at a resolution of 16 × 16, the number of inputs I = 256; at a resolution of 32 × 32, the number of inputs I = 1024.

• The number of synaptic weights of the Progress neuron is at least two. When working with black-and-white graphics and simple tables, two weights («0» and «1») can suffice. When working with color pictures, any palette can be used, for example, 2, 4, 8, 16, or 256 colors, and so on. Note that for effective recognition of different types of images there are optimal palettes, which are easy to determine by simple testing. Here an unexpected property of PANN appears: the optimal number of colors for recognition is usually small; in experiments it was generally between 6 and 10.

• The number of inputs equals the number of members of the digital sequence in question; for raster images, the number of pixels must therefore be the same for all images under consideration. Any aspect ratio of rectangular images can be used. Note that for effective recognition of different types of images there are likewise optimal resolutions, easy to determine by simple testing. Here, too, an unexpected property of PANN manifests itself: the optimal number of pixels for recognition is usually small; for example, for recognizing various kinds of portraits the best resolution can be 32 × 32.
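To make the design concrete, here is a minimal sketch of a Progress neuron in Python. This is only an illustration of the description above, not the Matrix PANN software: all names, the [0, 1] pixel range, and the quantization rule are our assumptions.

```python
import numpy as np

def quantize(image, k_levels):
    """Map pixel values in [0, 1] to integer levels 0..k_levels-1 (the 'palette')."""
    return np.minimum((image * k_levels).astype(int), k_levels - 1)

class ProgressNeuron:
    def __init__(self, n_inputs, k_levels):
        # K synaptic weights per input instead of Rosenblatt's single weight.
        self.weights = np.zeros((k_levels, n_inputs))

    def output(self, levels):
        # The distributor selects, for every input i, the weight in row levels[i];
        # the adder then sums the selected weights (the neuron is strictly linear).
        return self.weights[levels, np.arange(levels.size)].sum()

image = np.array([0.1, 0.9, 0.5, 0.0])           # a toy 4-pixel "image"
neuron = ProgressNeuron(n_inputs=4, k_levels=4)
levels = quantize(image, 4)
print(levels)                                    # [0 3 2 0]
print(neuron.output(levels))                     # 0.0 (weights still untrained)
```

Note that the neuron's response is a plain sum of selected weights, with no activation function, which is exactly the linearity discussed in Section 2.3.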

Fig. 2. Single-neuron two-level PANN network

Fig. 3. Single-neuron multi-level PANN network

2.2. PROGRESS NEURON TRAINING

Training a PANN network is much easier than training any classical network.

The difficulty of training classical neural networks stems from the fact that, when several different images are trained, they affect one another’s synaptic weights and introduce mutual distortions into the training. One must therefore select weights so that their set corresponds to all images simultaneously. To do this, the gradient descent method is used, which requires many iterative calculations.

A fundamentally different approach was developed to train the PANN network: «One neuron, one image,» in which each neuron trains its own image. At the same time, there are no mutual influences between different neurons, and training becomes fast and accurate.

Training the Progress neuron on a specific image boils down to the distributor determining the level of each input signal (in the simplest case, its amplitude or RGB value) and closing the switch corresponding to the weight range into which this value falls.
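The «one neuron, one image» procedure can be sketched as follows (a hypothetical illustration in Python; the function name and the level-index representation are our assumptions). Training reduces to closing one switch per input, i.e., setting a single binary weight, with no iterative computation:

```python
import numpy as np

def train_neuron(image_levels, k_levels):
    """Train one Progress neuron on one image: for every input, close the
    switch at the row matching that input's signal level (set the weight to 1).
    No iterative computation is involved."""
    n = image_levels.size
    weights = np.zeros((k_levels, n), dtype=np.uint8)  # binary weight cells
    weights[image_levels, np.arange(n)] = 1
    return weights

levels = np.array([1, 3, 2, 0])        # quantized pixels of the training image
w = train_neuron(levels, k_levels=4)
print(w.sum(axis=0))                   # [1 1 1 1]: one switch closed per input
```

Because each neuron's weight matrix depends only on its own image, neurons can be trained independently and later combined, as the properties below state.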

Fig. 4. Trained single-neuron multi-level PANN network

The above training procedure of the Progress neuron gives rise to several remarkable properties of the PANN network:

1. Training does not require computational operations and is very fast.

2. One neuron’s set of synaptic weights is independent of other neurons. Therefore, the network’s neurons can be trained individually or in groups, and then the trained neurons or groups of neurons can be combined into a network.

3. The network can be retrained – i.e., the necessary neurons can be changed, added, or removed at any time without affecting the neurons untouched by these changes.

4. A trained neuron-image can easily be visualized using simple color coding that links the levels of the switched-on weights to the brightness or color of the pixels.

2.3. THE CURIOUS PARADOX OF PANN

At first glance, the PANN network looks structurally more complex than classical Artificial Neural Networks. But in reality, PANN is simpler.

The PANN network is simpler because:

1. The Rosenblatt neuron has an activation function: its result is processed by a nonlinear logistic (sigmoid) function, an S-curve, etc. In classical networks this procedure is indispensable, but it complicates the Rosenblatt neuron and makes it nonlinear, which leads to substantial training problems. In contrast, the Progress neuron is strictly linear and causes no such issues.

2. The Progress neuron has an additional element called a distributor, but this is a simple logic device – a demultiplexer that switches the signal from one input to one of several outputs. Moreover, in the Rosenblatt neuron the weights are multi-bit memory cells that must store numbers over a wide range, whereas in PANN the simplest cells (flip-flops), which store only 0 and 1, can be used.

3. Unlike classical networks, PANN does not require huge computer memory and processing power, so inexpensive computers can be used, and much less electricity is consumed.

4. PANN allows you to solve complex problems on a single-layer network.

5. PANN requires tens or even hundreds of times fewer images in the training set.

Thus, it is possible to create full-fledged products based on PANN, using computer equipment that is not very expensive and economical in terms of energy consumption.

Fig. 5. Long and expensive training vs. fast and cheap

2.4. THE MATHEMATICAL BASIS OF RECOGNITION

ON THE PROGRESS NEURON

The linearity of the Progress neuron means that a network built of these neurons is also linear. This ensures its complete transparency and the simplicity of both the theory describing it and the mathematics applied.

In 1965, Lotfi Zadeh introduced the concept of «fuzzy sets» and the idea of «fuzzy logic.» To some extent, this served as a clue for our work in developing PANN’s mathematical basis and logic. Mathematical operations in PANN aim to compare inexactly matching images and estimate the degree of their divergence in the form of similarity coefficients.

2.4.1. Definitions

In 2009, a remarkable discovery was made, known as the «Marilyn Monroe neuron» or, in other sources, the «grandmother neuron»: in the human brain, knowledge of specific topics is «divided» among individual neurons and neuron groups, which are connected by associative links so that excitation can be transmitted from one neuron to another. This knowledge, together with the adopted paradigm of «one neuron, one image,» made it possible to build the PANN recognition system.

Let’s introduce the concept of a «neuron-image» – a neuron trained on a specific image. In PANN, each neuron-image is a realized functional dependency (function) Y = f(X), wherein:

X is a numerical array (vector) with the following properties:

for X = A, f(X) = N;

for X ≠ A, f(X) < N.

A is the given (trained) array.

N is the dimension of vector X, i.e., the number of digits in this vector.
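Under these definitions, one natural reading (our illustrative assumption, not a statement of the PANN implementation) is that f(X) counts the positions in which X agrees with the trained array A, so that f(A) = N and any mismatch lowers the value:

```python
# Illustrative reading of the neuron-image function Y = f(X): the number of
# positions where X agrees with the trained array A (our assumption).

def f(x, a):
    """Similarity of array x to the trained array a: count of matching digits."""
    return sum(1 for xi, ai in zip(x, a) if xi == ai)

A = [1, 9, 3, 6, 4]
print(f(A, A))                 # 5 = N: exact match
print(f([1, 9, 0, 6, 4], A))   # 4: one digit differs, so f(X) < N
```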

Neuron-images are written in a format called the Binary Comparison Format (BCF) – a rectangular binary digital matrix in which:

• The number of columns is equal to the length N (the number of digits) of the array.

• The number of rows equals the number of weight levels K selected for the network.

• Each significant digit is denoted by a one (1) in the corresponding row, and the absence of a digit is denoted by a zero (0).

• Each row corresponds to one possible digit value of the numeric array being written: in the row marked «zero,» a 1 stands for the digit 0 in the original array, and in the row marked «ninth,» a 1 stands for the digit 9.

• In each column of the matrix, a single 1 marks the value of the corresponding digit; all other entries in that column are 0.

• The total number of ones in the matrix of an array equals the length N of the array; for example, for an array of 20 digits it is 20.

• The total number of zeros and ones in the matrix of an array equals the product of the array length N and the base of the number system used (the number of weight levels K).

Example: BCF notation of an array of 20 decimal digits: [1, 9, 3, 6, 4, 5, 4, 9, 8, 7, 7, 1, 0, 7, 8, 0, 9, 8, 0, 2].

Fig. 6. BCF image as a sparse binary matrix
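The BCF encoding described above can be sketched in a few lines of Python (an illustration under our assumptions; `to_bcf` is our name, not part of the PANN software). Running it on the example array confirms the listed properties: K = 10 rows, N = 20 columns, exactly one 1 per column, and N ones in total:

```python
import numpy as np

def to_bcf(array, k=10):
    """Write a digit array in Binary Comparison Format: a K-by-N binary
    matrix with a single 1 per column, in the row equal to the digit value."""
    n = len(array)
    m = np.zeros((k, n), dtype=np.uint8)
    m[np.asarray(array), np.arange(n)] = 1
    return m

arr = [1, 9, 3, 6, 4, 5, 4, 9, 8, 7, 7, 1, 0, 7, 8, 0, 9, 8, 0, 2]
bcf = to_bcf(arr)
print(bcf.shape)       # (10, 20): K rows, N columns
print(int(bcf.sum()))  # 20: total ones equal the array length N
```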

A feature of the PANN network is that the image-by-image training of neurons typical of neural networks can be replaced by reformatting files carrying numerical dependencies into the BCF format, or simply by loading files already in this format into the network.

Type X arrays in BCF format are denoted as matrices |X|.

2.4.2. Comparing Numeric Arrays

Comparing objects or determining similarities and differences