
Journal archive

№ 3 (93), June 2021

Contents:

On the anniversary of the scientist

Authors: Maxim Dli, Yu. Rubin

V. P. Meshalkin is the founder of a new scientific direction: "Theoretical foundations of engineering, reliability assurance, and logistics management of the energy and resource efficiency of chemical-technological systems for the output of high-quality products". The article describes the main scientific achievements of Academician V. P. Meshalkin, a leading scientist in several fields of study, including the analysis and synthesis of highly reliable energy-saving chemical-technological systems, and the management of the operation of low-waste production facilities with optimal specific consumption of raw materials, energy, water and structural materials. The main projects currently being carried out successfully under the general guidance of Academician of the Russian Academy of Sciences V. P. Meshalkin are presented, including projects on the development of scientific foundations for the rational use of mineral raw materials and methods of engineering and managing energy-efficient, environmentally safe digitalized facilities for industrial waste processing, etc.

IT management

Performance management

Currently, one of the main directions in the field of banking process automation is the creation and implementation of integrated management decision support systems. In the context of growing competition and the general digitalization of the economy, the issue of improving the efficiency of bank management is especially acute. Most of the automated systems used in this area are aimed at identifying "gaps" in existing business processes and further optimizing their individual parts; moreover, such systems are not based on economic-mathematical models and algorithms for solving them. This article describes an intelligent software package that makes it possible to simulate optimal program and adaptive control of specific business processes: managing the headcount and the sales system of the retail block of a commercial bank. The package is built on a discrete dynamic economic-mathematical model of the business processes under study and on the developed optimization algorithms for program and adaptive control of these processes. The decision-making process concerns the recruitment or reduction of various categories of employees of the retail block of a commercial bank, as well as the management of the sales system supported by these employees. The paper presents the main stages of constructing the proposed controlled dynamic model with a vector quality criterion. Computer modeling with the developed software package produced optimal solutions for various practical examples; the results are graphically illustrated and analyzed. Based on the proposed dynamic model, it is possible to solve other problems of optimal program and adaptive control of processes that determine banking activities, and to develop automated information systems supporting managerial decision-making in this area.
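For orientation only, the following minimal Python sketch shows the general shape of a discrete dynamic headcount model of the kind the abstract describes: hiring actions are chosen over a short horizon to maximize cumulative profit. This is not the authors' model; the productivity, salary, hiring-cost figures and the single scalar criterion (standing in for their vector quality criterion) are all hypothetical.

```python
# A minimal sketch (not the authors' model) of program control of
# headcount in discrete time: choose hiring actions that maximize
# cumulative profit of a sales unit. All numbers are hypothetical.
from itertools import product

T = 4                      # planning horizon, quarters
actions = (-2, 0, 2, 4)    # staff change allowed per quarter
n0 = 10                    # initial headcount
revenue_per_emp = 5.0      # sales generated per employee per quarter
salary = 3.0               # cost per employee per quarter
hire_cost = 1.0            # one-off cost per hired employee

def profit(plan):
    n, total = n0, 0.0
    for a in plan:
        n = max(0, n + a)                     # headcount dynamics
        total += n * (revenue_per_emp - salary)
        total -= hire_cost * max(a, 0)        # pay only for hiring
    return total

# Exhaustive search is enough for this toy action set and horizon.
best = max(product(actions, repeat=T), key=profit)
print("best hiring plan:", best, "profit:", profit(best))
```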

The article studies the operating experience of the Sber financial ecosystem as a new form of entrepreneurial activity in a competitive economic environment, driven by the impact of digitalization on economic convergence processes, a modern trend in social development in general. The study of the Sber financial ecosystem, one of the most highly developed in Russia, is of both theoretical and practical interest. The purpose of the article is to describe the actual experience of the Sber ecosystem's operation. The results of the analysis are as follows. The Sber ecosystem is a form of organizing joint business within the framework of intersectoral convergence driven by digitalization. The impact of intersectoral convergence is manifested in the fact that the ecosystem was initiated by a financial institution, the largest Russian savings bank, while its participants represent a wide variety of sectors and segments of the economy. The impact of digitalization shows in the fact that the joint business rests on a modern digital base comprising IT, IT platforms and networks. Modern mathematical and instrumental methods of data processing and IT startups are not only the digital specifics of the ecosystem's functioning, but also effective tools for attracting partners from various fields of activity to the joint business, on a voluntary basis only, and they provide the Sber ecosystem with undoubted competitive advantages.

Software engineering

Algorithmic efficiency

The widespread use of web-based systems in business, marketing, e-learning, etc. makes it necessary to take into account and analyze the information needs of users in order to optimize interaction with them. One of the main problems in creating adaptive web-based systems is classifying the information resources (pages) of a portal that describe the offered product or service, in order subsequently to form a user profile and personalized service recommendations. Data mining and machine learning methods can be used to solve this problem. The article presents a new approach to creating adaptive web-based information systems that uses reinforcement learning algorithms to classify information resources and to form personalized recommendations based on user preferences. An adaptive approach based on reinforcement learning procedures is proposed and justified; it automatically finds the most effective strategies for correctly classifying the site's resources and for forming groups of users with the same type of requests and preferences. The proposed scheme makes it possible to create procedures for evaluating and ranking the system's information resources based on online analysis of user behavior on the site. The reinforcement learning algorithms employed make it possible to evaluate the relevance of each page of the site to the requests and preferences of users from different categories in order to optimize the structure and content of the site, as well as to build an effective recommendation system aligned with users' interests, so that they can choose the most suitable products or services.
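The abstract does not specify which reinforcement learning procedure is used, so the sketch below substitutes a simple epsilon-greedy bandit as a neutral stand-in: each page is an action, a click is a reward, and the learned values rank pages for a given user group. The page names and click probabilities are invented for illustration.

```python
# A hedged sketch of RL-based ranking of site pages: an epsilon-greedy
# multi-armed bandit learns which pages a user group tends to click.
# Pages and click-through rates are hypothetical.
import random

pages = ["catalog", "pricing", "blog", "support"]
true_ctr = {"catalog": 0.30, "pricing": 0.20, "blog": 0.05, "support": 0.10}

values = {p: 0.0 for p in pages}   # estimated relevance per page
counts = {p: 0 for p in pages}
eps = 0.1                          # exploration rate

random.seed(1)
for step in range(5000):
    if random.random() < eps:                  # explore a random page
        page = random.choice(pages)
    else:                                      # exploit best estimate
        page = max(pages, key=values.get)
    reward = 1 if random.random() < true_ctr[page] else 0  # simulated click
    counts[page] += 1
    values[page] += (reward - values[page]) / counts[page]  # running mean

ranking = sorted(pages, key=values.get, reverse=True)
print("recommended page order:", ranking)
```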

Defense software

In this paper, we evaluate the resistance of known cryptographic methods, and of methods based on noise-like signals (similar in properties to "limited" white noise and used to spread the spectrum of transmitted messages), to the destructive effect of "viewing transmitted data" (deciphering) based on the enumeration of code structures (brute force) in the case of quantum computers. It is established that, taking into account the constantly improving computing power of quantum computers, the required size of the key space for the next few years should be taken as 10^32 code structures or more, which provides cryptographic resistance for a minimum of 3 years. It is shown that the Grover algorithm is analogous to the destructive effect of "viewing transmitted data" (deciphering) based on a complete enumeration of all code structures (brute force) using modern supercomputers. It is established that well-known symmetric cryptographic methods can potentially be used in the post-quantum era, whereas methods based on noise-like signals, once the signals are detected and the methods underlying them are known (even without knowledge of the key), cannot be applied in the post-quantum era. According to the authors, a promising approach to information security in the post-quantum era is the use of chaotic signals.
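To make the key-space figure concrete: Grover's algorithm reduces exhaustive search over N variants to on the order of sqrt(N) quantum queries, so a key space of 10^32 leaves about 10^16 Grover iterations. The short computation below illustrates this; the assumed query rate of 10^8 iterations per second is purely illustrative, but under it the search takes roughly three years, consistent with the resistance estimate above.

```python
# Effective search effort for a key space of N = 10**32 variants:
# Grover's algorithm needs on the order of sqrt(N) quantum queries.
# The assumed rate of 10**8 queries per second is an illustration only.
from math import isqrt

N = 10**32                      # number of code structures (key space)
grover_queries = isqrt(N)       # ~10**16 quantum queries
rate = 10**8                    # assumed Grover iterations per second
years = grover_queries / rate / (365 * 24 * 3600)
print(f"Grover queries: {grover_queries:.1e}, time: {years:.1f} years")
```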

Software engineering

Author: Dmitry Chitalov

The present study is devoted to the development of a software module that converts computational meshes created on the basis of the OpenFOAM platform into the msh format, used in numerical experiments using the ANSYS FLUENT package. Thanks to this conversion, the user is able to use both products in parallel. The ANSYS FLUENT functionality can, for example, be used within the framework of post-processing of a numerical model in most fundamental problems of continuum mechanics (CM), including in hydrodynamics, aerodynamics, and solid mechanics. The existing analogues of the OpenFOAM platform, such as Salome, Helyx-OS, Visual-CFD, have already implemented tools for solving this problem, but due to their partial commercial distribution, the need to pay for technical support services and the lack of full-fledged Russian documentation, the problem of the lack of a graphical shell to simplify the procedure conversion remains relevant. The process of converting computational meshes generated by means of the OpenFOAM platform into the msh-format used in the ANSYS FLUENT package is the subject of this study. The purpose of the work is to develop the source code of a software module that automates the process of determining conversion parameters and starting the conversion process. The work presents a diagram corresponding to the algorithm of a specialist's work with the considered software module. A stack of technologies for typing, debugging and running program code is presented, a stack of tools for using the module in question is presented. The results of the research have been determined, the provisions of its scientific novelty and supposed practical significance have been formulated. The results of testing the application are presented on the example of one of the classic experiments based on the OpenFOAM platform.
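The module's source code is not reproduced in the abstract; the sketch below only illustrates the underlying conversion step it automates, assuming the standard foamMeshToFluent utility shipped with OpenFOAM, which is expected to write the case mesh into a fluentInterface directory inside the case. The case path is hypothetical.

```python
# A hedged sketch of automating the OpenFOAM-to-FLUENT mesh conversion,
# assuming OpenFOAM's standard foamMeshToFluent utility is on PATH.
# The case directory path is a hypothetical example.
import subprocess
from pathlib import Path

case_dir = Path("~/run/cavity").expanduser()   # hypothetical case

# foamMeshToFluent reads constant/polyMesh and is expected to write
# fluentInterface/<case>.msh inside the case directory.
result = subprocess.run(
    ["foamMeshToFluent", "-case", str(case_dir)],
    capture_output=True, text=True,
)
if result.returncode != 0:
    raise RuntimeError(result.stderr)

msh_files = list((case_dir / "fluentInterface").glob("*.msh"))
print("converted mesh files:", msh_files)
```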

Simulation

Theory and practice

Author: D. Sorokin

The author of this article presents his own work, DVCompute Simulator, a collection of general-purpose programming libraries for discrete event simulation. The aim of the research was to create a set of simulators in the Rust language, efficient in terms of execution speed, based on a unified approach and intended for different simulation modes. The simulators implement ordinary sequential simulation, nested simulation and distributed simulation. The article notes that nested simulation is related to game theory, while distributed simulation can be used for running large-scale simulation models on supercomputers. It is shown how these different simulation modes can be implemented within a single approach that combines several paradigms: the event-oriented paradigm, the process-oriented one, blocks similar to those of the GPSS language and, partially, agent-based modeling. The author's approach relies on functional programming techniques, where the simulation model is defined as a composition of computations. The results of testing two modules that support the optimistic and conservative methods of distributed simulation are provided.
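DVCompute itself is a set of Rust libraries and its API is not shown in the abstract; as a neutral illustration of the event-oriented paradigm it builds on, here is a minimal event-queue simulator in Python. The event names and times are invented.

```python
# A minimal illustration of the event-oriented DES paradigm (not the
# DVCompute API): a priority queue of (time, event) pairs is drained
# in time order, and handlers may schedule further events.
import heapq

class Simulation:
    def __init__(self):
        self.now = 0.0
        self._queue = []

    def schedule(self, delay, handler):
        # id(handler) breaks ties so handlers are never compared directly
        heapq.heappush(self._queue, (self.now + delay, id(handler), handler))

    def run(self, until):
        while self._queue and self._queue[0][0] <= until:
            self.now, _, handler = heapq.heappop(self._queue)
            handler(self)

def arrival(sim):                      # invented example event
    print(f"t={sim.now:.1f}: customer arrives")
    sim.schedule(2.0, departure)       # schedule a follow-up event

def departure(sim):
    print(f"t={sim.now:.1f}: customer departs")

sim = Simulation()
sim.schedule(1.0, arrival)
sim.run(until=10.0)
```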

Laboratory

Models and Methods

Author: Y. Lavrenkov

We consider the synthesis of a hybrid convolutional neural network with a modular topology-based architecture, which makes it possible to arrange a parallel convolutional computing system combining energy transfer and data processing, in order to simulate complex functions of natural biological neural populations. The system of interlayer neural commutation, based on distributed resonance circuits with layers of electromagnetic metamaterial between the inductive elements, serves as a basis for simulating the interaction between astrocyte networks and the neural clusters responsible for information processing. Consequently, data processing is considered both at the level of signal transmission through neural elements and as the interaction of artificial neurons with the astrocytic networks that ensure their functioning. The resulting two-level neural data processing system implements a set of measures for solving the problem based on a neural network committee. The specific arrangement of the neural network enables us to implement and configure the training procedure using properties absent in neural networks consisting of neural populations only. The training of the convolutional network is based on a preliminary analysis of rhythmic activity, in which artificial astrocytes play the main role of interneural switches. Analyzing the signals moving through the neural network allows us to adjust the variable components so that information from training batches is represented in the available memory circuits in the most efficient way. Moreover, during training we observe the activity of neurons in various areas in order to distribute the computational load evenly across the neural network modules and achieve maximum performance. The trained convolutional network is used to solve the problem of determining the optimal path for an object that moves using energy from the environment.

A method is proposed for the preliminary assessment of the pragmatic value of information in the problem of classifying the state of an object using deep recurrent networks with long short-term memory. The purpose of the study is to develop a method for predicting the state of a controlled object while minimizing the number of prognostic parameters through a preliminary assessment of the pragmatic value of information. This task is especially urgent when processing big data, which is characterized not only by significant volumes of incoming information but also by its arrival rate and variety of formats. Big data is now generated in almost all areas of activity owing to the widespread introduction of the Internet of Things. The method is implemented as a two-level scheme for processing input information. At the first level, a Random Forest machine learning algorithm is used, which has significantly fewer adjustable parameters than the recurrent neural network used at the second level for the final, more accurate classification of the state of the controlled object or process. Random Forest was chosen for its ability to assess the importance of variables in regression and classification problems, and this ability is used at the first level to determine the pragmatic value of the input information: a parameter reflecting this value is selected, the input variables are ranked by importance, and the top-ranked ones are selected to form training datasets for the recurrent network. The proposed data processing method with a preliminary assessment of the pragmatic value of information is implemented as a program in the MATLAB language and has shown its efficiency in an experiment on model data.
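The following sketch reproduces the general two-level scheme in Python rather than the authors' MATLAB implementation: a Random Forest ranks input variables by importance, and only the top-ranked ones feed an LSTM classifier. The synthetic data, feature counts and selection threshold are illustrative assumptions.

```python
# A minimal Python sketch (not the authors' MATLAB program) of the
# two-level scheme: Random Forest importance ranking, then an LSTM
# trained only on the selected features. Data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
n_series, seq_len, n_features = 500, 20, 12
X = rng.normal(size=(n_series, seq_len, n_features))
# Only features 0 and 3 actually carry the class signal here.
y = (X[:, :, 0].mean(axis=1) + X[:, :, 3].mean(axis=1) > 0).astype(int)

# Level 1: estimate the pragmatic value of each variable with a
# Random Forest trained on per-series averages of the features.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(X.mean(axis=1), y)
top = np.argsort(rf.feature_importances_)[::-1][:4]  # keep 4 best
print("selected features:", top)

# Level 2: train an LSTM classifier on the selected features only.
class LSTMClassifier(nn.Module):
    def __init__(self, n_in, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_in, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)
    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out[:, -1])          # last time step

model = LSTMClassifier(len(top))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
xb = torch.tensor(X[:, :, top], dtype=torch.float32)
yb = torch.tensor(y)
for _ in range(50):                            # short training loop
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(xb), yb)
    loss.backward()
    opt.step()
print("final training loss:", float(loss))
```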

Information security

Data protection

Author: I. Lebedev

The relevance of the topic considered in the article lies in solving the problematic issues of identifying rare events under class imbalance in training sets. The purpose of the study is to analyze the capabilities of an ensemble of classifiers trained on different imbalanced data subsets. The features of analyzing the state of heterogeneous segments of an Internet of Things network infrastructure using machine learning methods are considered. The prerequisites for the emergence of unbalanced data during the formation of training samples are indicated. A solution is proposed based on an ensemble of classifiers trained on different training samples with an imbalance of the classified events. An analysis is given of the possibility of using unbalanced training sets for an ensemble of classifiers in which errors are averaged out by the collective voting procedure. An experiment was carried out using weak classifying algorithms. The distributions of feature values in the test and training subsets are estimated. Classification results are obtained for the ensemble as a whole and for each classifier separately. The imbalance investigated consists in a violation of the ratios between the numbers of events of a certain type within one class in the training data subsets. The absence of such data in the training sample increases the scatter of responses; this effect is averaged out by increasing the complexity of the model, that is, by including various classifying algorithms in it. The proposed approach can be applied in information security monitoring systems. A feature of the proposed solution is the ability to scale it and to combine classifiers by adding new classifying algorithms. In the future, the composition of the classification algorithms can be changed during operation, which makes it possible to increase the accuracy of identifying a potential destructive effect.
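A minimal sketch of the voting scheme described above: several weak classifiers are each trained on a differently imbalanced subset of the same data, and their majority vote is compared with the individual classifiers. The data, imbalance ratios and choice of logistic regression as the weak learner are synthetic illustrative assumptions.

```python
# A sketch of an ensemble of weak classifiers trained on differently
# imbalanced subsets, combined by majority voting. Synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_data(n):                      # two shifted Gaussian classes
    X0 = rng.normal(0.0, 1.0, (n, 2))
    X1 = rng.normal(1.5, 1.0, (n, 2))
    return np.vstack([X0, X1]), np.array([0] * n + [1] * n)

X_train, y_train = make_data(2000)
X_test, y_test = make_data(500)

def imbalanced_subset(ratio, size=600):
    """Sample a training subset where class 1 makes up `ratio` of events."""
    idx1 = rng.choice(np.where(y_train == 1)[0], int(size * ratio))
    idx0 = rng.choice(np.where(y_train == 0)[0], size - len(idx1))
    idx = np.concatenate([idx0, idx1])
    return X_train[idx], y_train[idx]

models = []
for ratio in (0.05, 0.20, 0.50, 0.80, 0.95):   # varying imbalance
    Xs, ys = imbalanced_subset(ratio)
    m = LogisticRegression().fit(Xs, ys)
    models.append(m)
    acc = accuracy_score(y_test, m.predict(X_test))
    print(f"classifier trained at ratio {ratio:.2f}: accuracy {acc:.3f}")

# Collective voting averages out the individual classifiers' errors.
votes = np.stack([m.predict(X_test) for m in models])
ensemble = (votes.mean(axis=0) > 0.5).astype(int)   # majority vote
print("ensemble accuracy:", accuracy_score(y_test, ensemble))
```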