

Dli Maxim I.

Doctor of Technical Sciences, Professor, Deputy Director of the Smolensk Branch of the National Research University MPEI; Higher Mathematics and Natural Sciences Chair, Moscow University for Industry and Finance "Synergy"

Constructing an integrated risk management model for a metallurgical enterprise

A risk management method for a metallurgical enterprise is proposed. The specific features of the enterprise's activities are considered, and the risk management process is decomposed so that every task is described mathematically and allocated to one of the steps resulting from the decomposition.


A method for the intellectual management of industrial enterprise information resources

A method for managing industrial enterprise information resources that describes the separate components of a control system is considered. The method is based on a set of interconnected mathematical models incorporating modifications of graph theory, fuzzy logic and cognitive modeling methods.


Information and transport network project management under uncertainty

The article deals with the problem of managing projects for the development of enterprise information and transport networks. A formalized statement of the problem is presented, as well as a modification of an ant colony algorithm that uses fuzzy logic and fuzzy production rules to account for the uncertainty of demand at different nodes.
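As an illustration of how fuzzy production rules can feed an ant colony algorithm, the sketch below boosts an edge's attractiveness when demand at its target node is "high" (the membership breakpoints, demand scale and rule base are hypothetical, not taken from the article):

```python
def triangular(x, a, b, c):
    """Triangular membership function for the fuzzy demand terms."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def edge_attractiveness(pheromone, distance, demand, alpha=1.0, beta=2.0):
    """Probability weight of an edge in the ant colony algorithm.
    Fuzzy rule (hypothetical): IF demand at the target node is 'high'
    THEN boost the heuristic desirability of the edge."""
    high = triangular(demand, 50.0, 100.0, 150.0)   # membership of 'high'
    heuristic = (1.0 / distance) * (1.0 + high)
    return (pheromone ** alpha) * (heuristic ** beta)
```

Ants would then pick the next node by roulette selection over these weights, so uncertain but high demand nodes attract more construction effort.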

A three-level fuzzy cognitive model for analyzing regional innovation development

The necessity of using cognitive maps to simulate the innovative development of a region is substantiated. The main modeling innovation lies in fuzzy cognitive maps: a new kind of fuzzy cognitive map incorporating the uncertainty and variability of system performance is elaborated.
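A minimal sketch of how a fuzzy cognitive map iterates, assuming a toy three-concept map (investment, innovation activity, output) with hypothetical influence weights:

```python
import math

def sigmoid(x, lam=1.0):
    """Squashing function keeping concept activations in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-lam * x))

def fcm_step(state, weights):
    """One iteration of a fuzzy cognitive map: each concept's new
    activation is the squashed weighted sum of all concepts' current
    activations (weights[j][i] = influence of concept j on concept i)."""
    n = len(state)
    return [sigmoid(sum(weights[j][i] * state[j] for j in range(n)))
            for i in range(n)]

# Hypothetical map: investment -> innovation -> output -> investment.
W = [[0.0, 0.7, 0.0],
     [0.0, 0.0, 0.8],
     [0.2, 0.0, 0.0]]
state = [1.0, 0.0, 0.0]
for _ in range(10):
    state = fcm_step(state, W)
```

Iterating until the state stabilizes gives the scenario forecast; the article's extension would attach uncertainty to the weights themselves.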

A trajectory estimation model for project management in the creation and organization of high-technology industrial production

Projects that create and organize the production of high-technology industrial products include interrelated tasks (activities) of envisioning, planning, design and development. Such projects have several specific features: diverse structural relationships between activities, a high level of information uncertainty, and a large number of controlled parameters, whose quantity depends on external factors and the project's internal connections. These peculiarities make it necessary to modify widely practiced formal methods and models of project management. The article describes the requirements for project models on which the proposed method of model creation is based. The method includes the following stages: decomposing the project into subprojects; creating network models of the subprojects; creating a model that consists of activities belonging to different subprojects and having common input/output connections; and identifying the project goals with the help of indicators. The indicators may be of different types: quantitative (point and interval estimates) and qualitative. The indicators for each goal-oriented project state are integrated using the proposed algorithm and form the project trajectory. The model makes it possible to estimate the project trajectory at various points in time, so that project management remains stable under uncertainty.
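The aggregation of indicators into one trajectory point can be sketched as a weighted mean of normalized indicator values (the weights and scores below are illustrative; interval estimates would first be reduced, e.g. to their midpoints, and qualitative indicators pre-scored on [0, 1]):

```python
def aggregate_indicators(indicators):
    """Weighted aggregation of normalized indicator values into a single
    trajectory point. Each indicator is a (value, weight) pair with the
    value already mapped to [0, 1]."""
    total_weight = sum(w for _, w in indicators)
    return sum(v * w for v, w in indicators) / total_weight

# Hypothetical project state: schedule, budget and quality indicators.
point = aggregate_indicators([(0.8, 2.0), (0.5, 1.0), (1.0, 1.0)])
```

Repeating this at successive checkpoints yields the sequence of points that forms the project trajectory.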

Simulation modeling and fuzzy logic in real-time decision making by airport services

Decision making by the aircraft services of an international airport that handles intensive aircraft traffic and ground handling has become a very topical issue. It was once believed that throughput is determined only by the number of runways, but nowadays a large accumulation of aircraft on the apron creates difficulties no less complex than take-offs and landings. Solving such problems with the "crisp" methods of queuing theory yields little. This article deals with modern "fuzzy" methods based on simulation modeling and fuzzy logic.
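A minimal example of the "fuzzy" side of such a model: fuzzifying the number of aircraft on the apron into linguistic congestion terms with trapezoidal membership functions (the breakpoints are invented for illustration, not airport data):

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function rising on [a, b], flat on [b, c],
    falling on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

def apron_congestion(aircraft_on_apron):
    """Fuzzify the apron load into linguistic terms that fuzzy rules in a
    simulation model could act on (hypothetical breakpoints)."""
    return {
        "low":    trapezoid(aircraft_on_apron, -1, 0, 5, 10),
        "medium": trapezoid(aircraft_on_apron, 5, 10, 15, 20),
        "high":   trapezoid(aircraft_on_apron, 15, 20, 40, 41),
    }
```

A simulation run would evaluate such memberships at each event and fire rules like "IF congestion is high THEN delay pushback clearances."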

Methods for integrating the information systems of business entities using outsourcing

Today the performance of business entities is inextricably linked with the sharing of information resources. This problem can be solved by information systems integration. IT is often not a company's core function, so integration is carried out through IT outsourcing. In this case, the choice of architecture patterns and the management of the information system development project are key. This article suggests information system integration architecture patterns relevant to modern conditions. For effective project management, a model of the interrelation between information system life cycle stages and development project stages is suggested that takes the influence of information risks into account. A decision support system architecture is suggested for minimizing information risks during the development project. The results are intended for practical use in managing the information system integration projects of business entities.

Economic information system lifecycle management based on decentralized application theory

Effective business processes become a competitive edge of an organization and enable it to respond to changes in the external environment in a timely manner. Research and development (innovation) is the key business process providing the basic value of an organization's products and services, and information system lifecycle management is one of its components. The goal of this work is to improve the efficiency of software development and maintenance projects by reducing transaction costs. The article suggests an economic information system lifecycle management model based on decentralized application theory that aims to reduce the cost of information search by securely storing lifecycle process outputs and project documentation versions; to reduce the cost of coordination by automating the verification of lifecycle process outputs; and to reduce the cost of contracting by using self-executing smart contracts, eliminating the need to establish "trust" relations between the parties to the lifecycle. The results are expected to be used in developing information system lifecycle management tools.
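The idea of securely storing lifecycle outputs and automating their verification can be sketched with a hash chain; a real decentralized application would replicate such a ledger across parties and attach smart contracts, which are out of scope here:

```python
import hashlib
import json

def add_version(chain, document, author):
    """Append a document version whose hash covers the previous entry,
    making the stored lifecycle history tamper-evident."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"document": document, "author": author, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    chain.append(record)
    return chain

def verify(chain):
    """Automated verification of the lifecycle output chain: every entry
    must hash correctly and link to its predecessor."""
    prev = "0" * 64
    for rec in chain:
        body = {k: v for k, v in rec.items() if k != "hash"}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["prev"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True
```

Any later edit to a stored version breaks verification, which is what removes the need for "trust" between the lifecycle parties.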

Forming the structure of an intellectual system for analyzing and rubricating unstructured text information in different situations

The analysis of electronic text documents written in natural language is one of the most important tasks implemented in systems for the automated analysis of linguistic information. Today the most complicated problem is analyzing unstructured text documents arriving at various organizations and authorities through electronic communication channels. The increasing volume of such documents leads to the need to rubricate incoming messages, i.e. to solve the classification task. The analysis of scientific works in this field has shown the impossibility of constructing a unified model for rubricating unstructured electronic text documents in various situations. The main reasons are the lack of statistical data, the dynamism of the thesaurus and the small size of the incoming documents. To solve this problem, we propose a multimodel approach to rubrication characterized by the combined use of intellectual and probabilistic-statistical methods of text document analysis. The choice of a specific model is carried out using fuzzy logic algorithms based on the proposed characteristics (the size of the document, the degree of rubric thesaurus intersection, the frequency of meaningful keywords, etc.). The implementation of the proposed multimodel approach will improve the accuracy of attributing unstructured electronic text documents to concrete rubrics, taking into account their specificity and the various objectives of practical application in the organization.
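A rough sketch of the model-selection step: fuzzy-style scores over document characteristics pick one of several classifiers (the thresholds, memberships and model names are hypothetical, not the article's calibrated rule base):

```python
def choose_rubrication_model(doc_size, thesaurus_overlap, stats_volume):
    """Select an analysis model from document characteristics with simple
    fuzzy-style memberships. doc_size is in words, thesaurus_overlap in
    [0, 1], stats_volume in labeled documents (all scales illustrative)."""
    small = max(0.0, min(1.0, (300 - doc_size) / 200))
    overlapping = max(0.0, min(1.0, thesaurus_overlap / 0.5))
    data_rich = max(0.0, min(1.0, stats_volume / 1000))
    scores = {
        "fuzzy_decision_tree": min(small, overlapping),
        "probabilistic": min(1.0 - small, data_rich),
        "neural_network": data_rich,
    }
    return max(scores, key=scores.get)

# A short appeal with overlapping rubrics and little training data:
model = choose_rubrication_model(doc_size=120, thesaurus_overlap=0.4,
                                 stats_volume=50)
```

The intent matches the abstract: data-poor, small, overlapping documents fall to the intellectual (fuzzy) models, data-rich cases to statistical ones.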

Developing an economic information system for the automated analysis of unstructured text documents

A study of the tasks and methods of automated text rubrication was conducted, and their prospects for analyzing unstructured electronic text documents were evaluated, taking into account the peculiarities of appeals sent by citizens to the authorities. The architecture of an information system for the automated analysis of such documents is developed. It implements the proposed multi-model approach to rubrication based on the integrated use of intelligent and probabilistic-statistical methods. The procedure for processing citizens' appeals received by the authorities using the document management system and the developed information system is given.

Algorithms for forming images of object states for analysis by deep neural networks

Algorithms for visualizing numerical data that characterize the state of objects and systems of various nature are presented, with the aim of finding hidden patterns in the data using convolutional neural networks. The algorithms obtain images from numerical data on the basis of the discrete Fourier transform of time-series fragments, as well as through visualization using three-component system diagrams when such a three-component representation of the system is possible. The proposed algorithms were implemented in the Linux environment in Python 3 using the open-source Keras neural network library, a high-level interface to the TensorFlow machine learning framework. The neural network was trained on an Nvidia graphics processor supporting the CUDA parallel computing architecture, which significantly reduced training time. The proposed approach to recognizing the states of objects from their visualized data relies not on the boundaries or shapes of the figures in the images but on their textures. A program that generates sets of images for training and testing convolutional neural networks is also presented; it is used to pre-tune the networks and assess the quality of the proposed algorithms.
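The DFT-based image formation can be sketched as follows: sliding windows of the series become rows of magnitude spectra, yielding a spectrogram-like 2-D array that a convolutional network could consume (pure-Python DFT, for small illustrative inputs only):

```python
import cmath
import math

def dft_image(series, window, step):
    """Each row of the returned 2-D array is the magnitude spectrum of
    one fragment of the series; stacking the rows gives a picture whose
    texture reflects the signal's frequency content over time."""
    rows = []
    for start in range(0, len(series) - window + 1, step):
        frag = series[start:start + window]
        row = [abs(sum(frag[n] * cmath.exp(-2j * cmath.pi * k * n / window)
                       for n in range(window)))
               for k in range(window // 2)]  # keep non-redundant half
        rows.append(row)
    return rows

# A sine with period 8 should light up frequency bin k = 1 in every row.
series = [math.sin(2 * math.pi * n / 8) for n in range(16)]
img = dft_image(series, window=8, step=4)
```

In practice the array would be rescaled to pixel intensities and saved as an image for the Keras pipeline; numpy's FFT would replace the naive DFT.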

Using fuzzy decision trees to rubricate unstructured small-sized text documents

Every day, a large number of appeals (statements, proposals or complaints) submitted in unstructured text form are received via the Internet portals and e-mail of public authorities. The quality and speed of automatic processing of such electronic messages directly depend on the correctness of their classification (rubrication), which consists in assigning the received message to one or several thematic rubrics that determine the responsible departments. The choice of a mathematical approach to analysis and rubrication depends directly on the characteristics of incoming appeals. The analysis of their specifics (small size, the presence of errors, a free style of problem statement, etc.) has revealed the impossibility of applying classical approaches to the classification of text documents. The article suggests using fuzzy decision trees for rubricating small unstructured text documents arriving at the Internet portals and e-mail addresses of public authorities. This allows classification under conditions of rubric intersection and a lack of the statistical information needed for probabilistic and neural network methods. The proposed rubrication model is distinguished by its consideration of syntactic relationships and the roles of words in sentences, based on a binary fuzzy decision tree. The tree is constructed from the results of analyzing the degree of rubric thesaurus intersection and the distances between rubrics in the n-dimensional feature space.
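A minimal sketch of classification with a binary fuzzy decision tree: membership in each node test is accumulated along both branches, so a document can belong to several rubrics with degrees (the tree, feature and membership function are invented for illustration):

```python
def classify(tree, features):
    """Traverse a binary fuzzy decision tree. Internal nodes carry a
    feature test with a fuzzy membership; leaves carry a rubric label.
    Returns {rubric: degree} accumulated over all reachable leaves."""
    if "rubric" in tree:
        return {tree["rubric"]: 1.0}
    mu = tree["membership"](features[tree["feature"]])
    result = {}
    for weight, branch in ((mu, tree["yes"]), (1.0 - mu, tree["no"])):
        if weight > 0:
            for rubric, w in classify(branch, features).items():
                result[rubric] = result.get(rubric, 0.0) + weight * w
    return result

# Hypothetical two-rubric tree: does the appeal mention road defects?
tree = {
    "feature": "road_terms_share",
    "membership": lambda x: max(0.0, min(1.0, (x - 0.1) / 0.2)),
    "yes": {"rubric": "roads"},
    "no":  {"rubric": "housing"},
}
scores = classify(tree, {"road_terms_share": 0.2})
```

A borderline feature value yields partial membership in both rubrics, which is exactly the behavior needed under rubric intersection.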

Analysis of the influence of the architecture of the input convolution and subsampling layers of a deep neural network on image recognition quality

The results of a study of how the characteristics of the convolution and subsampling layers at the input of a deep convolutional neural network influence pattern recognition quality are presented. For the convolution layer, the varied parameter was the size of the convolution kernel; for the subsampling layer, it was the size of the receptive field, which determines the region of the input feature map processed to form the layer's output. Neural network developers have to select all of these architectural parameters of the input convolution and subsampling layers based on their experience and known good practices. This choice is informed by a preliminary analysis of the processed images: image size, number of color channels, and the features determining the classification of recognizable objects into different classes (recognition by silhouette, by texture, and so on). To take these factors into account when designing the input convolution and subsampling layers, it is proposed to use numerical characteristics calculated from histograms of the input images and the dispersions of pixel color intensity. A histogram is constructed for both the entire image and its fragments, and the local variances of the fragments are calculated and compared with the total variance. Based on these comparisons, recommendations were developed for choosing the convolution kernel size, which reduces the time needed to search for a suitable neural network architecture. The influence of the above parameters on image recognition quality by a convolutional neural network was studied experimentally using a network created in Python with the Keras and TensorFlow libraries. The cross-platform TensorBoard solution was used to visualize and control the network's learning process. Training was carried out on an Nvidia GeForce GTX 1060 GPU supporting the CUDA parallel computing architecture.
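The variance-based recommendation can be sketched as follows: local fragment variances are compared with the whole-image variance, and a fine-textured image (high ratio) gets a smaller kernel (the threshold and the 3-vs-5 mapping are illustrative, not the article's calibrated rule):

```python
def suggest_kernel_size(image, fragment=4):
    """Compare mean local fragment variance with total image variance and
    suggest a convolution kernel size: fine texture -> small kernel,
    coarse structure -> larger kernel. 'image' is a 2-D list of
    intensities."""
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    total_var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    local_vars = []
    h, w = len(image), len(image[0])
    for i in range(0, h - fragment + 1, fragment):
        for j in range(0, w - fragment + 1, fragment):
            block = [image[i + di][j + dj]
                     for di in range(fragment) for dj in range(fragment)]
            m = sum(block) / len(block)
            local_vars.append(sum((p - m) ** 2 for p in block) / len(block))
    ratio = ((sum(local_vars) / len(local_vars)) / total_var
             if total_var else 0.0)
    return 3 if ratio > 0.5 else 5
```

A checkerboard (texture) keeps its variance inside every fragment, so the ratio is near 1 and a small kernel is suggested; a half-dark, half-light image (silhouette) has near-zero local variances and gets a larger one.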

Rubrication of text documents based on fuzzy difference relations

One of the key areas of the informatization of public authorities is the development and implementation of systems for the automated processing of electronic appeals (applications, complaints, suggestions) of individuals and legal entities that arrive on official government websites and portals. Rubrication plays an important role in solving this problem: it consists in distributing the appeals among thematic rubrics that determine the departments responsible for processing them and preparing the corresponding response. The analysis of the specific features of such text messages (small size, lack of markup, presence of errors, unsteady thesaurus, etc.) confirmed the impossibility of using traditional approaches to rubrication and justified the use of data mining methods. The article proposes a new approach to the analysis and rubrication of unstructured electronic text documents arriving on the official websites and portals of public authorities. It involves forming a tree-like structure of the rubric field based on fuzzy relations of difference between the syntactic characteristics of documents. The analysis determines the fuzzy correspondence of these documents, by their syntactic characteristics, to the values of the cluster centers, and is carried out sequentially from the root to the leaves of the constructed fuzzy decision tree. The proposed rubrication method has been implemented in software and tested in the automated processing and analysis of appeals (applications, complaints and suggestions) of citizens received by the Administration of the Smolensk Region. This made it possible to ensure prompt and high-quality updating of rubrics and document analysis under a non-stationary composition of the thesaurus and varying importance of rubric words.
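The fuzzy-correspondence step can be sketched as a distance-based membership of a document's syntactic feature vector in each rubric cluster center (the feature values and centers below are invented for illustration):

```python
def fuzzy_correspondence(features, centers):
    """Degree of correspondence of a document's syntactic feature vector
    to each rubric cluster center: the smaller the mean absolute
    difference, the higher the membership."""
    def diff(a, b):
        return sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    return {rubric: 1.0 / (1.0 + diff(features, center))
            for rubric, center in centers.items()}

# Hypothetical 2-feature syntactic profile and two cluster centers.
centers = {"roads": [0.6, 0.1], "housing": [0.1, 0.7]}
memberships = fuzzy_correspondence([0.5, 0.2], centers)
```

At each tree level the document descends toward the center with the highest membership, so the comparison runs from root to leaves rather than against every rubric at once.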

Rubrication of text information based on the voting of intellectual classifiers

The practical implementation of the concept of electronic government is one of the priorities of Russian state policy. The organization of effective interaction between authorities and citizens is an important element of this concept. In addition to providing public services, it should include the processing of electronic appeals (applications, complaints, suggestions, etc.). Research has shown that the speed and efficiency of appeal processing largely depend on the quality of determining the thematic rubric, i.e. on solving the rubrication task. The analysis of citizens' appeals received by e-mail and through the official websites of public authorities has revealed several specific features (small size, errors in the text, free presentation style, description of several problems) that prevent the successful application of traditional approaches to rubrication. To solve this problem, it has been proposed to use various methods of the intellectual analysis of unstructured text data (in particular, fuzzy logical algorithms, fuzzy decision trees, fuzzy pyramidal networks, neuro-fuzzy classifiers, and convolutional and recurrent neural networks). The article describes the conditions of applicability of six intellectual classifiers proposed for rubricating electronic citizens' appeals. These conditions are based on such factors as the size of the document, the degree of intersection of thematic rubrics, the dynamics of their thesauruses, and the amount of accumulated statistical information. For situations where a specific model cannot make an unambiguous choice of a thematic rubric, it is proposed to use a classifier voting method, which can significantly reduce the probability of rubrication errors through the weighted aggregation of the solutions obtained by several models selected using fuzzy inference.
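The voting step can be sketched as a weighted aggregation of per-classifier rubric scores (the competence weights, which the article obtains by fuzzy inference, are given here as fixed constants):

```python
def weighted_vote(predictions):
    """Aggregate rubric scores from several classifiers, each with a
    competence weight. predictions: list of (weight, {rubric: score})
    pairs. Returns the rubric with the highest weighted total."""
    totals = {}
    for weight, scores in predictions:
        for rubric, s in scores.items():
            totals[rubric] = totals.get(rubric, 0.0) + weight * s
    return max(totals, key=totals.get)

# Three hypothetical classifiers disagree on an ambiguous appeal.
winner = weighted_vote([
    (0.5, {"roads": 0.9, "housing": 0.1}),
    (0.3, {"roads": 0.2, "housing": 0.8}),
    (0.2, {"housing": 1.0}),
])
```

Even though two of the three classifiers favor "housing", the most competent one outweighs them, which is the point of weighting votes rather than counting them.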

Creation of a chemical-technological system digital twin using the Python language

Currently, when modeling complex technological processes in cyber-physical systems, procedures for creating so-called "digital twins" (DT) have become widespread. DTs are virtual copies of real objects that reflect their main properties at various stages of the life cycle. The use of digital twins allows real-time monitoring of the current state of the simulated system and provides additional opportunities for engineering and for deeper customization of its components to improve product quality. The development of digital twin technology is driven by the ongoing Fourth Industrial Revolution, characterized by the massive introduction of cyber-physical systems into production processes. These systems are based on the latest technologies for data processing and presentation and have a complex structure of information links between their components. When creating digital twins of the elements of such systems, it is advisable to use programming languages that allow visualization of the simulated processes and provide a convenient, well-developed apparatus for working with complex mathematical dependencies. The Python programming language has these characteristics. In the article, a chemical-technological system based on a horizontal-grate machine is considered as an example of a cyber-physical system. This system is designed to produce pellets from apatite-nepheline ore mining wastes. The article describes various aspects of creating a digital twin of its elements that carry out the chemical-technological drying process, in relation to a single pellet. The digital twin is implemented in Python 3.7.5 and visualizes the process as a three-dimensional interactive model using the VPython library.
The digital twin software's operation algorithm is described, as well as the information system interface, the types of input and output information, and the results of modeling the investigated chemical-technological process. It is shown that the developed digital twin can be used in three versions: independently (Digital Twin Prototype), as an instance of a digital twin (Digital Twin Instance), and as part of a set of digital twins (Digital Twin Aggregate).
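A lumped-parameter sketch of drying a single pellet, the kind of model such a digital twin might step and render (the rate law and coefficients are illustrative stand-ins, not plant data; the VPython rendering of the pellet is omitted):

```python
def dry_pellet(moisture0, gas_temp, dt=1.0, steps=600, k0=1e-4):
    """Lumped-parameter drying of a single pellet: moisture decays at a
    rate that grows with drying-gas temperature, dm/dt = -k0 * T * m.
    Returns the moisture history for plotting or 3-D visualization."""
    rate = k0 * gas_temp          # simple temperature-dependent stand-in
    history = [moisture0]
    m = moisture0
    for _ in range(steps):
        m -= rate * m * dt        # explicit Euler step
        history.append(m)
    return history

# Moisture fraction of a pellet dried by gas at 300 degrees (units
# illustrative); a twin would update a VPython sphere's color per step.
profile = dry_pellet(moisture0=0.09, gas_temp=300.0)
```

A real twin would couple heat and mass transfer inside the pellet and read gas parameters from the grate machine's sensors instead of constants.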

Valery Pavlovich Meshalkin Turns 80

V. P. Meshalkin is the founder of the new scientific direction "Theoretical foundations of engineering, reliability assurance, and logistics management of the energy and resource efficiency of chemical-technological systems for the output of high-quality products". The article describes the main scientific achievements of Academician V. P. Meshalkin, a leading scientist in several fields of study, such as the analysis and synthesis of highly reliable energy-saving chemical-technological systems, and the management of low-waste production facilities with optimal specific consumption of raw materials, energy, water and structural materials. The main projects currently being carried out under the general guidance of Academician of the Russian Academy of Sciences V. P. Meshalkin are presented, including projects on developing the scientific foundations for the rational use of mineral raw materials and methods for the engineering and management of energy-efficient, environmentally safe digitalized production facilities for industrial waste processing.

Preliminary assessment of the pragmatic value of information in the classification problem based on deep neural networks

A method is proposed for the preliminary assessment of the pragmatic value of information in the problem of classifying the state of an object, based on deep recurrent long short-term memory (LSTM) networks. The purpose of the study is to develop a method for predicting the state of a controlled object while minimizing the number of prognostic parameters used, through a preliminary assessment of the pragmatic value of information. This task is especially urgent when processing big data, characterized not only by significant volumes of incoming information but also by its arrival rate and variety of formats. Big data is now generated in almost all areas of activity due to the widespread introduction of the Internet of Things. The method is implemented as a two-level scheme for processing input information. At the first level, the Random Forest machine learning algorithm is used, which has significantly fewer adjustable parameters than the recurrent neural network used at the second level for the final, more accurate classification of the state of the controlled object or process. The choice of Random Forest is due to its ability to assess the importance of variables in regression and classification problems, which is used to determine the pragmatic value of the input information at the first level of the scheme. For this purpose, a parameter reflecting this value is selected, and the input variables, ranked by importance, are selected to form training datasets for the recurrent network. The proposed data processing method with a preliminary assessment of the pragmatic value of information is implemented as a MATLAB program and has shown its efficiency in an experiment on model data.
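The first-level selection step can be sketched with a stdlib stand-in: rank input variables by an importance proxy (absolute correlation with the label here, where the article uses Random Forest importances) and keep the top-ranked ones for the recurrent network:

```python
import statistics

def rank_features(samples, labels):
    """Rank input variables by absolute correlation with the class label,
    a simple stand-in for the Random Forest importance ranking used at
    the first level of the article's two-level scheme."""
    def corr(xs, ys):
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        den = (sum((x - mx) ** 2 for x in xs) *
               sum((y - my) ** 2 for y in ys)) ** 0.5
        return num / den if den else 0.0
    n = len(samples[0])
    scores = [abs(corr([s[i] for s in samples], labels)) for i in range(n)]
    return sorted(range(n), key=lambda i: -scores[i])

# Feature 0 tracks the (toy) labels; feature 1 is uninformative.
samples = [[0.1, 0.9], [0.9, 0.8], [0.2, 0.1], [0.8, 0.2]]
labels = [0, 1, 0, 1]
order = rank_features(samples, labels)
```

The highest-ranked variables would then form the training dataset for the second-level LSTM, shrinking its input dimension before the expensive training starts.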