IT management

Performance management

The article describes the technical concept of organizing data exchange between a specialized settlement center, which performs billing of consumed heat energy, and an energy sales company that supplies heat energy to industrial enterprises, government agencies and the population. The article describes the features of this data exchange problem, which determine the parameters of the mathematical models used to calculate the volumes and costs of consumed energy resources, and then reviews approaches to solving this class of problems. To solve the problem, the features of the data preparation stage preceding the initial data exchange were formalized, and schemes for organizing a regular data flow based on an ontological data model were proposed. The originality of the proposed approach lies in the definition of classes and their properties for concepts reflecting sets of information about the parameters of energy supply facilities and the parameters for calculating volumes, prices and costs of energy resources. This made it possible, using an ontology editor, to form graphically formalized semantics, which became the basis for the rules of data processing for information exchange. The concepts of the ontological model were related to each other by sets of classified predicates, whose use is illustrated by examples of description logic queries. The implemented data exchange process based on the ontological model is illustrated with a data flow diagram. The ontological approach made it possible to establish an end-to-end connection between the formalized representation of the calculation models required for billing and the exchange data model, thereby balancing and satisfying both the management and the information technology requirements for this procedure.
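The general shape of such an ontological model can be sketched in a few lines. The following toy triple store (all class, predicate and instance names are invented for illustration, not taken from the article's model) shows how concepts for supply facilities, meters and tariffs, linked by classified predicates, can drive a billing query in the spirit of a description logic expression:

```python
class Ontology:
    """Minimal triple store standing in for an ontology-editor model."""
    def __init__(self):
        self.triples = set()                     # (subject, predicate, object)

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def objects(self, s, p):
        return {o for (s2, p2, o) in self.triples if s2 == s and p2 == p}

    def instances_of(self, cls):
        return {s for (s, p, o) in self.triples if p == "rdf:type" and o == cls}

onto = Ontology()
# Concepts and predicates (illustrative names, not the article's vocabulary):
onto.add("Boiler_12", "rdf:type", "SupplyFacility")
onto.add("Boiler_12", "hasMeter", "Meter_7")
onto.add("Meter_7", "rdf:type", "HeatMeter")
onto.add("Meter_7", "reportsVolume", 118.4)      # Gcal for the billing period
onto.add("Boiler_12", "hasTariff", "Tariff_A")
onto.add("Tariff_A", "pricePerGcal", 1520.0)

# A query in the spirit of the description logic class expression
# "SupplyFacility AND EXISTS hasMeter.HeatMeter":
billable = {
    f for f in onto.instances_of("SupplyFacility")
    if onto.objects(f, "hasMeter") & onto.instances_of("HeatMeter")
}

# Cost calculation driven by the ontology links:
costs = {}
for f in billable:
    meter = next(iter(onto.objects(f, "hasMeter")))
    tariff = next(iter(onto.objects(f, "hasTariff")))
    volume = next(iter(onto.objects(meter, "reportsVolume")))
    price = next(iter(onto.objects(tariff, "pricePerGcal")))
    costs[f] = volume * price

print(costs)
```

The point of the pattern is that the billing rule never hardcodes a facility: it traverses the predicate links, so changing the exchange data model only means changing triples, not code.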

Software engineering

Algorithmic efficiency

The research aims to improve the reliability of complex technical systems that include an asynchronous electric motor as a component by monitoring their current state. The general objective of the study is to substantiate the possibility of using the stator current vector, whose oscillations are taken as an observable parameter, to diagnose a fault in the mechanical circuit of an electric motor. To substantiate this decision, mathematical modeling combined with a computational experiment was used. The purpose of the experiment and the conditions for its implementation were formulated in advance, which made it possible to obtain a sufficient evidence base. The mathematical model of an asynchronous electric motor is supplemented with pulse load functions in the form of Fourier series, represented by rectangular alternating-sign and constant-sign pulses whose repetition frequency equals the rotational speed of the motor shaft. To conduct the computational experiment, a problem-oriented algorithm and a Maple program were developed. An original feature of the mathematical model, algorithm and program is the linking of the load torque to the angle of rotation of the rotor, which is typical of damaged mechanical circuits of asynchronous electric motors. Using the mathematical model, the effects on a series-produced asynchronous electric motor of load torque pulses from a damaged bearing were revealed, as well as the effects of torque pulses from periodic impacts occurring in a damaged mechanical circuit. In both cases, oscillations of the modulus of the stator current vector coinciding with the rotor rotation frequency were recorded. The results of the computational experiment indicate that stator current fluctuations make it possible to reliably judge the state of the mechanical circuit of the electric motor.
The research materials can be used by operating organizations to create systems for instrumental monitoring of the current technical condition of a fleet of asynchronous electric motors.
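The pulse load function described above can be illustrated outside Maple. The sketch below (in Python, with invented parameter values rather than the article's motor data) builds rectangular alternating-sign load-torque pulses as a truncated Fourier square-wave series with repetition frequency equal to the shaft rotation frequency:

```python
import math

def pulse_load_torque(t, omega_shaft, T_mean=10.0, T_pulse=2.0, n_harmonics=25):
    """Rectangular alternating-sign load-torque pulses with repetition
    frequency equal to the shaft rotation frequency, expanded as a
    truncated Fourier (square-wave) series. Parameter values are
    illustrative, not taken from the article."""
    s = sum(math.sin(k * omega_shaft * t) / k
            for k in range(1, 2 * n_harmonics, 2))   # odd harmonics only
    return T_mean + T_pulse * (4.0 / math.pi) * s

omega = 2 * math.pi * 24.7        # shaft speed of roughly 1482 rpm (assumed)
t_mid = (math.pi / 2) / omega     # middle of the positive half-period
deviation = pulse_load_torque(t_mid, omega) - 10.0
print(deviation)                  # close to +T_pulse at the pulse plateau
```

Feeding such a torque term into the motor's equations of motion is what produces the current-modulus oscillations at the rotor rotation frequency that the computational experiment records.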

Models and methods

The article presents an approach to reducing the likelihood of accidents caused by combinations of individually relatively harmless events of various origins during the operation of civil unmanned aircraft systems. The relevance of the stated problem is discussed, together with an overview of the available official documents regulating the application of unmanned aircraft systems. These systems are considered from the point of view of the control object, and the main groups of technical violations leading to undesirable behavior are listed. The work proposes a new variation of the formulation of the control problem of preventing emergency combinations of events and formulates a general approach to solving this problem over various control time intervals. Since accidents are caused by certain individually non-dangerous basic events occurring in a certain sequence, it is necessary to establish links between them and identify possible combinations. A logical-probabilistic safety analysis is used to represent the relationship between accidents and events. It is proposed to model the development of emergency combinations of events using fault trees that take into account the events of both the system and the external environment. The minimal cut sets of a tree represent the emergency combinations of events, while the paths of successful functioning provide options for preventing the accidents under consideration. The depth of event decomposition when constructing the set of basic events is assumed to be large enough that a specific set of fairly simple and concise actions can be proposed to counter individual events and reduce the probability of an accident to acceptable values. A generalized algorithm of actions is proposed to prevent the development of emergency combinations of events. An example of applying the results of the work during flight preparation is considered.
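The core fault-tree computation can be sketched compactly. In the toy tree below (event names are hypothetical, not taken from the article) the top event requires GPS loss AND at least one of two further events, so each basic event is harmless alone; enumerating subsets of basic events yields the minimal cut sets, i.e. the emergency combinations to be countered:

```python
from itertools import chain, combinations

# Illustrative fault tree: top event = AND(gps_loss, OR(low_battery, strong_wind)).
TREE = ("AND", "gps_loss", ("OR", "low_battery", "strong_wind"))
BASIC = ["gps_loss", "low_battery", "strong_wind"]

def occurs(node, active):
    """Evaluate the tree given the set of basic events that occurred."""
    if isinstance(node, str):
        return node in active
    op, *children = node
    results = [occurs(c, active) for c in children]
    return all(results) if op == "AND" else any(results)

def minimal_cut_sets(tree, basic):
    # All event subsets that trigger the top event...
    cuts = [set(s) for s in chain.from_iterable(
                combinations(basic, r) for r in range(1, len(basic) + 1))
            if occurs(tree, set(s))]
    # ...kept only if no proper subset also triggers it.
    return [c for c in cuts if not any(o < c for o in cuts)]

mcs = minimal_cut_sets(TREE, BASIC)
print(mcs)
```

Blocking any single event in every minimal cut set (here: restoring GPS, or both charging the battery and waiting out the wind) is exactly the kind of simple preventive action set the approach aims to derive; real trees are handled with dedicated algorithms rather than subset enumeration.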

Software engineering

An analysis of existing formulations of computing resource distribution problems has shown that, to date, the properties of geo-distribution, heterogeneity and dynamics of computing environments have not been formalized together with a limit on the execution time of user tasks. The purpose of this article is to develop a new general formulation of the computing resource distribution problem for geo-distributed heterogeneous computing environments with dynamics, along with a set of methods for solving it. The novelty of the results lies, first, in the new formulation of the problem for the specified class of computing environments, which differs from existing ones by jointly integrating into the formal problem statement the controlled parameters of computing resource use for data transit and the computational complexity of the resource allocation procedure itself, and, second, in a set of solution methods that, unlike existing ones, take into account the computational complexity of the load distribution procedure and the characteristics of nodes in transit sections of the network. The study uses discrete optimization methods, including iterative stochastic numerical optimization. The developed set of methods reduces the use of computing resources during the operation of the computing environment and, as a consequence, of other resources that depend on the load on the computing nodes. The experimental results confirm the effectiveness of the developed methods: they reduce the computing resources spent on the allocation process by up to 2 times and shorten the execution time of a set of tasks by up to 2 times while maintaining the device load level, owing to the selection of the algorithmic implementation of data processing.
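A toy instance makes the problem class concrete: tasks are assigned to heterogeneous, geo-distributed nodes so that completion time stays within a limit, accounting both for node throughput and for data-transit time to each node. All numbers below are invented, and the instance is small enough to solve exhaustively, which stands in here for the iterative stochastic optimization methods the article applies at realistic scales:

```python
from itertools import product

tasks = [4.0, 3.0, 2.0, 2.0, 1.0]   # task sizes, abstract work units
speed = [1.0, 2.0]                   # node throughputs (heterogeneity)
transit = [0.5, 1.5]                 # data-transit time to each node
T_LIMIT = 7.0                        # limit on execution time of user tasks

def makespan(assign):
    """Completion time of a task-to-node assignment: each task pays its
    transit time plus its processing time on the chosen node."""
    finish = [0.0] * len(speed)
    for size, node in zip(tasks, assign):
        finish[node] += transit[node] + size / speed[node]
    return max(finish)

# Exhaustive search over all 2^5 assignments (stochastic search at scale):
best = min(product(range(len(speed)), repeat=len(tasks)), key=makespan)
print(best, makespan(best), makespan(best) <= T_LIMIT)
```

Note the trade-off the transit term introduces: the fast remote node is worth its transfer cost only for large tasks, which is why formulations that ignore transit mis-allocate in geo-distributed settings.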

Laboratory

Research into processes and systems

Coronary artery disease, caused by narrowing of the coronary arteries, is among the most dangerous heart diseases. In clinical practice, X-ray coronary angiography is the main imaging method used to diagnose it. The high cost and the complexity of analyzing a large amount of data by a cardiac surgeon make it necessary to automate image processing and the diagnosis of stenoses. In this paper we considered deep models for the detection, localisation and characterisation of stenoses based on the popular SSD, R-FCN, Faster R-CNN, RetinaNet and EfficientDet architectures. The models were pre-trained on the COCO image set and differed in the underlying backbone network. Computational experiments on stenosis detection from X-ray images were performed on coronary angiography data consisting of 9378 clinically acquired video sequences from invasive coronary angiography in DICOM format, labelled into individual frames containing coronary artery stenosis. A total of 1593 image sequences with a resolution of 512×512 pixels were annotated. A comparative analysis of the models was carried out in terms of the main performance indicators: mAP accuracy, image processing time and the number of model parameters. The obtained results allow us to state that the Faster R-CNN (ResNet101) and EfficientDet D4 (ResNet101) models are the detectors of choice for coronary artery stenosis: they combine high detection accuracy and image processing speed with a relatively small number of parameters. A comparison with the results of other researchers showed that the results obtained in this work are superior or comparable.
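The mAP comparison above rests on matching predicted boxes to annotated stenosis regions by intersection-over-union (IoU). A minimal sketch of that matching criterion, with invented coordinates rather than data from the study:

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

# Illustrative boxes on a 512x512 angiography frame (coordinates invented):
gt   = (100, 120, 180, 200)    # annotated stenosis region
pred = (110, 130, 190, 210)    # detector output
match = iou(gt, pred) >= 0.5   # counted as a true positive at mAP@0.5
print(iou(gt, pred), match)
```

Sweeping the detector's confidence threshold and averaging precision over recall at such IoU thresholds is what produces the mAP figures used to rank the models.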
The results of a study are presented whose purpose was to develop the structure of a hybrid digital model for managing the processing of fine ore raw materials, as well as an algorithm for converting technological data in accordance with this structure, thereby improving management quality and, as a consequence, the economic efficiency of processing. The original idea underlying the hybrid digital model is the use of neural ordinary differential equations (Neural ODE) to calculate the dynamics of technological objects and the processes implemented in them. Neural ODEs are a type of physics-motivated neural network that incorporates physical laws into the learning process. The resulting digital intelligent machine learning system is capable of reconstructing the dynamics function with high accuracy from observational data of a technological object or process. The proposed hybrid model provides for the joint use of Neural ODE and Simulink simulation models of the technological processes of fine ore processing when calculating control actions. This makes it possible to quickly model and analyze the reaction of dynamic objects to control inputs and to promptly make the necessary changes without waiting for the reaction of the physical original. Numerical experiments have shown that the use of Neural ODE as part of the hybrid digital model accurately reproduces the dynamics of technological objects under various initial conditions. For comparison, experiments were carried out with a model in which an LSTM recurrent neural network was used instead of Neural ODE. In the latter case, the dynamics were simulated with high accuracy only under the original initial conditions; when these changed, accuracy degraded severely.
At the same time, using Neural ODE instead of LSTM showed consistently high accuracy in reproducing the dynamics under such changes, which will help improve the quality of control of the technological processes of fine ore processing and their economic efficiency.
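The reason Neural ODE generalizes across initial conditions, while a sequence model fit to one trajectory does not, can be shown in miniature: the network learns the dynamics function f(x) itself, and states are obtained by integrating dx/dt = f(x), so the same f serves every trajectory. In the sketch below a trivial "trained" linear function stands in for a neural network (all parameters invented), and a plain Euler solver plays the role of the ODE integrator:

```python
import math

def f_learned(x, theta=-0.5):
    """Stands in for a trained dynamics network f(x); here dx/dt = -0.5 x."""
    return theta * x

def integrate(x0, t_end, dt=1e-3):
    """Explicit Euler ODE solver (real Neural ODEs use adaptive solvers)."""
    x, t = x0, 0.0
    while t < t_end:
        x += dt * f_learned(x)
        t += dt
    return x

# The same learned dynamics works for any initial condition, matching the
# analytic solution x(t) = x0 * exp(-0.5 t):
for x0 in (1.0, 3.0, -2.0):
    print(x0, integrate(x0, 1.0), x0 * math.exp(-0.5))
```

An LSTM trained on the x0 = 1.0 trajectory alone has only memorized that sequence; changing x0 takes it off its training distribution, which is the degradation the experiments above observed.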
This study was carried out as part of a project to develop a subsystem for predicting the deterioration of the condition of patients with cardiovascular diseases on the platform of the medical information system "1C: Medicine. Hospital". The relevance of this task is due to the particularly high danger of this group of diseases and the necessity of making timely decisions about hospitalization or treatment when there is a risk of deterioration of the patient's condition. The goal of this work was to create a tool that allows the attending physician to quickly obtain a reasonable assessment of the risk of deterioration of the patient's condition based on available medical indicators. As part of this study, an analysis of more than 30 thousand records containing patient health indicators downloaded from the regional medical information system was performed. The data set was labeled in accordance with the available information about the medical decisions made (by attending physicians at the clinic and hospital). The lack of standardized input of health indicators into the medical system required a significant amount of work to pre-process the input data and prepare it for modeling. The prepared data was used to build a predictive model using machine learning methods. Based on the results of the computational experiments, gradient boosting was chosen as the learning algorithm, and its optimal parameters were selected. The prediction quality of the trained models was tested on data from the labeled set that did not participate in the training process. The quality indicators of the best model on test data were precision = 0.87, recall = 0.96 and AUC-ROC = 0.97. The trained models were integrated with the attending physician's automated workstation in the "1C: Medicine. Hospital" system.
Thus, an algorithm was developed for processing patient health indicators, from downloading primary data from the medical accounting system to obtaining a forecast, that takes into account the peculiarities of data storage in the system and allows the doctor to quickly receive information about identified risk cases after each update of indicator values in the system. It was shown that standardizing the values of medical research results entered into the system will help improve forecasting quality by increasing the model's robustness to changes in input data.
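The quality metrics quoted above can be reproduced from first principles. A sketch with invented labels and scores (not the study's data): precision and recall at a 0.5 decision threshold, and AUC-ROC via its rank (Mann-Whitney) formulation as the probability that a random positive is scored above a random negative:

```python
y_true  = [1, 1, 1, 0, 0, 1, 0, 0, 1, 0]        # illustrative labels
y_score = [0.9, 0.8, 0.7, 0.6, 0.4, 0.65, 0.3, 0.2, 0.55, 0.35]

# Precision and recall at a 0.5 threshold:
y_pred = [1 if s >= 0.5 else 0 for s in y_score]
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
precision = tp / (tp + fp)
recall = tp / (tp + fn)

# AUC-ROC = P(score of random positive > score of random negative),
# with ties counted as one half:
pos = [s for s, t in zip(y_score, y_true) if t == 1]
neg = [s for s, t in zip(y_score, y_true) if t == 0]
auc = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg) / (len(pos) * len(neg))

print(precision, recall, auc)
```

For a risk-screening tool the recall/precision balance matters: a high recall (as in the reported 0.96) keeps missed deterioration cases rare at the cost of some false alerts for the physician to dismiss.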

Information security

Data protection

The article discusses the tasks of modernizing the information security system of a distributed computing system. In an increasingly aggressive environment, objects whose security was previously considered sufficient now require additional protection measures; significant objects of critical information infrastructure, currently subject to government regulation, are an example. To solve the problem of synthesizing the component composition of an information security system, the article proposes a complex algorithm whose steps employ various mathematical tools. This approach makes it possible to select an acceptable set of software and hardware tools capable of blocking attacks at a given level of protection. The problem is solved on the basis of the functionality of software and hardware components, their parameters and functional relationships. The novelty of the results lies in the presentation of a discrete model of an information security system in the form of a simulation model (a special case of stochastic programming), which makes it possible to take into account the functional features of hardware and software when modernizing the information security system. A simulation algorithm is proposed that takes into account the characteristics of the information security system, which can take both deterministic and probabilistic values. The necessary definitions are introduced, and their provisions are illustrated with numerical examples.
The calculations make it possible to identify the most scarce resources, establish how successful the specialization and structure of the information security system are, and evaluate the results of changes in the information security system and of the redistribution of its functions and material resources.
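A toy Monte Carlo sketch conveys the flavor of such a simulation model (the components, attack paths and probabilities below are all invented for illustration, not the article's model): each protection component blocks an attack step with a characteristic that is either deterministic or probabilistic, and the system withstands an attack on a path if at least one component on that path blocks it.

```python
import random

components = {                 # attack path -> block probabilities of its tools
    "network":  [1.0, 0.7],    # firewall (deterministic), IDS (probabilistic)
    "endpoint": [0.9],         # antivirus (probabilistic)
}

def attack_blocked(path, rng):
    """One simulated attack: blocked if any component on the path fires."""
    return any(rng.random() < p for p in components[path])

def estimate(path, trials=100_000, seed=1):
    """Monte Carlo estimate of the blocking probability for a path."""
    rng = random.Random(seed)
    hits = sum(attack_blocked(path, rng) for _ in range(trials))
    return hits / trials

for path in components:
    print(path, estimate(path))
```

Running such estimates before and after a proposed change in component composition is one way the calculations mentioned above can compare protection options and expose the paths whose resources are scarcest.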