IT management

Performance management

When managing complex projects for developing innovative products and organizing their production, decision-making is influenced by many situational aspects. This complicates assessing the quality of decisions, which usually involve multiple alternatives and require accounting for random influences. In such cases, a significant effect can be achieved with bioinspired methods that find a solution acceptable for a specific situation and use elements of fuzzy set theory to describe NON-factors. The article proposes a generalized approach to building a model based on these methods, intended to support decision-making in managing an innovative project. The model is distinguished by the comprehensive use of fuzzy bioinspired methods for selecting and justifying courses of action in strategic and operational planning and in situational management of project activities, taking into account the general and specific characteristics of the project stages as well as the dynamic nature of external and internal factors. The proposed approach forms the basis of a fuzzy method for selecting equipment for experimental design work and for organizing the production of innovative products, built on a model of the hunting behavior of a wolf pack. The method is distinguished by three features: a fuzzy Euclidean measure of proximity between the quality indicators of the evaluated options and the three best options selected at a given iteration (the alpha, beta, and delta solutions), used to determine the direction of the search for a rational set of equipment; modified solution-search rules (movement of individuals) that take into account the “depth of matches” and the increment of the effect, including for striking a reasonable balance between directed and random search; and a base of fuzzy production rules for choosing how the basis for the alpha solution is formed at subsequent iterations. The method is implemented in Python 3.12.0. The effectiveness of the proposed approach is confirmed by a computational experiment.
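
The abstract does not give the fuzzy formulas, so the sketch below only illustrates, in Python, the classical grey wolf (wolf-pack hunting) search step that such methods build on: each candidate moves toward the alpha, beta, and delta solutions of the current iteration. The objective function, parameters, and the crisp Euclidean distance are illustrative stand-ins for the fuzzy proximity measure, the “depth of matches” rules, and the production-rule base described in the article.

```python
# Minimal sketch of the classical grey wolf search step; not the authors' fuzzy modification.
import numpy as np

def grey_wolf_minimize(objective, bounds, n_wolves=20, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    low, high = bounds[:, 0], bounds[:, 1]
    dim = len(low)
    wolves = rng.uniform(low, high, size=(n_wolves, dim))

    for t in range(n_iter):
        fitness = np.apply_along_axis(objective, 1, wolves)
        order = np.argsort(fitness)
        alpha, beta, delta = wolves[order[:3]]        # three best solutions of the iteration

        a = 2.0 * (1.0 - t / n_iter)                  # shifts the balance from random to directed search
        for i in range(n_wolves):
            candidates = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])    # crisp distance to the leader; the article
                candidates.append(leader - A * D)     # replaces this with a fuzzy Euclidean measure
            wolves[i] = np.clip(np.mean(candidates, axis=0), low, high)

    fitness = np.apply_along_axis(objective, 1, wolves)
    best = wolves[np.argmin(fitness)]
    return best, objective(best)

# Toy usage: pick a 3-component "equipment configuration" minimizing an illustrative cost.
bounds = np.array([[0.0, 10.0]] * 3)
best, cost = grey_wolf_minimize(lambda x: np.sum((x - 4.2) ** 2), bounds)
print(best, cost)
```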

Performance management

Territorial inequality in access to healthcare remains a pressing issue for the healthcare system of the Russian Federation. Significant disparities in transport accessibility, staffing levels, and the spatial distribution of medical facilities complicate evidence-based decision-making, especially in regions with uneven population density and fragmented infrastructure. This creates the need for formalized and reproducible approaches to assessing healthcare accessibility that are adapted to regional specifics and suitable for digital implementation. The aim of this study is to develop a methodology for assessing the potential accessibility of medical facilities based on a modified gravity model, implemented as an algorithm that accounts for travel time, facility capacity, and overlapping service areas. Unlike traditional models such as 2SFCA and classical gravity models, the proposed approach allows parameters to be calibrated on empirical data and incorporates territorial competition for healthcare resources. The methodological foundation includes an exponential distance-decay function and dual normalization by total service supply. The novelty of the methodology lies in the integration of these components into a unified, computable index of potential spatial accessibility suitable for scalable digital implementation. The algorithm was developed in the R programming environment using the OSRM routing engine to calculate travel times over the road network. The model was tested on data from the municipalities of Sverdlovsk oblast. The results (R² = 0.252, mean absolute percentage error (MAPE) < 28%) confirmed the model’s interpretability and practical relevance. The proposed approach can be used for monitoring healthcare accessibility, identifying underserved areas, and informing spatial resource allocation. Moreover, the methodology can be adapted to other types of social infrastructure.
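
As a reading aid, here is a minimal sketch of a gravity-type accessibility index with exponential distance decay and competition for capacity, the general family to which the article's model belongs. The article's algorithm is implemented in R with OSRM-derived travel times and a calibrated decay parameter; in this Python sketch travel_min is assumed to be a precomputed travel-time matrix, beta is illustrative, and the final normalization by total supply is only one possible reading of the “dual normalization” mentioned above.

```python
import numpy as np

def accessibility_index(population, capacity, travel_min, beta=0.05):
    """population: (n_areas,), capacity: (n_facilities,), travel_min: (n_areas, n_facilities), minutes."""
    w = np.exp(-beta * travel_min)                      # exponential distance-decay weights

    # Step 1: each facility's capacity is shared among all competing areas it can serve.
    demand = (population[:, None] * w).sum(axis=0)      # weighted demand pressure on facility j
    ratio = capacity / np.where(demand > 0, demand, np.inf)

    # Step 2: each area accumulates the capacity shares it can reach.
    access = (ratio[None, :] * w).sum(axis=1)

    # Illustrative normalization: express the index relative to the region-wide
    # supply-to-demand ratio, so 1.0 means roughly average provision.
    return access * population.sum() / capacity.sum()

# Toy example: 3 settlements, 2 facilities.
pop = np.array([12_000, 4_500, 800.0])
cap = np.array([150.0, 40.0])                           # e.g. physicians or beds
t = np.array([[10, 35], [25, 15], [60, 20.0]])          # travel times by road, minutes
print(accessibility_index(pop, cap, t))
```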

Software engineering

Algorithmic efficiency

Underwater pipelines are critical infrastructure for transporting hydrocarbons and other resources and require regular inspection of their condition, given the economic and environmental consequences of possible accidents. One of the key technological challenges today is therefore the development of reliable methods for recognizing underwater pipelines for inspection purposes using video information received by an autonomous unmanned underwater vehicle. A method is proposed for recognizing and tracking an underwater pipeline in optical images obtained by an autonomous underwater vehicle. It is based on a multi-stage computational data processing scheme that includes vectorization of the initial images on a contour basis, extraction of the visible boundaries of the pipeline in the images, and calculation of its spatial centerline. The method relies on the authors' modification of the Hough transform with adaptive limitation of the analysis area and a new version of the authors' method for constructing contours using Otsu's method. The contours obtained with this method have minimal redundancy and sufficient accuracy to identify the visible pipeline boundaries with the modified Hough algorithm. The method is characterized by low computational cost compared with analogues. The centerline is computed efficiently by applying a local recognition algorithm previously developed by the authors. Computational experiments were conducted to obtain comparative estimates of reliability and computational performance relative to the contour algorithms of Canny, K-means, and Otsu, the boundary detection method (a modification of the Hough method), and several other analogues. The obtained estimates confirmed the effectiveness of the proposed solutions.
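
The authors' adaptive limitation of the analysis area and low-redundancy contour construction are not reproduced here; the OpenCV sketch below shows only the standard Otsu-plus-Hough baseline stages that the method modifies. The image path and the Hough parameters are placeholders.

```python
# Baseline sketch: Otsu binarization, edge extraction, probabilistic Hough lines,
# and a crude centerline estimate from the two longest boundary segments.
import cv2
import numpy as np

def pipeline_boundaries(gray):
    blur = cv2.GaussianBlur(gray, (5, 5), 0)
    # Otsu's method picks the binarization threshold automatically.
    _, binary = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    edges = cv2.Canny(binary, 50, 150)
    # Probabilistic Hough transform: candidate straight segments of the pipeline boundary.
    lines = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=80,
                            minLineLength=gray.shape[1] // 4, maxLineGap=20)
    if lines is None or len(lines) < 2:
        return None
    # Keep the two longest segments as the visible left/right boundaries.
    segs = sorted(lines[:, 0, :], key=lambda s: np.hypot(s[2] - s[0], s[3] - s[1]))[-2:]
    # Crude centerline estimate: midpoint of the two boundary segments' endpoints.
    center = (np.asarray(segs[0], float) + np.asarray(segs[1], float)) / 2.0
    return segs, center

gray = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)  # hypothetical AUV video frame
if gray is not None:
    print(pipeline_boundaries(gray))
```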

Information security

Models and methods

A method for forecasting nonequidistant (irregularly sampled) time series is presented. Data in the form of irregular time series are often encountered in fields such as healthcare, biomechanics, economics, and climatology. Forecasting irregular time series is in demand in these fields for early warning and proactive decision-making, but there is no universal way to take the unevenness of sampling into account in the forecast, which makes research in this area relevant. The purpose of the study was to develop a method for forecasting a nonequidistant series based on deep neural networks that achieves good forecast accuracy with a relatively lightweight network architecture. The novelty of the results lies in the developed forecasting method for nonequidistant time series, the architecture of the deep neural network, and the algorithm that implements the proposed method. The method uses a closed loop in which the forecast results at the current step are used at the following steps. An original feature of the proposed method is the use of a multilayer perceptron to forecast the duration of the next irregular sampling interval. This interval is computed taking into account the correlation time derived from the autocovariance function of the durations of the irregular sampling intervals. A distinctive feature of the proposed architecture is a separate neural network input channel for analyzing the values of the sampling intervals, which allows the next value of the series to be forecast taking into account the duration of the forecasted sampling interval. The method is developed for a one-dimensional series but can be extended to multidimensional series provided the components of the series are sampled synchronously. Computational experiments showed that, with low demands on computing resources, the accuracy of the forecast based on the proposed method is comparable to modern forecasting models within the correlation interval.
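
One possible reading of the described architecture is sketched below in PyTorch: one input branch receives past series values, a separate branch receives past sampling-interval durations, a multilayer perceptron head forecasts the duration of the next interval, and the value head is conditioned on that forecast; the loop at the end illustrates closed-loop use. Layer sizes, window length, and the exact branch structure are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class IrregularForecaster(nn.Module):
    def __init__(self, window=16, hidden=32):
        super().__init__()
        self.value_branch = nn.Sequential(nn.Linear(window, hidden), nn.ReLU())
        self.interval_branch = nn.Sequential(nn.Linear(window, hidden), nn.ReLU())
        self.next_interval = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.next_value = nn.Sequential(nn.Linear(2 * hidden + 1, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, values, intervals):
        hv = self.value_branch(values)                        # features of past values
        hi = self.interval_branch(intervals)                  # features of past interval durations
        dt = self.next_interval(hi)                           # forecasted duration of the next interval
        y = self.next_value(torch.cat([hv, hi, dt], dim=-1))  # value forecast conditioned on dt
        return y, dt

# Closed-loop (recursive) use: append each forecast and feed it back for the next step.
model = IrregularForecaster()
values = torch.randn(1, 16)
intervals = torch.rand(1, 16)
for _ in range(3):
    y, dt = model(values, intervals)
    values = torch.cat([values[:, 1:], y.detach()], dim=1)
    intervals = torch.cat([intervals[:, 1:], dt.detach()], dim=1)
```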

Laboratory

Research of processes and systems

This article shows how a modified U-Net neural network can be used to detect differences between visible-spectrum and radio-frequency-spectrum images. The network was modified by replacing its convolutional layers with convolution blocks built from neuromorphic microbiological cells whose conductivity is changed through controlled biocorrosion that partially destroys the cellular skeletal structure. The author developed a method of training the modified network based on stimulating the bacterial layer to corrode the conductive components. Functional analysis demonstrated the high efficiency of configuring the network elements and showed that the elements can form interconnected active structures. The author found that, owing to this property of the network cells, the units can generate signals on their own, so information passing through the network can be processed both in a passive mode and through interaction with local electrical activity. A study of the generated activity revealed an integral effect of summing the signals from neuromorphic cells, resulting in a complex response that includes the spectral components of all neighbouring cells. The modified network has an advantage over similar neural network structures: training can be managed by changing the total activity of the neurons rather than by evaluating the network’s response to test data. In a trained network whose conductive structures have been formed, spontaneous activity occurs much less frequently than in the initial configuration, where the cells had not been subjected to biocorrosion and therefore had maximum conductivity. The experiments demonstrated that the modified U-Net can be used to detect differences between visible and radio-frequency spectrum images. To find differences hidden by the geometric features of the terrain, the author used a comprehensive strategy of comparing images in the visible and radio spectra. The practical novelty of the research lies in the newly developed modification of neuromorphic cells, which achieve high processing speed thanks to the massively parallel organisation of detecting changes in images.
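
The neuromorphic microbiological cells described above are physical elements and cannot be reproduced in software. Purely as a structural illustration, the PyTorch sketch below shows how the convolution blocks of a standard U-Net encoder could be swapped for a custom block class; NeuromorphicBlock is a hypothetical stand-in and does not model biocorrosion.

```python
import torch
import torch.nn as nn

class NeuromorphicBlock(nn.Module):
    """Placeholder for a convolution block whose conductivities would be set by
    controlled biocorrosion rather than by backpropagation (illustrative only)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.act = nn.ReLU()

    def forward(self, x):
        return self.act(self.conv(x))

def replace_double_convs(encoder_channels=(3, 64, 128, 256)):
    """Build an encoder where every standard double-conv block is replaced by the custom block."""
    blocks = []
    for in_ch, out_ch in zip(encoder_channels[:-1], encoder_channels[1:]):
        blocks.append(nn.Sequential(NeuromorphicBlock(in_ch, out_ch), nn.MaxPool2d(2)))
    return nn.Sequential(*blocks)

encoder = replace_double_convs()
print(encoder(torch.randn(1, 3, 256, 256)).shape)  # torch.Size([1, 256, 32, 32])
```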

Information security

Data protection

The research addresses the problem of developing a method for protecting cloud platforms of critical information infrastructure based on cyber immunity. An analysis of existing approaches to protecting various digital systems has been conducted. It has been established that current approaches do not fully account for the specific characteristics of critical information infrastructure cloud platforms, namely: complex multi-layered architectures; the potential presence of undetected vulnerabilities leading to previously unknown threats of violating computational semantics; elevated resilience requirements; and the need to restore normal operation. The objective of the research is to develop a new method for protecting cloud platforms of critical information infrastructure based on cyber immunity. A hypothesis has been formulated stating that the required level of resilience for cloud platforms under cyberattacks can be ensured by adjusting the countermeasure parameters within a range of necessary and sufficient values defined with consideration of the aforementioned requirements. The idea has been substantiated and the method for protecting cloud platforms of critical information infrastructure with cyber immunity has been developed. The method ensures the resilience of cloud platforms under computer attacks by varying the cyber immunity coverage coefficient, taking into account the probability of achieving operational goals and the full execution time of program cycles. The scientific novelty of the proposed method lies in the application of a modified bisection method to find the required value of the cyber immunity coverage coefficient. Furthermore, a criterion for verifying the existence of a necessary and sufficient value of this coefficient has been substantiated and implemented for the first time. Theoretical and experimental studies of the developed method have been conducted, confirming the proposed hypothesis.
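
The bisection idea can be illustrated as follows: search the interval [0, 1] for the smallest coverage coefficient at which an estimated resilience meets the required level, with an up-front check that such a value exists at all. The resilience model from the article (probability of achieving operational goals, program-cycle execution times) is not reproduced; resilience_fn below is a hypothetical placeholder assumed to be non-decreasing in the coefficient.

```python
def find_coverage_coefficient(resilience_fn, required, tol=1e-3):
    lo, hi = 0.0, 1.0
    # Existence check: even full coverage must satisfy the requirement, otherwise no
    # necessary-and-sufficient value of the coefficient exists.
    if resilience_fn(hi) < required:
        return None
    if resilience_fn(lo) >= required:
        return lo
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if resilience_fn(mid) >= required:
            hi = mid          # requirement met: try a smaller (cheaper) coverage
        else:
            lo = mid          # requirement not met: increase coverage
    return hi

# Toy resilience model for demonstration only.
print(find_coverage_coefficient(lambda k: 0.5 + 0.45 * k, required=0.9))  # ≈ 0.889
```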
The problem of ensuring the information security of critical information infrastructure is becoming increasingly acute and is acquiring strategic importance, driven by the explosive growth of complex targeted attacks on infrastructure facilities. Solving this problem requires new approaches to assessing information security threats that combine the relevance of data provided by threat intelligence technology with a deep understanding of the specifics of the protected systems. An analysis of the state of the problem shows that existing approaches to assessing information security threats to critical information infrastructure facilities have shortcomings such as a gap between threat intelligence data and the context of a specific system, the subjectivity of qualitative assessments, and the complexity of ranking threats against many conflicting criteria. To overcome these shortcomings, the article proposes a method for multi-criteria assessment of information security threats to critical information infrastructure facilities that integrates threat intelligence and digital twin technologies, where the digital twin provides the necessary understanding of the specifics of the object. A system of indicators has been developed, structured according to five projections of threat assessment: severity of consequences, intruder capabilities, vulnerability of the facility, complexity of the attack, and effectiveness of protection. A conceptual model of an information security threat assessment system based on digital twin and threat intelligence technologies has been developed. A multi-criteria threat assessment methodology is presented in which the integral threat index and Pareto-optimal threat ranks are calculated from a set of criteria. Experimental testing on synthetic data confirmed the consistency of the results of these calculations. Practical application of the proposed method allows threats to be analyzed both as a whole and within individual projections of the indicator system.
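
For illustration, the sketch below computes a weighted integral threat index over the five projections and Pareto ranks via non-dominated sorting, the two aggregation steps named above. The weights, scores, and the convention that larger indicator values mean a more dangerous threat are assumptions; the article's indicator system and normalization are not reproduced.

```python
import numpy as np

PROJECTIONS = ["consequences", "intruder", "vulnerability", "attack_complexity", "protection_gap"]

def integral_index(scores, weights):
    """scores: (n_threats, 5) in [0, 1]; weights: (5,) summing to 1."""
    return scores @ weights

def pareto_ranks(scores):
    """Rank 1 = non-dominated threats; dominated threats receive higher ranks."""
    n = len(scores)
    ranks = np.zeros(n, dtype=int)
    remaining = set(range(n))
    rank = 1
    while remaining:
        front = {i for i in remaining
                 if not any(np.all(scores[j] >= scores[i]) and np.any(scores[j] > scores[i])
                            for j in remaining if j != i)}
        for i in front:
            ranks[i] = rank
        remaining -= front
        rank += 1
    return ranks

scores = np.array([[0.9, 0.7, 0.8, 0.6, 0.5],   # threat A
                   [0.4, 0.9, 0.3, 0.7, 0.6],   # threat B
                   [0.3, 0.3, 0.2, 0.4, 0.4]])  # threat C (dominated by A and B)
weights = np.array([0.3, 0.2, 0.2, 0.15, 0.15])
print(integral_index(scores, weights), pareto_ranks(scores))
```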