
Journal archive

№6(120) November-December 2025

Contents:

IT management

Performance management

Unmanned aerial vehicles are now widely used in maritime search and rescue missions to survey waters and locate victims. Mission success largely depends on the effectiveness of the search strategy. The article proposes a deterministic method for partitioning search areas and planning flight trajectories for a group of unmanned aerial vehicles during maritime search operations. A vehicle detects a target when the target falls within the coverage area of the remote sensing equipment installed on board; the coverage area is modeled as a circle of a given radius. The problem of planning trajectories that completely cover the search area is solved, with a trajectory of minimum length considered effective. Two search strategies are considered: one that ignores constraints on energy resources and one that accounts for them. The search area is divided into sectors, each assigned to a single vehicle that searches for a target in its sector according to a given algorithm. A geometric model of the search trajectory is presented, along with four algorithms implementing the two strategies: 1) deploying the group from the central point of the search area without energy resource constraints; 2) deploying the group from the approach point to the search area without energy resource constraints; 3) deploying the group from the central point of the search area with energy resource constraints; 4) deploying the group from the approach point to the search area with energy resource constraints. The algorithms are tested.
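The sector-based coverage idea can be illustrated with a minimal Python sketch. All function names and the arc-spacing rule below are our own simplifications, not the article's algorithms: the circular search area is split into equal angular sectors, one per vehicle, and waypoints are generated along concentric arcs spaced so that the circular sensor footprint sweeps the sector.

```python
import math

def sector_bounds(n_uavs):
    """Split a circular search area into n equal angular sectors (one per UAV)."""
    step = 2 * math.pi / n_uavs
    return [(i * step, (i + 1) * step) for i in range(n_uavs)]

def sector_waypoints(theta0, theta1, area_radius, cover_radius):
    """Waypoints along concentric arcs, spaced 2*cover_radius apart radially,
    so the circular sensor footprint sweeps the sector [theta0, theta1]."""
    waypoints = []
    r = cover_radius                        # first arc
    while r <= area_radius:
        # angular step keeping adjacent footprints on the same arc overlapping
        dtheta = 2 * math.asin(min(1.0, cover_radius / r))
        theta = theta0
        while theta <= theta1:
            waypoints.append((r * math.cos(theta), r * math.sin(theta)))
            theta += dtheta
        r += 2 * cover_radius               # next arc outward
    return waypoints
```

Chaining the waypoints of one sector yields a single-vehicle trajectory; a length budget could then be imposed per vehicle to mimic the energy-constrained strategies.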

Software engineering

Anomaly detection is a pressing research problem in many subject areas; solving it enables timely management decision-making. This study proposes a method for identifying anomalies in economic indicators characterizing the internal and external environment of a manufacturing organization; the method can be applied in the algorithmic support of business decision support systems. It is based on an artificial neural network with an autoencoder architecture trained to replicate its input at the output. After training on normal data, the autoencoder's reconstruction error remains small; when it is fed anomalous data, the error grows, which serves as an anomaly indicator. The proposed method uses a convolutional autoencoder, so the input data is first converted into images (signatures) by an original formation method: the historical behavior of each economic indicator is represented as a heat matrix, each heat matrix forms one channel, and the combined channels form a signature that is fed to the autoencoder input for analysis. The autoencoder uses depthwise separable convolutions, allowing convolutional filters to be tuned independently for individual signature channels. The novelty of the results lies in the developed method for detecting anomalies in arrays of economic indicators, which localizes both collective and individual anomalies (outliers), and in the software developed to test the method. Computational experiments demonstrated that the method achieves anomaly detection accuracy comparable to some modern models.
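The signature construction can be sketched roughly as follows. This is a hypothetical, simplified reading of the described heat-matrix encoding; the bin and window sizes, and the choice of occupancy counts as cell values, are illustrative:

```python
def heat_matrix(series, n_bins=8, window=4):
    """Encode one indicator's history as a heat matrix: rows = value bins,
    columns = consecutive time windows, cells = occupancy counts."""
    lo, hi = min(series), max(series)
    span = (hi - lo) or 1.0
    n_cols = len(series) // window
    m = [[0] * n_cols for _ in range(n_bins)]
    for col in range(n_cols):
        for v in series[col * window:(col + 1) * window]:
            b = min(n_bins - 1, int((v - lo) / span * n_bins))
            m[b][col] += 1
    return m

def signature(indicators, **kw):
    """Stack one heat matrix per indicator into a multi-channel signature."""
    return [heat_matrix(s, **kw) for s in indicators]
```

Each channel of such a signature would then be processed by its own depthwise filter in the depthwise separable convolution layers.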

Algorithmic efficiency

The article presents packing algorithms for the optimized placement of a given set of flat objects subject to additional geometric and technological constraints that arise when arranging real objects in production. A procedure for applying individual indents between object boundaries is described. To handle placement areas of arbitrary geometry, restricted areas are introduced in the form of fixed objects of various geometries. An algorithm for uniformly distributing a given set of objects throughout the placement space is proposed. An algorithm for placing objects with several placement start points is described, which places objects as close as possible to one or two pre-marked points in the placement space. A speed-optimized algorithm for placing flat objects of arbitrary geometry, represented as orthogonal polyhedra, is proposed; it performs fast layout of complex-geometry objects while respecting the specified indents and placement start points. An algorithm for arranging flat objects under individual constraints on the minimum distance between special points of objects is developed. A heuristic algorithm for selecting the best orthogonal orientation of rectangular objects is proposed, minimizing the density of the resulting layout. Examples of layouts produced by the developed placement algorithms are given, including solutions to particular problems of arranging rectangular objects with various minimum-distance restrictions between special points. The developed packing algorithms, which account for diverse geometric and technological constraints, make it possible to solve practical placement problems in real production conditions.
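The general flavor of indent-aware placement can be illustrated with a deliberately simple bottom-left heuristic on an integer grid. This is a generic sketch, not one of the article's algorithms; the uniform indent stands in for the individual indents between object boundaries:

```python
def pack_bottom_left(rects, bin_w, bin_h, indent=1):
    """Greedy bottom-left placement of (w, h) rectangles, keeping a uniform
    indent between object boundaries. Returns one (x, y) or None per rect,
    in decreasing-area order."""
    placed = []  # (x, y, w, h) of already placed objects

    def overlaps(x, y, w, h):
        # two rectangles conflict if they are closer than the indent
        return any(x < px + pw + indent and px < x + w + indent and
                   y < py + ph + indent and py < y + h + indent
                   for px, py, pw, ph in placed)

    out = []
    for w, h in sorted(rects, key=lambda r: -r[0] * r[1]):  # big first
        pos = None
        for y in range(bin_h - h + 1):          # lowest position first,
            for x in range(bin_w - w + 1):      # then leftmost
                if not overlaps(x, y, w, h):
                    pos = (x, y)
                    break
            if pos:
                break
        if pos:
            placed.append((*pos, w, h))
        out.append(pos)
    return out
```

Placement start points and restricted areas could be modeled, respectively, by reordering the candidate positions by distance to a marked point and by pre-seeding `placed` with fixed obstacle rectangles.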

In digital signal processing, restoring the shape of signals under a high level of noise is one of the main problems. Its relevance stems from the widespread use of digital technologies, and it becomes particularly acute in areas where interference inevitably degrades the quality of signal registration, recognition, and interpretation. A common type of naturally occurring interference is thermal noise, which is inherent to the measuring and recording equipment. This kind of noise cannot be eliminated completely, but modern digital processing methods can significantly reduce its negative impact. Researchers' attention is increasingly focused on heuristic algorithms that offer alternative ways of suppressing the noise component while preserving the shape of the useful signal. Such algorithms find approximate solutions where traditional analytical and technical methods lose their effectiveness; they adapt to the stochastic nature of thermal noise and offer a reasonable compromise between labor intensity and the accuracy of useful-signal reproduction. This article continues previously published research into heuristic algorithms for recovering the shape of heavily distorted discrete signals. The goal is to propose an alternative approach based on the sequential application of numerical integration and differentiation combined with an integral-curve approximation procedure. As a result, the influence of the noise component is eliminated, while the restored signal retains the informative components of the useful signal. The efficiency of the proposed algorithm was evaluated on a test signal with artificial noise generated by a pseudo-random number generator.
The results were compared with two previously developed heuristic algorithms: one based on piecewise linear least-squares approximation and another based on averaging instantaneous signal values over partition intervals. The analysis showed that the developed algorithm is comparable in accuracy to these algorithms but is more efficient when processing discrete nonperiodic signals contaminated with natural noise.
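The integrate-approximate-differentiate idea can be sketched in a few lines (assuming NumPy; the polynomial degree and the use of `polyfit` as the integral-curve approximation are illustrative choices, not the authors' exact procedure):

```python
import numpy as np

def integrate_fit_differentiate(noisy, dt=1.0, degree=6):
    """Heuristic denoising sketch: numerically integrate the noisy samples
    (integration damps zero-mean noise), approximate the integral curve by a
    polynomial, then differentiate the approximation analytically."""
    t = np.arange(len(noisy)) * dt
    integral = np.cumsum(noisy) * dt          # numerical integration
    coeffs = np.polyfit(t, integral, degree)  # integral-curve approximation
    return np.polyval(np.polyder(coeffs), t)  # analytic differentiation
```

On a noisy sine, for instance, the reconstruction error of this sketch is typically well below the raw noise level, since the fitted integral curve smooths out the accumulated noise before differentiation.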

Laboratory

Research of processes and systems

A wide variety of applied fields, including medicine, security, economics, and industry, are concerned with modeling the occurrence of various events, such as a patient's recovery, a company's financial bankruptcy, industrial equipment failure, etc. Such modeling can be performed within the framework of survival analysis, a statistical method for analyzing time-to-event data whose distinctive feature, setting it apart from many other statistical and machine learning methods, is the presence of censored data. Censoring occurs when an event is not observed and it is only known that it did not happen before a certain point in time; censored data significantly complicates the modeling and prediction of critical events. Machine learning is an effective tool for survival analysis in the presence of censored data. In particular, modern transformer-based models demonstrate promising results in survival analysis due to their ability to account for complex dependencies. However, the standard attention mechanism in these models often ignores the fundamental structure of time-to-event data, namely, the distinction between censored and uncensored observations. To overcome this shortcoming, this paper proposes a new model and a new approach to implementing the attention mechanism that redefines attention weights by incorporating prior characteristics of survival analysis based on the Beran estimator or the Cox model. Instead of relying solely on distances between feature vector representations, as current models do, the proposed model computes attention weights as a weighted linear combination of components derived from key prior characteristics of survival analysis, such as distances between survival function estimates or time-to-event expectations for different training objects. The proposed approach significantly expands the class of transformer models for survival analysis and achieves higher prediction accuracy.
An algorithm implementing the proposed model serves as the basis for the constructed transformers. Experiments on real datasets confirm that the generalized model provides better predictions than a number of known models.
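For reference, the Beran estimator mentioned as a source of prior survival characteristics can be sketched as a kernel-weighted Kaplan-Meier curve. This is a textbook construction, not the paper's attention mechanism; the Gaussian kernel and bandwidth are illustrative:

```python
import numpy as np

def beran_survival(x, X, times, events, bandwidth=1.0):
    """Beran estimator: conditional survival S(t | x) as a kernel-weighted
    Kaplan-Meier over training objects (times, events=1 if uncensored).
    Returns sorted event/censoring times and the survival curve at x."""
    order = np.argsort(times)
    t, d = times[order], events[order]
    # Gaussian kernel weights of training objects relative to the query x
    w = np.exp(-np.sum((X[order] - x) ** 2, axis=1) / (2 * bandwidth ** 2))
    w = w / w.sum()
    s, surv = 1.0, []
    cum = 0.0  # total weight of objects with earlier observed times
    for i in range(len(t)):
        if d[i] == 1 and cum < 1.0:          # only events shrink S(t|x)
            s *= 1.0 - w[i] / (1.0 - cum)
        cum += w[i]
        surv.append(s)
    return t, np.array(surv)
```

With uniform weights (all training objects identical to the query) and no censoring, the curve reduces to the ordinary Kaplan-Meier estimate; distances between such curves for different training objects are the kind of prior characteristic the abstract describes feeding into the attention weights.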

In the context of rapid digitalization and the growing importance of online commerce, forecasting sales volumes on marketplaces has become critically important for supporting managerial and strategic decision-making. Despite significant advances in neural network models, their practical application within digital platforms faces a number of limitations, including high demand volatility, data sparsity, the presence of numerous heterogeneous factors with varying dynamics, scalability challenges, and high requirements for computational resources and training data volumes. Furthermore, many neural network models operate as "black boxes", which hinders their use in tasks requiring transparency and justification of forecasts and underscores the relevance of specialized models that combine high predictive accuracy with interpretable results. The aim of this study is to develop and empirically validate a hybrid neural network architecture designed to overcome these limitations while taking into account the specific operational characteristics of marketplaces. The proposed model integrates a recurrent encoder for extracting temporal context, modified decoder blocks that decompose the time series into a learnable basis of latent components, and a controlled fusion mechanism enabling adaptive incorporation of contextual information at each decoding level. The applied approach, which forms forecasts as an additive sum of specialized components, each trained to extract certain structural elements, provides a context-aware and structured representation of the time series, captures long-term trends and periodic fluctuations more accurately, and enhances model robustness to noise and data sparsity.
Experimental evaluation using data from the Wildberries marketplace demonstrated the model’s superiority in forecasting accuracy over classical and baseline models, confirming its applicability in environments typical of digital trading platforms.
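The additive "sum of specialized components" idea can be illustrated with a minimal trend-plus-seasonality decomposition, a toy stand-in for the learnable latent basis (assuming NumPy; the linear trend and per-phase seasonal means are illustrative components, not the model's decoder blocks):

```python
import numpy as np

def decompose_forecast(y, period, horizon):
    """Forecast a series as an additive sum of two specialized components:
    a least-squares linear trend and a per-phase seasonal pattern estimated
    from the detrended residuals."""
    t = np.arange(len(y))
    slope, intercept = np.polyfit(t, y, 1)        # trend component
    resid = y - (slope * t + intercept)
    seasonal = np.array([resid[p::period].mean()  # seasonal component
                         for p in range(period)])
    tf = np.arange(len(y), len(y) + horizon)
    return slope * tf + intercept + seasonal[tf % period]
```

Because each component is estimated separately and the forecast is their sum, every term of the prediction is directly inspectable, which is the interpretability property the abstract emphasizes.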

Information security

Data protection

The paper studies the detection of attacks that modify digital models of products (parts) intended for 3D printing in modern intelligent additive manufacturing systems. In general, such systems are networks of multiple 3D printers (3D farms) operating in parallel, capable of printing series of products on user request, for instance elements of the physical structures of robots and vehicles, blades of unmanned aerial vehicles, and other parts made of plastic, metal, and other materials. Existing 3D installations of this kind are vulnerable to attackers who attempt hidden unauthorized modification of the digital models. After such an attack, end products may contain a design defect whose visual characteristics are almost indistinguishable from the original sample. For instance, by introducing a defect into an element of a UAV body, an attacker may reduce its controllability and even cause a crash. The paper experimentally substantiates the hypothesis that modification attacks on digital models of products can be detected by processing and analyzing the program code of such models. The features of defects in 3D product models written in the G-code language and selected from open 3D model databases are analyzed. A data set consisting of original and modified product models is compiled. An approach to modification detection is proposed that uses embeddings to transform the data into numerical vectors and trains classifiers on them using supervised learning methods. Experiments on test data samples demonstrated the feasibility of the proposed approach to modification detection and the prospects for its further development and practical application.
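A minimal sketch of the embedding-plus-classifier pipeline might look as follows. The token vocabulary, the G/M-command regex, and the nearest-centroid classifier are illustrative stand-ins for the embeddings and supervised classifiers used in the paper:

```python
import re

def build_vocab(programs):
    """Collect G/M command tokens (e.g. G1, M104) across a training corpus."""
    toks = sorted({t for p in programs
                   for t in re.findall(r"[GM]\d+", p.upper())})
    return {t: i for i, t in enumerate(toks)}

def embed_gcode(program, vocab):
    """Embed a G-code program as a normalized bag-of-commands vector."""
    vec = [0.0] * len(vocab)
    for tok in re.findall(r"[GM]\d+", program.upper()):
        if tok in vocab:
            vec[vocab[tok]] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]

class CentroidDetector:
    """Toy supervised classifier: assign the label of the nearest class centroid."""
    def fit(self, embeddings, labels):
        self.centroids = {}
        for lab in set(labels):
            pts = [e for e, l in zip(embeddings, labels) if l == lab]
            self.centroids[lab] = [sum(c) / len(pts) for c in zip(*pts)]
        return self

    def predict(self, emb):
        return min(self.centroids, key=lambda lab: sum(
            (a - b) ** 2 for a, b in zip(emb, self.centroids[lab])))
```

A real pipeline would use richer embeddings (e.g. learned token or sequence embeddings capturing toolpath geometry) and stronger classifiers, but the shape of the approach is the same: G-code text in, numerical vector out, supervised decision on top.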