The article discusses the tasks of modernizing the information security system of a distributed computing system. In an increasingly aggressive environment, objects whose security was previously considered sufficient now require additional protection measures. Examples include significant objects of critical information infrastructure, which are currently subject to government regulation. To solve the problem of synthesizing the component composition of information security systems, the article proposes a complex algorithm whose steps employ various mathematical tools. This approach makes it possible to select an acceptable set of software and hardware tools capable of blocking attacks at a given level of protection. The problem is solved on the basis of the functionality of software and hardware components, their parameters and functional relationships. The novelty of the research lies in the presentation of a discrete model of an information security system in the form of a simulation model (a special case of stochastic programming), which makes it possible to take into account the functional features of hardware and software when modernizing the information security system. A simulation algorithm is proposed that takes into account the characteristics of the information security system, which can take both deterministic and probabilistic values. The necessary definitions are introduced, and their provisions are illustrated with numerical examples. The calculations make it possible to identify the most scarce resources, establish how successful the specialization and structure of the information security system are, and evaluate the results of changes in the information security system and the redistribution of its functions and material resources.
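To illustrate the kind of synthesis problem described above, here is a minimal Python sketch, not the article's algorithm: a hypothetical catalogue of protection components with costs and per-attack blocking probabilities, a Monte Carlo estimate of the achieved protection level, and an exhaustive search for the cheapest acceptable set. All component names, costs and probabilities are illustrative assumptions.

```python
import random
from itertools import combinations

# Hypothetical catalogue: each component blocks some attack classes with a
# (possibly probabilistic) success rate and has a cost. Values are assumed.
COMPONENTS = {
    "firewall":  {"cost": 5, "blocks": {"network": 0.95}},
    "antivirus": {"cost": 3, "blocks": {"malware": 0.90}},
    "ids":       {"cost": 4, "blocks": {"network": 0.70, "malware": 0.40}},
    "dlp":       {"cost": 6, "blocks": {"leak": 0.85}},
}
ATTACKS = ["network", "malware", "leak"]

def blocking_level(selection, trials=10_000):
    """Monte Carlo estimate of the probability that every attack class
    is blocked by at least one selected component."""
    blocked_all = 0
    for _ in range(trials):
        ok = all(
            any(random.random() < COMPONENTS[c]["blocks"].get(attack, 0.0)
                for c in selection)
            for attack in ATTACKS)
        blocked_all += ok
    return blocked_all / trials

def cheapest_acceptable(budget, required_level):
    """Exhaustive search over subsets (fine for a small catalogue): the
    cheapest set within budget that meets the given protection level."""
    best = None
    names = list(COMPONENTS)
    for r in range(1, len(names) + 1):
        for subset in combinations(names, r):
            cost = sum(COMPONENTS[c]["cost"] for c in subset)
            if cost > budget:
                continue
            if blocking_level(subset) >= required_level:
                if best is None or cost < best[0]:
                    best = (cost, subset)
    return best

if __name__ == "__main__":
    print(cheapest_acceptable(budget=15, required_level=0.8))
```

For realistic catalogues the exhaustive loop would be replaced by the kind of staged algorithm the article proposes; the sketch only shows how deterministic costs and probabilistic blocking characteristics can coexist in one model.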
Software production has become one of the largest industries in the world economy, and in recent years it has ranked first among all major industries in the growth rate of key indicators. With the availability of software solutions from foreign manufacturers significantly limited, the supply from domestic software manufacturers is increasing, and with it the need for models and methods that make it possible to control the software development process and guarantee the cost, timing and quality of the result. The uniqueness of the industry does not allow one to count on the success of traditional project management models in software projects, especially with regard to quantitative assessments of project parameters. The main differences from other types of project management are that the result of a software development project is intangible, the technologies used in the project change rapidly, and the experience of managing one software development project is often not applicable to others. The fundamental difference between software development projects and other complex projects lies in the key stage, software construction, which includes coding and debugging as well as verification, unit and integration testing. Errors made at the construction stage have the most significant impact on the project result, since they increase the initially planned amount of work. In the known models of the software development process, the amount of work is considered fixed in advance, and the construction stage is not singled out as a separate contour that determines the stochastic nature of the amount of work. The goal of this paper is to build a simulation model of the software construction process that takes into account the dependencies according to which the main parameters of the simulated process change over time. The model provides an opportunity to quantify and optimize project parameters according to one or more selected criteria. The model is built within the framework of the system-dynamics approach; the AnyLogic system is used as the simulation environment. The results of simulation experiments are presented to demonstrate that the proposed model can be used to study the software construction process or as a mechanism to support managerial decision-making.
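The rework loop described above can be sketched as a simple stock-and-flow model. The following Python fragment is a hedged illustration of the system-dynamics idea (the article's actual model is built in AnyLogic); the rates, defect fraction and detection delay are all assumed values.

```python
# Minimal system-dynamics sketch of the construction stage: remaining work is
# a stock, coding depletes it, and defects found during testing flow back as
# rework, so the effective amount of work exceeds the planned amount.

def simulate(initial_work=1000.0,   # planned size, e.g. in task units (assumed)
             coding_rate=25.0,      # units completed per day (assumed)
             defect_fraction=0.15,  # share of completed work returning as rework
             detection_delay=5.0,   # mean days before a defect is discovered
             dt=0.1, horizon=120.0):
    work_left, latent_defects, t = initial_work, 0.0, 0.0
    while work_left > 1e-6 and t < horizon:
        done = min(coding_rate * dt, work_left)           # coding outflow
        work_left -= done
        latent_defects += defect_fraction * done          # errors injected
        found = (latent_defects / detection_delay) * dt   # first-order discovery
        latent_defects -= found
        work_left += found                                # rework enlarges the stock
        t += dt
    return t

if __name__ == "__main__":
    print(f"construction finished in ~{simulate():.1f} days")
```

Without the rework loop the stage would take 1000 / 25 = 40 days; the feedback from injected defects stretches this, which is exactly the effect the abstract attributes to construction-stage errors.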
This paper is devoted to the development of mathematical models of stock price volatility in financial markets, with a focus on the GARCH family of models. The paper proposes to consider these models from a new perspective: as recurrent rather than autoregressive models. The main idea is that GARCH econometric models can be interpreted as recurrent neural networks, especially after introducing an activation function into the equation of variance dynamics. The relevance of the study stems from the constant need to improve the accuracy of volatility forecasting in modern financial markets, especially in the context of the Russian financial system, where accurate forecasts play a key role in financial decision making. The aim of the study is to evaluate the possibility of representing GARCH models in the form of recurrent neural networks and to assess their applicability for volatility forecasting in Russian financial markets. The main objectives are to develop and test recurrent neural networks based on GARCH, combining the advantages of econometric models and machine learning models. The article proposes a modification of the standard GARCH model called GARCH-RNN, which is a recurrent neural network with a multidimensional hidden state and the ReLU activation function. The methods used include econometric analysis of stock price volatility and comparison of forecast accuracy on Moscow Stock Exchange data between the GARCH and GARCH-RNN models. The results of experiments on these data showed that the GARCH-RNN model provides volatility forecasting accuracy comparable to that of traditional GARCH models. The results of the study confirmed the potential of the new approach for volatility forecasting in Russian financial markets, opening prospects for improving forecasts and making informed decisions in the market.
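To make the "GARCH as RNN" reading concrete, here is a minimal NumPy sketch: the classic GARCH(1,1) variance recursion next to a GARCH-RNN-style cell with a d-dimensional hidden state and ReLU activation. The cell is only an assumption about the general form such a model could take, and the weights are random illustrative values, not parameters fitted to Moscow Exchange data.

```python
import numpy as np

def garch11_step(sigma2_prev, ret_prev, omega=1e-6, alpha=0.08, beta=0.9):
    # classic recursion: sigma^2_t = omega + alpha*r^2_{t-1} + beta*sigma^2_{t-1}
    return omega + alpha * ret_prev**2 + beta * sigma2_prev

def garch_rnn_step(h_prev, ret_prev, W, U, b, w_out):
    # hypothetical RNN reading: h_t = ReLU(W h_{t-1} + U r^2_{t-1} + b),
    # with the variance read out linearly as sigma^2_t = w_out . h_t
    h = np.maximum(0.0, W @ h_prev + U * ret_prev**2 + b)
    return h, float(w_out @ h)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    returns = 0.01 * rng.standard_normal(250)   # synthetic daily returns

    d = 4                                       # hidden-state dimension (assumed)
    W = 0.2 * rng.standard_normal((d, d))
    U = 0.1 * np.abs(rng.standard_normal(d))
    b = 1e-6 * np.ones(d)
    w_out = np.abs(rng.standard_normal(d))

    sigma2, h = returns.var(), np.zeros(d)
    for r in returns:
        sigma2 = garch11_step(sigma2, r)
        h, sigma2_rnn = garch_rnn_step(h, r, W, U, b, w_out)
    print(f"GARCH(1,1): {sigma2:.2e}   GARCH-RNN cell: {sigma2_rnn:.2e}")
```

With d = 1, identity-like weights and no activation, the second cell collapses exactly to the first recursion, which is what makes the recurrent interpretation natural.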
Rubric: Models and methods. Authors: Rozhkov V., Lavrov I., Malashchenkov I., Shulakova D.
The article uses developed dynamic mathematical models to explore the processes that arise during the uncontrolled connection of a reactive load to high-voltage three-phase networks with a solidly grounded neutral. When transformer or reactor equipment is switched onto the network at an unfavorable moment, shock inrush currents can occur that exceed the rated current by tens of times. These currents contain aperiodic components that magnetize the steel cores of the devices. The settings of the relay protection against current surges then have to be adjusted, which reduces its sensitivity and performance when triggered in real short-circuit modes. An effective technical solution for reducing shock currents is a controlled phase-by-phase drive of the main contacts of the switches. Circuit dynamics were simulated in MATLAB and MultiSim to assess favorable switching moments. An analytical expression is derived for neutralizing the aperiodic component of the flux linkage of the magnetic cores; under this condition, non-sinusoidal surges of magnetizing currents do not exceed the specified values controlled by the protection. The practical difficulty of shock-free connection of power transformers in idle mode that contain secondary windings in star and delta circuits is noted: in the secondary windings, with the initial setting of the phase-by-phase switching of the main contacts, the symmetry of the phase flux linkages is broken. The simulation results confirmed a possible solution to the soft-switching problem in this case: changing the design of the transformer by introducing high-voltage switches into the delta phases of the corresponding secondary winding, which must be open during the start-up of the transformer and then closed at a predictable moment. A block diagram of the operation algorithm of the information part for soft phase-by-phase switching of the main contacts of a high-voltage circuit breaker is presented. The developed package of dynamic mathematical models makes it possible, based on the instantaneous values of the network phase voltages, to form a shock-free phase-by-phase connection of a reactive load with no aperiodic current components.
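The switching-instant condition can be illustrated numerically. The following Python sketch, an illustration under textbook assumptions rather than the article's derivation, integrates the phase voltage v(t) = Vm·sin(ωt + α) applied to an unloaded core, giving psi(t) = psi_r + (Vm/ω)(cos α − cos(ωt + α)); the aperiodic offset psi_r + (Vm/ω)·cos α vanishes when the contact closes at the voltage maximum (α = π/2) with zero residual flux psi_r.

```python
import numpy as np

# Flux linkage of an unloaded core energized at phase angle alpha:
#   psi(t) = psi_r + (Vm/w) * (cos(alpha) - cos(w*t + alpha))
# Vm and f are arbitrary illustrative values.
Vm, f = 1.0, 50.0
w = 2 * np.pi * f
t = np.linspace(0.0, 0.1, 2000)          # five 50 Hz periods

def flux(alpha, psi_r=0.0):
    return psi_r + (Vm / w) * (np.cos(alpha) - np.cos(w * t + alpha))

for alpha in (0.0, np.pi / 2):
    psi = flux(alpha)
    print(f"alpha = {alpha:.2f} rad: peak |psi| = {np.abs(psi).max():.4f}"
          f"  (steady-state peak {(Vm / w):.4f})")
# Closing at alpha = 0 doubles the peak flux, driving the core into
# saturation and producing the shock inrush current; closing at
# alpha = pi/2 keeps the flux symmetric about zero, with no aperiodic part.
```

The controlled phase-by-phase drive described in the abstract is, in effect, a mechanism for hitting this favorable angle on each phase individually.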
Rubric: Algorithmic efficiency. Authors: Bulygina O. V., Prokimnov N., Vereikina E., Yartsev D.
Effective functioning of complex socio-economic systems under uncertainty is impossible without solving many problems of supporting management decision-making. These include improving the quality of manufactured products, reducing production costs, ensuring energy and resource conservation, reducing transportation costs, increasing the reliability of the supply chain, forming a balanced portfolio of projects, and others. Their mathematical formulation typically requires searching for a global extremum of the objective function; in the multi-criteria case, it involves convolutions of criteria that must be satisfied subject to various constraints. Finding the optimal solution is usually not necessary, and a result close to it is considered acceptable. Among the most popular methods for solving problems in this simplified formulation are stochastic methods, which allow a solution to be obtained in 10²–10³ times less time than algorithms based on exhaustive search. Of particular interest recently have been metaheuristic methods inspired by the cooperative behavior of decentralized self-organizing colonies of living organisms (bees, ants, bacteria, cuckoos, wolves, etc.) pursuing certain goals, usually the satisfaction of food needs. According to the relatively recently proven "no free lunch" theorem, there is no universal algorithm capable of producing better results regardless of the problem being solved. For this reason, the focus of developers' efforts is shifting toward creating and improving specialized algorithms. This paper aims to establish approaches to constructing methods based on swarm intelligence and fuzzy logic. Based on their classification and analysis, possible directions for the "development" of swarm intelligence algorithms at various stages of their implementation (initialization of a population, migration of individuals, quality assessment and screening of unpromising solutions) are proposed by introducing elements of fuzziness to increase their efficiency in solving problems of multidimensional optimization of the parameters of complex socio-economic systems.
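As a hedged illustration of one such direction, the Python sketch below replaces the crisp trial limit used to abandon unpromising solutions in bee-colony-style algorithms with a fuzzy membership combining stagnation and relative fitness. The objective function, the memberships and every parameter are illustrative assumptions, not the specific algorithms analyzed in the paper.

```python
import random

def sphere(x):                      # test objective: global minimum at the origin
    return sum(v * v for v in x)

def mu_unpromising(stagnation, rel_fitness, limit=20):
    # triangular-style memberships combined with the minimum t-norm (fuzzy AND):
    # a source is unpromising if it has stagnated long AND is relatively poor
    mu_stag = min(1.0, stagnation / limit)
    mu_poor = min(1.0, max(0.0, rel_fitness))   # 0 = best in swarm, 1 = worst
    return min(mu_stag, mu_poor)

def fuzzy_swarm(dim=5, n=30, iters=300, bound=5.0):
    pop = [[random.uniform(-bound, bound) for _ in range(dim)] for _ in range(n)]
    stagnation = [0] * n
    best = min(pop, key=sphere)
    for _ in range(iters):
        fits = [sphere(p) for p in pop]
        lo, hi = min(fits), max(fits)
        for i, p in enumerate(pop):
            # neighbourhood move toward/away from a random other source
            j = random.randrange(dim)
            cand = p[:]
            cand[j] += random.uniform(-1, 1) * (cand[j] - random.choice(pop)[j])
            if sphere(cand) < fits[i]:
                pop[i], stagnation[i] = cand, 0
            else:
                stagnation[i] += 1
            rel = (fits[i] - lo) / (hi - lo + 1e-12)
            # fuzzy screening: abandon with probability mu, not at a hard limit
            if random.random() < mu_unpromising(stagnation[i], rel):
                pop[i] = [random.uniform(-bound, bound) for _ in range(dim)]
                stagnation[i] = 0
        best = min([best] + pop, key=sphere)
    return best

if __name__ == "__main__":
    print(f"best fitness: {sphere(fuzzy_swarm()):.3e}")
```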
Outliers in statistical data, which result from erroneously collected information, are often an obstacle to the successful application of machine learning methods in many subject areas. The presence of outliers in training data sets reduces the accuracy of machine learning models and in some cases makes the application of these methods impossible. Currently existing outlier detection methods are unreliable: they are fundamentally unable to detect some types of outliers, while observations that are not outliers are often classified as such. Recently emerging neural network methods for outlier detection are free from this drawback, but they are not universal, since the ability of a neural network to detect outliers depends both on its architecture and on the problem being solved. The purpose of this study is to develop an algorithm for creating and using neural networks that can correctly detect outliers regardless of the problem being solved. This goal is achieved by using the property of some specially created neural networks to show their largest training errors on observations that are outliers. The use of this property, together with a series of computational experiments and the generalization of their results by means of a mathematical formula that modifies a corollary of the Arnold–Kolmogorov–Hecht-Nielsen theorem, made it possible to achieve the stated goal. The developed algorithm proved especially effective in forecasting and controlling interdependent thermophysical and chemical-energy-technological processes of ore raw material processing at operating serial metallurgical enterprises, where the presence of outliers in statistical data is almost inevitable and where, without their identification and exclusion, the construction of neural network models of acceptable accuracy is generally impossible.
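The core property, that a suitably built network shows its largest errors on outliers, can be sketched as follows. This minimal example is only an illustration of the idea, not the article's algorithm or its formula: a small scikit-learn MLP autoencoder is fitted to the whole sample, and observations whose reconstruction error exceeds a robust median-based threshold are flagged.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Synthetic data with five injected gross outliers (illustrative assumption).
rng = np.random.default_rng(42)
X = rng.normal(0.0, 1.0, size=(300, 4))
X[:5] += 8.0

# A narrow-bottleneck MLP trained to reproduce its input: the network cannot
# memorize rare anomalous points well, so its training error peaks on them.
ae = MLPRegressor(hidden_layer_sizes=(2,), activation="relu",
                  max_iter=3000, random_state=0)
ae.fit(X, X)

err = np.linalg.norm(X - ae.predict(X), axis=1)
# robust threshold: median + 3 * (scaled median absolute deviation)
mad = np.median(np.abs(err - np.median(err)))
thr = np.median(err) + 3 * 1.4826 * mad
print("flagged as outliers:", np.where(err > thr)[0])
```

In the spirit of the abstract, the flagged observations would then be excluded before training the production forecasting models.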