IT management

Performance management

Software production has become one of the largest industries in the world economy, and in terms of recent growth rates of its key indicators it ranks first among all major industries. With the availability of software solutions from foreign manufacturers significantly limited, the supply from domestic software manufacturers is increasing, and so is the need for models and methods that make it possible to control the software development process and guarantee the cost, schedule, and quality of the result. The uniqueness of the industry means that traditional project management models cannot be expected to succeed in software projects, especially with regard to quantitative assessment of project parameters. The main differences from other types of project management are that the result of a software development project is intangible, the technologies used in the project change rapidly, and experience gained in managing one software development project is often not applicable to other projects. The fundamental difference between software development projects and other complex projects lies in the key stage – software construction, which includes coding and debugging as well as verification, unit and integration testing. Errors made at the construction stage have the most significant impact on the project result, since they increase the initially planned amount of work. In the known models of the software development process, the amount of work is considered fixed in advance, and the construction stage is not singled out as a separate contour that determines the stochastic nature of the amount of work. The goal of this paper is to build a simulation model of the software construction process that takes into account the dependencies according to which the main parameters of the simulated process change over time. The model makes it possible to quantify and optimize project parameters according to one or more selected criteria. The model is built within the framework of the system dynamics approach; the AnyLogic system is used as the simulation environment. The results of simulation experiments are presented to demonstrate that the proposed model can be used to study the software construction process or as a mechanism to support managerial decision-making.
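
As a rough illustration of the stock-and-flow logic such a model formalizes, the Python sketch below simulates a construction stage in which stochastically injected defects return rework to the backlog, so the total amount of work exceeds the planned scope. All names, rates, and the stepping scheme are illustrative assumptions and do not reproduce the AnyLogic model described in the paper.

```python
import random

# Minimal stock-and-flow sketch of a construction stage with rework.
# Stocks: backlog of tasks and completed work; flow: productivity draws tasks
# from the backlog, and a stochastic defect rate returns rework to it.
# All parameter values below are illustrative assumptions.

def simulate(initial_scope=1000.0, productivity=12.0, defect_rate=0.15,
             rework_per_defect=0.6, dt=1.0, seed=1):
    random.seed(seed)
    backlog, completed, t = initial_scope, 0.0, 0.0
    while backlog > 1e-6:
        done = min(backlog, productivity * dt)
        # Defects are injected stochastically and enlarge the planned scope.
        defects = sum(1 for _ in range(int(done)) if random.random() < defect_rate)
        rework = defects * rework_per_defect
        backlog += rework - done
        completed += done
        t += dt
    return t, completed

duration, total_work = simulate()
print(f"construction finished after {duration:.0f} time steps, "
      f"total work done: {total_work:.0f} task-units (vs. 1000 planned)")
```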

Software engineering

Algorithmic efficiency

This paper is devoted to the development of mathematical models of stock price volatility in financial markets, with a focus on the GARCH family of models. The paper proposes to consider these models from a new perspective: as recurrent rather than autoregressive. The main idea is that GARCH econometric models can be interpreted as recurrent neural networks, especially after an activation function is introduced into the variance dynamics equation. The relevance of the study stems from the constant need to improve the accuracy of volatility forecasting in modern financial markets, especially in the context of the Russian financial system, where accurate forecasts play a key role in financial decision making. The aim of the study is to evaluate the possibility of representing GARCH models in the form of recurrent neural networks and to assess their applicability to volatility forecasting in Russian financial markets. The main objective is to develop and test GARCH-based recurrent neural networks that combine the advantages of econometric models and machine learning models. The article proposes a modification of the standard GARCH model called GARCH-RNN, a recurrent neural network with a multidimensional hidden state and the ReLU activation function. The methods used include econometric analysis of stock price volatility and comparison of the forecast accuracy of the GARCH and GARCH-RNN models on Moscow Stock Exchange data. The experiments on these data showed that the GARCH-RNN model provides volatility forecasting accuracy comparable to that of traditional GARCH models. The results of the study confirm the potential of the new approach for volatility forecasting in Russian financial markets, opening prospects for improving forecasts and making informed decisions in the market.
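
To make the stated GARCH-to-RNN correspondence concrete, the sketch below writes the GARCH(1,1) variance recursion as a one-step recurrent cell and then generalizes it to a multidimensional hidden state with ReLU activation, in the spirit of the GARCH-RNN modification. The weights, dimensions, and simulated returns are illustrative assumptions rather than the estimated model from the paper.

```python
import numpy as np

def garch11_step(sigma2_prev, eps_prev, omega=0.05, alpha=0.08, beta=0.9):
    """Classic GARCH(1,1) recursion: sigma2_t = omega + alpha*eps^2_{t-1} + beta*sigma2_{t-1}."""
    return omega + alpha * eps_prev**2 + beta * sigma2_prev

def garch_rnn_step(h_prev, eps_prev, W, U, b):
    """GARCH-style recurrent cell: multidimensional hidden state with ReLU activation.
    The scalar GARCH(1,1) case is recovered with 1-d h, W=[beta], U=[alpha], b=[omega]."""
    return np.maximum(0.0, W @ h_prev + U * eps_prev**2 + b)   # ReLU keeps variances non-negative

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 1.0, size=500)          # stand-in for demeaned log returns

# Scalar GARCH(1,1) filtering of the conditional variance.
sigma2 = np.empty_like(returns)
sigma2[0] = returns.var()
for t in range(1, len(returns)):
    sigma2[t] = garch11_step(sigma2[t - 1], returns[t - 1])

# The same recursion with a 4-dimensional hidden state and illustrative random
# weights; the variance forecast is read out as a linear combination of the state.
dim = 4
W = 0.2 * rng.random((dim, dim))
U = 0.05 * rng.random(dim)
b = 0.01 * np.ones(dim)
readout = np.ones(dim) / dim
h = np.full(dim, returns.var())
for t in range(1, len(returns)):
    h = garch_rnn_step(h, returns[t - 1], W, U, b)

print("last GARCH(1,1) variance:", sigma2[-1], " last GARCH-RNN variance:", readout @ h)
```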

Outliers in statistical data, which result from erroneously collected information, are often an obstacle to the successful application of machine learning methods in many subject areas. The presence of outliers in training data sets reduces the accuracy of machine learning models and in some cases makes the application of these methods impossible. Currently existing outlier detection methods are unreliable: they are fundamentally unable to detect some types of outliers, while observations that are not outliers are often classified as such. Recently emerged neural network methods for outlier detection are free from this drawback, but they are not universal, since the ability of a neural network to detect outliers depends both on the architecture of the network itself and on the problem being solved. The purpose of this study is to develop an algorithm for creating and using neural networks that can correctly detect outliers regardless of the problem being solved. This goal is achieved by exploiting the property of certain specially constructed neural networks to show their largest training errors on the observations that are outliers. The use of this property, together with a series of computational experiments whose results were generalized by a mathematical formula modifying a corollary of the Arnold – Kolmogorov – Hecht-Nielsen theorem, made it possible to achieve the stated goal. The developed algorithm proved especially effective in forecasting and controlling the interdependent thermophysical and chemical-energy-technological processes of ore raw material processing at operating metallurgical enterprises, where the presence of outliers in statistical data is almost inevitable and where, without their identification and exclusion, the construction of neural network models of acceptable accuracy is generally impossible.
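
A minimal sketch of the underlying idea (train a network on the full sample and flag the observations with the largest training residuals as outlier candidates) is given below using scikit-learn's MLPRegressor on synthetic data. The architecture, the three-sigma cut-off, and the data are assumptions for illustration and do not reproduce the paper's algorithm or its theorem-based network sizing.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(42)

# Synthetic regression data with a few deliberately corrupted targets (outliers).
X = rng.uniform(-3, 3, size=(300, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + rng.normal(0, 0.05, size=300)
outlier_idx = rng.choice(300, size=6, replace=False)
y[outlier_idx] += rng.choice([-4.0, 4.0], size=6)          # gross measurement errors

# A deliberately small network: it can fit the regular structure of the data
# but not the isolated corrupted points, so those points keep large residuals.
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(X, y)

residuals = np.abs(y - net.predict(X))
threshold = residuals.mean() + 3 * residuals.std()          # illustrative cut-off
flagged = np.where(residuals > threshold)[0]

print("injected outliers  :", sorted(outlier_idx.tolist()))
print("flagged as outliers:", sorted(flagged.tolist()))
```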

Effective functioning of complex socio-economic systems in conditions of uncertainty is impossible without solving many problems of supporting management decision-making. These include improving the quality of manufactured products, reducing production costs, ensuring energy and resource conservation, reducing transportation costs, increasing the reliability of the supply chain, forming a balanced portfolio of projects, and others. Their mathematical formulation typically requires searching for a global extremum of the objective function; in the case of a multi-criteria formulation, it involves convolutions of criteria that must be satisfied subject to various constraints. In this setting, finding the optimal solution is usually not necessary, and a result close to it is considered acceptable. Among the most popular methods for solving problems in this simplified formulation are stochastic methods, which make it possible to obtain a solution 10²–10³ times faster than algorithms based on exhaustive search. Of particular interest recently are metaheuristic methods inspired by the cooperative behavior of decentralized self-organizing colonies of living organisms (bees, ants, bacteria, cuckoos, wolves, etc.) pursuing certain goals, usually the satisfaction of food needs. According to the relatively recently proven "no free lunch" theorem, there is no universal algorithm capable of producing better results regardless of the problem being solved. For this reason, the focus of developers' efforts is shifting toward creating and improving specialized algorithms. This paper aims to establish approaches to constructing methods based on swarm intelligence and fuzzy logic algorithms. Based on their classification and analysis, possible directions for the "development" of swarm intelligence algorithms at various stages of their implementation (initialization of a population, migration of individuals, quality assessment and screening of unpromising solutions) are proposed by introducing elements of fuzziness to increase their efficiency in solving problems of multidimensional optimization of the parameters of complex socio-economic systems.
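
One of the directions mentioned, introducing fuzziness into the migration stage of a swarm algorithm, can be illustrated by a particle swarm optimizer whose inertia weight is set by a toy fuzzy rule on a normalized stagnation measure. The rule base, membership functions, and test objective below are assumptions for illustration, not algorithms from the paper.

```python
import numpy as np

def sphere(x):                        # simple multidimensional test objective
    return float(np.sum(x ** 2))

def fuzzy_inertia(stagnation):
    """Toy fuzzy rule: if the swarm stagnates, raise inertia to explore;
    if it keeps improving, lower inertia to exploit. Triangular memberships."""
    low = max(0.0, 1.0 - 2.0 * stagnation)        # membership of 'improving'
    high = max(0.0, 2.0 * stagnation - 1.0)       # membership of 'stagnating'
    mid = 1.0 - low - high
    return 0.4 * low + 0.7 * mid + 0.9 * high     # weighted defuzzification

def pso(dim=10, particles=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([sphere(p) for p in x])
    gbest = pbest[pbest_f.argmin()].copy()
    gbest_f, since_improve = pbest_f.min(), 0
    for _ in range(iters):
        w = fuzzy_inertia(min(1.0, since_improve / 20.0))   # fuzzy-controlled inertia
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        f = np.array([sphere(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        if pbest_f.min() < gbest_f:
            gbest_f, gbest = pbest_f.min(), pbest[pbest_f.argmin()].copy()
            since_improve = 0
        else:
            since_improve += 1
    return gbest_f

print("best objective value found:", pso())
```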

Information security

Models and methods

The article uses purpose-built dynamic mathematical models to explore the processes that arise during the uncontrolled connection of a reactive load to high-voltage three-phase networks with a solidly grounded neutral. When transformer or reactor equipment is switched onto the network at an unfavorable moment, shock inrush currents can occur that exceed the rated current by tens of times. These currents contain aperiodic components that magnetize the steel cores of the devices. The settings of the relay protection against current surges then have to be adjusted, which reduces its sensitivity and responsiveness when it is triggered in real short-circuit modes. An effective technical solution for reducing shock currents is a controlled phase-by-phase drive of the main contacts of the circuit breakers. Simulation of the circuit dynamics was carried out in MATLAB and the Multisim software to assess favorable switching moments. An analytical expression is derived for neutralizing the aperiodic component of the flux linkage of the magnetic cores; under this condition, the non-sinusoidal surges of magnetizing currents do not exceed the specified values monitored by the protection. The practical difficulty of shock-free connection of power transformers in no-load mode whose secondary windings are connected in star and delta circuits is noted: in the secondary windings, with the initial setting of the phase-by-phase switching of the main contacts of the circuit breaker, the symmetry of the phase flux linkages is broken. The simulation results confirmed a possible solution to the soft-switching problem in this case. It consists of changing the design of the transformer by introducing high-voltage switches into the delta phases of the corresponding secondary winding, which must be open during the start-up of the transformer and then closed at a predictable moment. A block diagram of the operation algorithm of the information part for soft phase-by-phase switching of the main contacts of a high-voltage circuit breaker has been developed. The developed package of dynamic mathematical models makes it possible, by processing data on the instantaneous values of the network phase voltages, to form a shock-free phase-by-phase connection of a reactive load without aperiodic current components.
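
The switching-moment idea can be illustrated on a simplified single-phase linear R-L branch, for which the textbook closed-form solution shows that the aperiodic current component vanishes when the closing angle equals the impedance angle. The sketch below ignores core saturation and the three-phase winding connections treated by the paper's models, and its parameter values are illustrative.

```python
import numpy as np

# Single-phase linear R-L energization: u(t) = Um*sin(w*t + alpha) applied at t = 0.
# The closed-form current splits into a periodic and an aperiodic (DC) component:
#   i(t) = Im*sin(w*t + alpha - phi) - Im*sin(alpha - phi)*exp(-t/tau),
# so the aperiodic part vanishes when the closing angle alpha equals the
# impedance angle phi = arctan(w*L / R). Parameter values are illustrative.

def energization_current(t, alpha, Um=311.0, R=0.5, L=0.1, f=50.0):
    w = 2 * np.pi * f
    Z = np.hypot(R, w * L)
    phi = np.arctan2(w * L, R)
    tau = L / R
    Im = Um / Z
    periodic = Im * np.sin(w * t + alpha - phi)
    aperiodic = -Im * np.sin(alpha - phi) * np.exp(-t / tau)
    return periodic + aperiodic

t = np.linspace(0.0, 0.2, 2001)
phi = np.arctan2(2 * np.pi * 50.0 * 0.1, 0.5)

worst = energization_current(t, alpha=phi - np.pi / 2)   # largest aperiodic offset
best = energization_current(t, alpha=phi)                # aperiodic component neutralized
print(f"peak current, worst-case closing angle : {np.abs(worst).max():7.1f} A")
print(f"peak current, closing at alpha = phi   : {np.abs(best).max():7.1f} A")
```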

Laboratory

Information processes modeling

A neuro-fuzzy model of resource provision for the innovative activity of an industrial enterprise is proposed. The model implements a two-stage procedure for describing and managing the innovative activity of an industrial enterprise: at the first stage, interaction resources are classified based on an extended VRIO analysis of the interaction profile; at the second stage, an innovative activity strategy is selected. The neuro-fuzzy model of resource provision is based on stacking of individual machine learning models: the k-nearest neighbors method, random forest, and a multilayer perceptron. The classification results of the individual models are combined by a trained tree of fuzzy inference systems that performs the final classification, which increases accuracy compared to the individual models. A distinctive feature of the model is the use of a fuzzy inference system to assess the probability of resource availability, which is used in planning the demand for the resource and makes it possible to take expert judgments into account as input data. Testing of the neuro-fuzzy model, carried out in MATLAB on the problem of assessing the resource provision of an innovation process during the interaction of a regional instrument-making enterprise with one of its counterparties, demonstrated the model's operability and high accuracy in classifying the resources of innovative interaction.
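
A simplified sketch of the stacking scheme is shown below: the three base classifiers produce class probabilities that a second-level combiner turns into the final decision. Logistic regression is used here only as a stand-in for the trained tree of fuzzy inference systems described in the article, and the synthetic data are an assumption for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the resource-interaction features produced by the
# extended VRIO analysis; the real model is trained on enterprise data.
X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base_models = [
    ("knn", KNeighborsClassifier(n_neighbors=7)),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
]

# The base models' class probabilities feed a second-level combiner; logistic
# regression stands in for the paper's tree of fuzzy inference systems.
stack = StackingClassifier(estimators=base_models,
                           final_estimator=LogisticRegression(max_iter=1000),
                           stack_method="predict_proba")
stack.fit(X_train, y_train)
print("stacked accuracy on the held-out sample:", stack.score(X_test, y_test))
```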

Researching of processes and systems

Fifty years ago, in 1974, A. A. Denisov proposed a theory based on a dialectical generalization of the laws of functioning and development of systems of various physical natures, which he called the theory of the information field. The theory rests on the apparatus of mathematical field theory, used to explain the laws of information reflection, which determined its name. On the basis of this theory, significant results were obtained for the study of motion control processes in continuous spatio-temporal and arbitrarily evolving situations. The use of this theory for the study of open distributed information systems appears promising. Subsequently, a discrete version of the theory was developed, which explains the process of reflection and transformation of information and became the basis for a number of practical applications, some of which are given in this article. The article characterizes the prerequisites for the emergence of the theory, its main ideas and concepts, and the contribution that, over these 50 years, scientists and students united by the Scientific and Pedagogical School "System Analysis in Engineering and Control" have made, on the basis of this theory, to the development of systems theory, computer science, and other systems sciences. Information is provided on the application of A. A. Denisov's ideas and on the models developed for specific applications on their basis. The authors of the article, including Anatoly Alekseevich's students, have developed and applied, and continue to develop, models of A. A. Denisov's information theory, proving the usefulness of theoretical knowledge for solving practical problems.