
Articles

№ 6(96), 24 December 2021
Rubric: Performance management
Authors: Suvorov A., Neprina A., Petrenko A.


The article presents a comparative analysis of the effectiveness of the ARIMA, ARCH, and GARCH models, a multi-factor forecasting model, and a decision tree model. The functionality of the models can be evaluated on the practical examples presented in the article. The results of applying the Dickey-Fuller test to various data to check for non-stationarity are reported, and the parametric arguments for the models under study are described. The initial data, the course of the study, the results, and charts are presented. Using the R programming language, practical studies of the functionality of the technical and fundamental analysis models were carried out to obtain forecast values for the PJSC "Sberbank" stock price. The software modeling process revealed the strengths and weaknesses of each of the models considered; the best results were shown by the multi-factor model. The paper gives quantitative indicators of the forecast values. A comparative table of statistical indicators summarizing the results of the forecast models is presented, and conclusions are drawn about the models' suitability for such modeling. The study was carried out to identify the models of technical and fundamental analysis that give the most accurate forecast of the stock price, with a view to further implementation in a computer program.
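As a rough Python sketch of the workflow described (the study itself used R), the stationarity check and a subsequent ARIMA fit might look as follows; the synthetic series, the ARIMA order (1, 1, 1), and the forecast horizon are illustrative assumptions, not the paper's fitted values:

```python
# Sketch: augmented Dickey-Fuller stationarity check, then an ARIMA
# forecast, as in the workflow described above. The series is synthetic;
# the paper itself worked in R with real PJSC "Sberbank" quotes.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(seed=1)
# Placeholder for the stock-price series (e.g. daily closing prices).
prices = np.cumsum(rng.normal(0.1, 1.0, size=500)) + 100.0

# Step 1: Dickey-Fuller test; a large p-value suggests non-stationarity,
# in which case the series is differenced (the "d" in ARIMA).
p_value = adfuller(prices)[1]
print(f"ADF p-value: {p_value:.3f}")

# Step 2: fit ARIMA and forecast. The order (1, 1, 1) is an assumption,
# not the order fitted in the paper.
model = ARIMA(prices, order=(1, 1, 1)).fit()
forecast = model.forecast(steps=10)  # 10-step-ahead price forecast
print(forecast)
```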
№ 6(96), 24 December 2021
Rubric: Models and methods
Authors: Butenko I. I., Sapozhkov A., Stroganov A. V.


The article presents a method for extracting Russian-language multicomponent terms from scientific and technical texts based on structural models of terminological collocations. The existing approaches to term extraction based on stable word combination extraction, statistical methods, and hybrid methods are described, and the linguistic aspects of terminology not covered by these methods are noted. The lexical composition of scientific and technical texts is characterized, and a classification of the special vocabulary in such texts is given. The structural features of terminological vocabulary are studied, and the most productive models of multicomponent terminological word combinations in Russian are presented. A method for extracting Russian-language multicomponent terms from scientific and technical texts is proposed, and its stages are described. The first stage involves morphological and syntactic analysis of the text, attributing to each word its grammatical characteristics. Next, parts of speech that cannot be part of Russian multicomponent terms are excluded, as well as stop words that form free word combinations together with the term. The resulting word chains are then matched against the templates of terminological word combinations available in the database of structural models of terms, and checked against the terminological dictionary for the presence of the candidate term under study. The necessity of involving a terminologist to resolve ambiguous cases is substantiated. Each step of the method is illustrated with examples. Prospects for further research are listed, and the need to elaborate the extraction method through further classification of terminological vocabulary according to formal and semantic structures, types of anthropomorphic terms, nomenclature names, and the normativity or non-normativity of terminological units is substantiated.
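The core template-matching stage might be sketched in Python as follows; the tag set, the templates, the stop-word list, and the English sample sentence are invented for illustration, whereas the method itself targets Russian text, a database of structural models, and a terminological dictionary:

```python
# Sketch of the staged extraction described above: exclude forbidden
# parts of speech and stop words, split the text into word chains, and
# match the chains against structural templates of term collocations.
from typing import List, Tuple

# Structural models of terminological collocations (POS patterns).
TEMPLATES = [
    ("ADJ", "NOUN"),
    ("ADJ", "ADJ", "NOUN"),
    ("NOUN", "NOUN"),
]
# Parts of speech that cannot occur inside a multicomponent term.
EXCLUDED = {"VERB", "ADV", "PRON", "CONJ", "PREP"}
STOP_WORDS = {"this", "such", "other"}

def extract_candidates(tagged: List[Tuple[str, str]]) -> List[str]:
    """Return word chains that match one of the term templates."""
    # Stage 1-2: drop excluded POS and stop words, splitting the
    # sentence into contiguous chains of candidate words.
    chains, chain = [], []
    for word, tag in tagged:
        if tag in EXCLUDED or word.lower() in STOP_WORDS:
            if chain:
                chains.append(chain)
            chain = []
        else:
            chain.append((word, tag))
    if chain:
        chains.append(chain)

    # Stage 3: slide every template over every chain.
    candidates = []
    for chain in chains:
        tags = [t for _, t in chain]
        for tpl in TEMPLATES:
            for i in range(len(chain) - len(tpl) + 1):
                if tuple(tags[i:i + len(tpl)]) == tpl:
                    candidates.append(
                        " ".join(w for w, _ in chain[i:i + len(tpl)]))
    return candidates

sentence = [("convolutional", "ADJ"), ("neural", "ADJ"),
            ("network", "NOUN"), ("recognizes", "VERB"),
            ("handwritten", "ADJ"), ("digits", "NOUN")]
print(extract_candidates(sentence))
# ['neural network', 'convolutional neural network', 'handwritten digits']
```

In the full method, each candidate would then be looked up in the terminological dictionary, with ambiguous cases referred to a terminologist.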
№ 6(96), 24 December 2021
Rubric: Models and methods
Author: Kalayda S.


The article studies how the initiator of economic convergence can choose a form of joint business organization that ensures the maximum growth of the joint business under the influence of digitalization, and thereby the maximum improvement of the initiator's competitive capability. The author has developed a model for creating an economic ecosystem as a really effective ecosystem, one that takes into account both the positive and the possible negative consequences of the impact of digitalization on a joint business and ensures the maximum economic benefit of that business. An algorithm has been developed to implement the model for a given convergence level and a specific digitalization product. The main parameters of the model and of the algorithm that implements it are the terms introduced by the author: an economic ecosystem, a potentially most effective economic ecosystem, a real ecosystem, and a really effective ecosystem, each described by indicators of the economic effect and its costs. Following the steps of the algorithm makes it possible to create the version of an economic ecosystem that brings the initiator the highest economic effect at the convergence level considered and with the digitalization product used in the joint business. Comparing the versions of real ecosystems obtained for each digitalization level and product makes it possible to select the final version that produces the maximum economic effect for the initiator across convergence levels and digitalization products, and hence the maximum growth of the initiator's competitive capacity. The economic ecosystem formed on the basis of the developed model and its implementing algorithm gives the initiator of the joint business the greatest advantages in a competitive economy.
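The selection step of the algorithm, comparing versions across convergence levels and digitalization products and keeping the one with the maximum economic effect, might be sketched as follows; all names and numbers are illustrative assumptions, not the author's model:

```python
# Schematic sketch of the comparison step described above: for every
# combination of convergence level and digitalization product, estimate
# the economic effect of the resulting real ecosystem as benefits minus
# costs (negative digitalization consequences enter as costs), then
# keep the version with the maximum effect.
from itertools import product

convergence_levels = ["merger", "alliance", "ecosystem"]
digital_products = ["platform", "mobile_app"]

def economic_effect(level: str, prod: str) -> float:
    """Placeholder for the model's effect estimate for one version."""
    benefits = {"merger": 10.0, "alliance": 12.0, "ecosystem": 15.0}[level]
    costs = {"platform": 4.0, "mobile_app": 6.0}[prod]  # incl. negatives
    return benefits - costs

best = max(product(convergence_levels, digital_products),
           key=lambda v: economic_effect(*v))
print("Most effective version:", best)
```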
№ 6(96), 24 December 2021
Rubric: Models and methods
Authors: Solopov R., Samulchenkov A., Ziryukin V.


Evolutionary modeling is an area of artificial intelligence whose essence is the interpretation of computational processes and the construction of integral computational algorithms from the point of view of their existence, variability, and development in natural systems. All evolutionary modeling methods are optimization methods, since they are based on the principles of the theory of natural selection. One of the most common methods of evolutionary modeling is the genetic algorithm (GA): a method of adaptive search for solutions based on the principles of the theories of evolution and natural selection, with biological terminology preserved in simplified form. Its essence is to determine the fittest individual (solution) by the value of its fitness function during evolution, taking into account the influence of heredity and the external environment. Despite the biological terminology, genetic algorithms are a universal computational tool that can be used to solve a wide range of complex problems, including problems in the electric power industry. The authors consider the use of the genetic algorithm for calculating the steady state of an electrical network (SS EN), since the mathematical model of an electrical network is a system of high-order nonlinear equations in which all the restrictions imposed by the physical properties of the object under consideration are taken into account. Solving this system correctly is the most critical stage in the calculation of the SS EN, and doing so is a rather laborious optimization problem owing to the complexity of operating electrical networks, which makes the search for optimal SS EN calculation methods an important and urgent task. This paper presents the results of developing an analytical apparatus that makes it possible to solve the problem of calculating the steady-state modes of electrical networks with a genetic algorithm implemented in special software.
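A minimal Python sketch of the idea: a genetic algorithm searches for the bus voltage and angle that minimize the power mismatch of the steady-state equations of a toy two-bus network. The network parameters, the GA settings, and the fitness function are illustrative assumptions; the authors' software and network model are not reproduced here.

```python
# Sketch: GA minimizing the residual of nonlinear power-flow equations.
# Toy two-bus network: bus 1 is the slack bus (V1 = 1.0, angle 0),
# bus 2 carries a scheduled load over a line y = G_line + jB_line.
import numpy as np

rng = np.random.default_rng(0)

G_line, B_line = 1.0, -10.0    # line admittance, per unit
P_load, Q_load = 0.5, 0.2      # scheduled load at bus 2, per unit

def mismatch(ind: np.ndarray) -> float:
    """Fitness: norm of the power mismatch at bus 2 (smaller is fitter)."""
    v2, d2 = ind
    p2 = -G_line * v2 * np.cos(d2) - B_line * v2 * np.sin(d2) + G_line * v2**2
    q2 = -G_line * v2 * np.sin(d2) + B_line * v2 * np.cos(d2) - B_line * v2**2
    return float(np.hypot(p2 + P_load, q2 + Q_load))

# Individuals encode (V2, angle2); start from a random population.
pop = rng.uniform([0.8, -0.5], [1.2, 0.5], size=(60, 2))
for _ in range(300):
    fitness = np.array([mismatch(ind) for ind in pop])
    parents = pop[np.argsort(fitness)][:20]            # truncation selection
    children = []
    for _ in range(len(pop) - len(parents)):
        pa, pb = parents[rng.integers(0, len(parents), size=2)]
        child = np.where(rng.random(2) < 0.5, pa, pb)  # uniform crossover
        child = child + rng.normal(0.0, 0.01, size=2)  # Gaussian mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = min(pop, key=mismatch)
print(f"V2 = {best[0]:.4f} pu, angle = {best[1]:.4f} rad, "
      f"mismatch = {mismatch(best):.2e}")
```

In a real network the individual would encode the voltages and angles of all load buses, and the fitness would aggregate the mismatch over every node subject to the physical restrictions of the equipment.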
№ 6(96), 24 December 2021
Rubric: Algorithmic efficiency
Authors: Lozhkarev A., Timofeev I.


The use of global binarization thresholds in image processing does not always give the correct result, especially when processing images with uneven illumination. In some areas of the image, the automatically determined binarization threshold yields well visualized objects, while in other areas the objects needed for analysis become "overexposed" or, conversely, "shaded". Where all objects of interest in the image must be localized, binarization plays a very important role, especially when the object of interest contains information that will be used in the next stages of processing. Multi-gradation images can contain many objects of interest, such as car license plates, train car numbers, people's faces, and defects in manufactured products; each of these cases requires high-quality processing for subsequent recognition. If the processed image contains noise or its brightness is unevenly distributed, binarization can lead to the loss of important information (the loss of part of a symbol, a break in an object's contour) or, conversely, to the emergence of new areas mistakenly added to the object of interest, such as shadows of other objects or dirt on a license plate. The binarization process therefore requires very accurate preliminary calibration for all possible shooting conditions: daylight and dark hours, possible noise (interference in signal transmission), and extreme situations (heavy hail or rain). In this article, the authors investigate the binarization of images with uneven illumination using several local binarization thresholds instead of one global threshold. It is proposed to check the histograms of the obtained fragments for the number of peaks, or "modes": if the histogram of a binarized fragment is unimodal, the fragment needs no further processing and its binarization threshold is defined correctly. The relationship between the binarization threshold and image parameters such as dispersion and smoothness is studied. On fragments where the average brightness differs from the average over all fragments, the binarization threshold is determined incorrectly; if the threshold is set higher, closer to the average value over all fragments, binarization is carried out correctly.
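A Python sketch of the fragment-wise idea is given below; the tile size, the mode-counting heuristic, and the per-fragment Otsu threshold are illustrative assumptions rather than the authors' exact procedure:

```python
# Sketch: split the image into fragments, count histogram modes per
# fragment, leave unimodal fragments alone, and binarize multimodal
# fragments with their own local threshold.
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Classic Otsu threshold for a uint8 fragment."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    total = hist.sum()
    sum_all = float(np.dot(np.arange(256), hist))
    w0 = sum0 = 0.0
    best_t, best_var = 0, 0.0
    for t in range(256):
        w0 += hist[t]
        sum0 += t * hist[t]
        w1 = total - w0
        if w0 == 0.0 or w1 == 0.0:
            continue
        m0, m1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def count_modes(gray: np.ndarray, smooth: int = 15) -> int:
    """Crude mode count: peaks of a smoothed 256-bin histogram."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(float)
    h = np.convolve(hist, np.ones(smooth) / smooth, mode="same")
    peaks = (h[1:-1] > h[:-2]) & (h[1:-1] >= h[2:]) & (h[1:-1] > h.max() * 0.05)
    return int(peaks.sum())

def local_binarize(img: np.ndarray, tile: int = 64) -> np.ndarray:
    """Binarize with a separate threshold per tile (uint8 input)."""
    out = np.zeros_like(img)
    for y in range(0, img.shape[0], tile):
        for x in range(0, img.shape[1], tile):
            frag = img[y:y + tile, x:x + tile]
            if count_modes(frag) <= 1:
                # Unimodal fragment: pure background or solid object,
                # no per-fragment thresholding needed (assumption).
                out[y:y + tile, x:x + tile] = 255 * (frag.mean() > 127)
            else:
                t = otsu_threshold(frag)
                out[y:y + tile, x:x + tile] = np.where(frag > t, 255, 0)
    return out
```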
№ 6(96), 24 December 2021
Rubric: Software engineering
Authors: Mironov V., Gusarenko A., Yusupova N.


The article discusses the use of a situation-oriented approach to the programmatic processing of word-processing documents. The documents under consideration are prepared by the user in Microsoft Word or its analogs and are subsequently used as data sources. The openness of the Office Open XML and OpenDocument formats makes it possible to apply the concept of virtual documents mapped to ZIP archives for programmatic access to the XML components of word documents in a situation-oriented environment. The article substantiates the importance of preliminary agreements on the placement of information in the document for subsequent search and retrieval, for example, by means of pre-prepared templates. For the DOCX and ODT formats, the article discusses the use of key phrases, bookmarks, content controls, and custom XML components to organize the extraction of the entered data. For each option, tree-like models of access to the extracted data are built, along with the corresponding XPath expressions. It is noted that the choice of option depends on the functionality and limitations of the word processor and is characterized by varying complexity of developing a blank template, entering data, and programming the extraction. The applied solution is based on entering metadata into the article using content controls placed in the blank template and bound to elements of a custom XML component. The developed hierarchical situational model (HSM) provides extraction of the XML component, loading it into a DOM object, and XSLT transformations to obtain the resulting data: an error report and JavaScript code for subsequent use of the extracted metadata.
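A minimal Python sketch of the virtual-document idea for DOCX: the package is opened as a ZIP archive and metadata is read from a custom XML component. The file name, the part path, and the element names are assumptions about a hypothetical blank template, not the authors' schema:

```python
# Sketch: treat a DOCX package as a ZIP archive, load its custom XML
# component into a DOM-like tree, and pull out metadata values that
# content controls were bound to.
import zipfile
import xml.etree.ElementTree as ET

DOCX = "article.docx"  # hypothetical document prepared from a template

with zipfile.ZipFile(DOCX) as z:
    # Custom XML parts live under customXml/ inside an OOXML package.
    with z.open("customXml/item1.xml") as part:
        root = ET.parse(part).getroot()

# The element names 'author' and 'title' are assumptions about the
# template's custom XML schema.
for tag in ("author", "title"):
    node = root.find(f".//{tag}")
    print(tag, "=", node.text if node is not None else "<missing>")
```

In the applied solution described above, the extracted component would instead feed an XSLT transformation producing the error report and the JavaScript code.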