IT management

Performance management

Solving the problems of effective business management is associated with the variety of current goals facing a business and, by implication, requires the construction of appropriate models of an efficient business. The article presents two business problems which share a common target, improving business efficiency, but pursue different current goals. The creation or development of any business involves drawing up a specific business plan that includes a list of business development areas whose implementation will increase efficiency. The first problem considered in the article concerns the phased implementation of all efficiency-improvement areas so as to ultimately obtain the greatest efficiency from their realization. The second addresses increasing efficiency by implementing only part of the efficiency-improvement areas from the initial list under certain constraints, for example, limited company resources. To build models that meet these problems, an efficiency criterion is substantiated and proposed in the article, and Algorithms 1 and 2 are developed that make it possible to build efficient business models accounting for the difference in current goals. The authors have developed the multi-stage Algorithm 1 for generating individual sets of efficiency-improvement areas used to solve the tasks at hand. Algorithm 2, executed at each stage of Algorithm 1, is based on the Pareto optimality method, supplemented to take into account the features and objectives of the current tasks set for the business. Using these algorithms made it possible to build efficient business models that not only obtain the economic effect inherent in each efficiency-improvement area, but also ensure its additional growth driven by the properties of the developed algorithms.
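
As a schematic illustration of the Pareto-based selection step, consider the following minimal Python sketch. It is not the authors' Algorithms 1 and 2: the two criteria (economic effect, resource cost), the budget constraint and the ratio heuristic are illustrative assumptions.

```python
# Minimal sketch of Pareto-optimal selection over candidate efficiency-
# improvement areas. The two criteria (effect to maximize, cost to minimize)
# are illustrative assumptions, not the authors' efficiency criterion.

from dataclasses import dataclass

@dataclass(frozen=True)
class Area:
    name: str
    effect: float  # expected economic effect (maximize)
    cost: float    # resource demand (minimize)

def dominates(a: Area, b: Area) -> bool:
    """True if `a` is at least as good as `b` on both criteria and strictly
    better on at least one."""
    return (a.effect >= b.effect and a.cost <= b.cost and
            (a.effect > b.effect or a.cost < b.cost))

def pareto_front(areas: list[Area]) -> list[Area]:
    """Keep only areas not dominated by any other candidate."""
    return [a for a in areas
            if not any(dominates(b, a) for b in areas if b is not a)]

def staged_selection(areas: list[Area], budget: float) -> list[Area]:
    """Multi-stage loop in the spirit of Algorithm 1: at each stage take one
    Pareto-optimal area that still fits the remaining budget (a stand-in for
    the limited-resources constraint), then repeat on the reduced set."""
    chosen, remaining = [], list(areas)
    while remaining:
        feasible = [a for a in pareto_front(remaining) if a.cost <= budget]
        if not feasible:
            break
        best = max(feasible, key=lambda a: a.effect / max(a.cost, 1e-9))
        chosen.append(best)
        budget -= best.cost
        remaining.remove(best)
    return chosen

if __name__ == "__main__":
    candidates = [Area("logistics", 5.0, 2.0), Area("marketing", 3.0, 3.0),
                  Area("automation", 8.0, 6.0), Area("training", 2.0, 1.0)]
    print([a.name for a in staged_selection(candidates, budget=8.0)])
```
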

Software engineering

A neural network structure is designed based on the ability of a certain class of calculators to recombine internal resources in order to produce neuromorphic elements for solving applied problems. The approach relies on a composite material with controlled local conductivity in which volumetric inhomogeneities are formed that can respond to and exert external electrostatic effects. Such inhomogeneities aggregate into stable clusters suitable for modelling the processes that occur during information processing in natural neuronal entities. Using the conductive transitions between substrate-formed neuromorphic clusters as a learning structure makes it possible to increase the reliability of the neural network system. Long-term, non-volatile storage of information about the elements of the training sample in variable structures is possible. The basic approach to information conversion is to manage the electrostatic influence as it passes through the layered structures formed. The response to an input is formed not by propagating a signal through conductive elements with variable conductivity, but by passing the energy impact through a limited volume of metamaterial. Thus, massively parallel processing of information can be achieved, with a mechanism for combining the opinions of independent neural network clusters that influence the final decision. Furthermore, this way of spreading effects in such an environment greatly simplifies adding elements to the neural network: the lack of direct electrical interconnection allows new computational elements to be added without significant rearrangement of the conductive media. Networks of this type are capable of significant growth without loss of experience. Converting the input using modified delta coding prevents premature wear of the reconfigurable network elements. The chosen ways of presenting information and organising neural network computing enabled the creation of limited autonomous oscillations within the volume of the calculator, maintaining circulating memory and allowing network experience to accumulate gradually for subsequent recording in configurable elements. These features led to applying such calculators to the task of developing radio frequency management plans for organising stable communication in a complex electromagnetic environment.
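
The "modified" delta coding itself is not detailed in the abstract; the sketch below shows plain delta coding with a write threshold as one hedged reading of how input conversion can spare reconfigurable elements from needless rewriting. The threshold and signal are illustrative assumptions.

```python
# Plain delta coding of an input signal: only changes between consecutive
# samples are transmitted, so a slowly varying input produces mostly zero
# (no-write) events. The threshold below is an illustrative assumption,
# not the authors' modification.

def delta_encode(samples, threshold=0.0):
    """Yield per-step increments; increments below `threshold` are emitted
    as 0 so that reconfigurable elements are not rewritten needlessly."""
    prev = 0.0
    for s in samples:
        d = s - prev
        if abs(d) <= threshold:
            yield 0.0          # no update event for the network substrate
        else:
            yield d
            prev = s           # commit only the transmitted change

def delta_decode(deltas):
    """Reconstruct the signal by accumulating increments."""
    acc = 0.0
    for d in deltas:
        acc += d
        yield acc

signal = [0.0, 0.1, 0.1, 0.9, 1.0, 1.0]
encoded = list(delta_encode(signal, threshold=0.05))
print(encoded)                 # mostly zeros for a slowly varying input
print(list(delta_decode(encoded)))
```
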
Segmentation of a brain tumor is one of the most difficult tasks in medical image analysis. The purpose of brain tumor segmentation is to create an accurate outline of the tumor areas. Gliomas are the most common type of brain tumor. Diagnosis of patients with this disease is based on analysing magnetic resonance imaging (MRI) results and segmenting the tumor boundaries manually. However, because manual segmentation is time-consuming and error-prone, a fast and reliable automatic segmentation algorithm is needed. In recent years, deep learning methods have shown promising effectiveness in various computer vision problems, such as image classification, object detection and semantic segmentation. A number of deep learning methods have been applied to brain tumor segmentation, and promising results have been achieved. The article proposes a hybrid method for segmenting brain tumors in MRI images based on the U-Net architecture, whose encoder is a deep convolutional neural network pre-trained on the ImageNet image set. The models considered were VGG16, VGG19, MobileNetV2, Inception, ResNet50, EfficientNetB7, InceptionResNetV2, DenseNet201 and DenseNet121. Based on the hybrid method, the TL-U-Net model was implemented, and numerical experiments were carried out to train it with different encoder models for brain tumor segmentation in MRI images. Computer experiments on a set of brain MRI images showed the effectiveness of the proposed approach; the best encoder model turned out to be DenseNet121, which provided segmentation accuracy of MeanIoU = 90.34%, MeanDice = 94.33%, accuracy = 94.17%. The obtained segmentation accuracy estimates are comparable to or exceed similar estimates obtained by other researchers.
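
The hybrid scheme can be sketched in Keras: an ImageNet-pretrained DenseNet121 serves as the U-Net encoder, and a transposed-convolution decoder restores resolution via skip connections. The 256×256 input and decoder widths are illustrative assumptions, not the paper's settings, and the skip layer names follow the Keras DenseNet121 implementation (they may differ across versions); this is a sketch of the idea, not the authors' exact TL-U-Net configuration.

```python
# Hedged sketch of the TL-U-Net idea: U-Net with a pretrained DenseNet121
# encoder. Input size, decoder widths and skip layer names are assumptions.

import tensorflow as tf
from tensorflow.keras import layers, Model

def tl_unet(input_shape=(256, 256, 3), num_classes=1):
    encoder = tf.keras.applications.DenseNet121(
        include_top=False, weights="imagenet", input_shape=input_shape)
    # Feature maps at decreasing spatial resolution for skip connections.
    skip_names = ["conv1/relu", "pool2_relu", "pool3_relu", "pool4_relu"]
    skips = [encoder.get_layer(n).output for n in skip_names]
    x = encoder.output                      # bottleneck, 1/32 resolution
    for skip, filters in zip(reversed(skips), [512, 256, 128, 64]):
        x = layers.Conv2DTranspose(filters, 3, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skip]) # U-Net skip connection
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same")(x)
    outputs = layers.Conv2D(num_classes, 1, activation="sigmoid")(x)
    return Model(encoder.input, outputs, name="TL_U_Net")

model = tl_unet()
# IoU/Dice metrics as reported in the paper would be attached here similarly.
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```
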

Algorithmic efficiency

The problem of distinguishing natural from synthetic diamonds is relevant today. A technique for computer processing of diamond luminescence spectra using the Origin mathematical package is proposed and presented on specific examples. The spectra were measured with a RAOS-3 spectrometer-fluorimeter; a laser with a wavelength of 532 nm was used to excite diamond luminescence. A method is proposed for identifying diamonds of unknown origin by the number of elementary component bands in the luminescence spectrum when it is decomposed into Gaussian curves. Luminescence spectra of faceted diamonds (brilliants) are widely used to study their physical properties. Synthetic faceted diamonds are significantly inferior to natural ones in luminescence intensity; in some cases, their photoluminescence signal is comparable with the noise level of the measuring device. As a result, the instantaneous value of the useful signal can take both positive and negative values over the entire wavelength range of the spectrum, so detecting the useful signal against the background of interference is of great importance. In addition, identifying a diamond requires decomposing the spectrum into elementary components in the form of Gaussian curves: it has been established that the spectra of natural diamonds consist of two peaks, while those of synthetic diamonds contain from three to eight peaks, which indicates a loose structure of the diamond crystal lattice. The efficiency of solving these problems can be significantly improved by using software applications with special functionality. To demonstrate the features and advantages of the automated technique, the Origin mathematical package was chosen, which, in particular, makes it possible to improve the quality of processing a low-intensity luminescence spectrum and to find the number of peaks for the Gaussian curves with sufficient accuracy.
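
Outside Origin, the decomposition step can be reproduced with any least-squares fitter. The sketch below fits a sum of Gaussians with SciPy; the synthetic spectrum, peak positions and widths are placeholders, not measured RAOS-3 data.

```python
# Sketch of decomposing a luminescence spectrum into Gaussian components.
# The synthetic "spectrum" and the two-component initial guess (mirroring
# the natural-diamond case) are illustrative placeholders.

import numpy as np
from scipy.optimize import curve_fit

def gaussian_sum(x, *params):
    """Sum of Gaussians; params are flat triples (amplitude, center, sigma)."""
    y = np.zeros_like(x)
    for a, c, s in zip(params[0::3], params[1::3], params[2::3]):
        y += a * np.exp(-((x - c) ** 2) / (2 * s ** 2))
    return y

# 532 nm excitation implies emission above ~540 nm; the grid is illustrative.
x = np.linspace(540, 800, 600)
true = gaussian_sum(x, 1.0, 575, 12.0, 0.6, 640, 20.0)
spectrum = true + np.random.normal(0, 0.03, x.size)  # noisy stand-in signal

# Initial guess: two components, as established for natural diamonds.
p0 = [0.8, 570, 10.0, 0.5, 650, 15.0]
popt, _ = curve_fit(gaussian_sum, x, spectrum, p0=p0)
print("fitted peak centers:", [round(c, 1) for c in popt[1::3]])
```

Refitting with two to eight components and comparing the residuals would mirror the proposed criterion for identifying a diamond's origin by the number of elementary bands.
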
In modern conditions of constantly growing prices for fuel and energy resources, the problem of increasing the energy and resource efficiency of industrial technological processes has acquired particular relevance. It is especially acute for energy-intensive industries, which include high-temperature processing of mining and chemical raw materials. To reduce the energy intensity of complex chemical-technological processes, it is proposed to use computer simulation, for example, to optimize the operating regimes of existing equipment. The article considers the scientific and practical problem of optimizing the charge heating regimes in the various zones of a roasting conveyor machine used to produce phosphorite pellets from apatite-nepheline ore waste stored in the dumps of mining and processing plants. The specifics of the optimization task (nonlinearity of the objective function, large dimension of the search space, high computational complexity) significantly limit the use of traditional deterministic search methods. This led to the choice of population algorithms, which are based on modeling collective behavior and allow several candidate solutions to be processed simultaneously. The cuckoo search algorithm, distinguished by a small number of "free" parameters affecting convergence, was used to solve the stated optimization task. To select optimal values of these parameters, it was proposed to use the idea of coevolution, which consists in launching several versions of the selected algorithm in parallel with different "settings" for each subpopulation. Managing the chemical-technological system for processing apatite-nepheline ore waste on the basis of the results obtained will minimize the amount of return and ensure an energy-saving operating regime of the roasting conveyor machine.
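
For reference, a minimal cuckoo search with Lévy flights (Mantegna's algorithm) is sketched below on a placeholder objective; the paper's objective function, bounds and parameter values are not reproduced. The "free" parameters pa and alpha are exactly those a coevolutionary scheme would tune per subpopulation.

```python
# Minimal cuckoo search sketch with Lévy flights. The sphere objective is
# a stand-in for the charge heating optimization task described above.

import numpy as np
from math import gamma, pi, sin

rng = np.random.default_rng(0)

def levy_step(dim, beta=1.5):
    """Mantegna's algorithm for Lévy-distributed step lengths."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0, sigma, dim)
    v = rng.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, dim, bounds, n_nests=15, pa=0.25, alpha=0.01, iters=200):
    """`pa` (abandonment rate) and `alpha` (step scale) are the 'free'
    parameters a coevolutionary scheme would tune per subpopulation."""
    lo, hi = bounds
    nests = rng.uniform(lo, hi, (n_nests, dim))
    fit = np.apply_along_axis(f, 1, nests)
    for _ in range(iters):
        best = nests[fit.argmin()]
        for i in range(n_nests):
            trial = np.clip(
                nests[i] + alpha * levy_step(dim) * (nests[i] - best), lo, hi)
            ft = f(trial)
            if ft < fit[i]:
                nests[i], fit[i] = trial, ft
        # Abandon a fraction pa of the worst nests and re-seed them randomly.
        n_drop = int(pa * n_nests)
        if n_drop:
            worst = fit.argsort()[-n_drop:]
            nests[worst] = rng.uniform(lo, hi, (n_drop, dim))
            fit[worst] = np.apply_along_axis(f, 1, nests[worst])
    return nests[fit.argmin()], fit.min()

sphere = lambda x: float(np.sum(x ** 2))    # placeholder objective
x_best, f_best = cuckoo_search(sphere, dim=5, bounds=(-5.0, 5.0))
print(x_best.round(3), round(f_best, 6))
```
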

Laboratory

In this article, we design a user interface for a prototype desktop application that uses the author's neural network for recognizing Japanese texts written in one of the two Japanese syllabaries, katakana or hiragana. During the design, UML notation (a use-case diagram) was used to build the scenarios for using the program, and BPMN notation was used to describe the program's main algorithm. The article also briefly summarizes the two previous articles: the basics of the proposed method for preprocessing machine learning data and the main parameters of the proposed convolutional neural network model, including its efficiency against the reference model EfficientNetB0. In this work, the principles and tool base for designing the interface of the software solution were defined, the usage scenarios and the program's algorithms were designed, and a prototype of the user interface was created.

Research on processes and systems

An inadequate diet can cause a number of illnesses, some of which pose major threats to humanity. Poor diet largely originates from behavioral and social issues rather than environmental factors. Although simulation is a powerful tool for analyzing and addressing behavioral issues, relatively few studies focus on computational modeling of nutrition at the behavioral level. We have reviewed several popular approaches to computational modeling and simulation of dietary decision-making and found no clear favorite. Moreover, modelers rarely pay attention to one of the key behavioral factors, motivation. In the vast majority of models, motivation is either assumed to be exogenously given and hence left out of the model, or not taken into account in any form, even though ignoring incentives significantly reduces the adaptive capabilities of any human-like goal-directed model entity. We aimed to outline a modeling approach that would fit the food choice topic and improve on the available models. This implies offering an intelligible algorithm that is easily applied to statistical data yet offers depth of analysis despite its seeming simplicity. Thus, we present our view of the food choice simulation problem, which employs eating incentives and an original choice mechanism that differs both from the traditional maximizing approaches common to economics and artificial intelligence research and from the dominant psychological computational approaches. We outline a conceptual programming algorithm, supplemented by pseudocode segments, that involves sequential incentive selection (an incentive can result from biological necessities as well as social, intellectual or spiritual needs), incentive-foodstuff coupling (a relation that can be either fixed or dynamic) and elimination of undesirable food options based on incentive ranking (qualitative ranking seems preferable to quantitative ranking, since it more closely resembles the way a regular person thinks). The algorithm suits the agent-based simulation paradigm, yet it is not tied to it and can be fitted to other simulation approaches as well. The algorithm is supposed to be implemented in Java. Since the offered algorithm is conceptual, it requires an implementation to bring about robust conclusions, which is our next goal.
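
A minimal sketch of the described mechanism is given below in Python (the authors plan a Java implementation). The incentives, couplings and food items are invented for illustration; the point is elimination by ranked incentives rather than utility maximization.

```python
# Conceptual sketch: pick incentives in qualitative rank order and eliminate
# foods that conflict with each incentive. All names and values are invented.

# Incentives in qualitative priority order (highest first).
incentives = ["hunger", "health", "social"]

# Incentive -> predicate over a food item (the incentive-foodstuff coupling);
# a coupling could equally be made dynamic (state-dependent).
couplings = {
    "hunger": lambda food: food["calories"] >= 200,   # filling enough
    "health": lambda food: food["sugar"] <= 20,       # not too sugary
    "social": lambda food: food["shareable"],         # fits a shared meal
}

foods = [
    {"name": "salad", "calories": 150, "sugar": 5,  "shareable": True},
    {"name": "pizza", "calories": 600, "sugar": 10, "shareable": True},
    {"name": "cake",  "calories": 450, "sugar": 40, "shareable": True},
    {"name": "steak", "calories": 500, "sugar": 1,  "shareable": False},
]

def choose(foods, incentives, couplings):
    """Eliminate options incentive by incentive; skip an elimination that
    would leave nothing, so lower-ranked incentives cannot veto higher ones."""
    options = list(foods)
    for inc in incentives:                 # sequential incentive selection
        kept = [f for f in options if couplings[inc](f)]
        if kept:                           # eliminate only if options remain
            options = kept
    return options

print([f["name"] for f in choose(foods, incentives, couplings)])
# -> ['pizza'] under these invented couplings
```
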

Information security

Data protection

The article presents the results of a study whose purpose was to build a software model of a multi-stage integrated system for processing finely dispersed ore raw materials. Such raw materials can be the processed waste of mining and processing plants working with apatite-nepheline and other types of ores, which accumulates in large volumes in tailing dumps. These dumps create a significant environmental threat in the territories adjacent to the plants due to weathering, dust formation, and penetration of chemical compounds and substances hazardous to human health into the soil and aquifers. Therefore, improving existing production processes and developing new technological systems for mining and processing plants, including applying the principles of the circular economy and waste recycling, justify the relevance of the chosen research area. The proposed program model is based on trainable trees of fuzzy inference systems (blocks) of the first and second types. This approach made it possible to avoid the unnecessarily complicated fuzzy inference rule bases that arise when a single fuzzy block is used to build a multi-parameter model of the entire multi-stage complex system. Using several fuzzy inference blocks that describe the behavior of individual units of the system, configured in accordance with the physical structure of the system, allows relatively simple rule sets for the individual blocks. The joint selection of their parameters when training the tree of fuzzy blocks makes it possible to achieve high accuracy of the solutions obtained. The novelty of the research results is the proposed software fuzzy model of an integrated system for processing finely dispersed ore raw materials. The results of a simulation experiment conducted in the MatLab environment using a synthetic data set generated in Simulink are presented. They show that the trained fuzzy model reproduces the parameters and variables from the test part of the synthetic set with good fidelity.
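
To make the block-and-tree idea concrete, here is a minimal type-1 Mamdani-style block in plain NumPy, with two first-stage blocks feeding a third. The membership functions, rules and inputs are invented; the paper's actual model is built and trained in MatLab/Simulink and also uses type-2 blocks, which are not shown.

```python
# Minimal type-1 Mamdani-style fuzzy block: simple per-unit rule bases are
# chained into a tree, the crisp output of one block feeding the next.

import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on points a <= b <= c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def fuzzy_block(x1, x2, rules, y_grid):
    """Mamdani inference: AND = min, aggregation = max, centroid defuzz.
    `rules` is a list of (mf1, mf2, out_mf) membership-function triples."""
    agg = np.zeros_like(y_grid)
    for mf1, mf2, out_mf in rules:
        strength = min(mf1(x1), mf2(x2))          # rule firing strength
        agg = np.maximum(agg, np.minimum(strength, out_mf(y_grid)))
    if agg.sum() == 0.0:
        return float(y_grid.mean())
    return float((y_grid * agg).sum() / agg.sum())  # centroid

y = np.linspace(0.0, 1.0, 201)
low  = lambda v: tri(v, -0.5, 0.0, 0.5)
mid  = lambda v: tri(v,  0.0, 0.5, 1.0)
high = lambda v: tri(v,  0.5, 1.0, 1.5)

rules = [(low, low, low), (low, high, mid), (high, low, mid), (high, high, high)]

# Two first-stage blocks for separate units feed a third block, mirroring a
# tree configured in accordance with the physical structure of the system.
u1 = fuzzy_block(0.8, 0.3, rules, y)
u2 = fuzzy_block(0.2, 0.9, rules, y)
print(round(fuzzy_block(u1, u2, rules, y), 3))
```

Joint training would then adjust the membership-function parameters of all blocks together against measured data, as the abstract describes.
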

Simulation

Theory and practice

A decomposition method for discrete event simulation models is presented based on the author's own DVCompute++ Simulator, a collection of general-purpose C++ programming libraries for creating and running simulation models. The aim of the research was to find an approach by which arbitrary models could be divided into parts, these parts divided into smaller components and so on, resulting in a hierarchy of nested sub-models that can be considered in isolation as independent entities. Currently, such sub-models can be created in C++ code, but in the future they could be created graphically as diagrams or as text in a specialized modeling language; the sub-models can be reused, which makes them similar to library units from GPSS STUDIO. These ways of creating sub-models can be combined in any order at any level of the nested hierarchy, and the work can be performed by different people with different skills. Moreover, the article shows that the considered decomposition method can be applied to distributed simulation, which DVCompute++ Simulator also supports. All this is possible because the author applied functional programming techniques, in which the simulation model is considered a composition of computations. Model decomposition is then the splitting of computations into parts, which can be connected to each other like constructor pieces. There are two basic kinds of computations: blocks similar to the GPSS language and discrete signal computations similar to reactive programming. The article provides diagrams of sub-models and the corresponding C++ code, illustrating the author's suggested method of decomposing discrete event simulation models.
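
The "model as a composition of computations" idea can be illustrated independently of DVCompute++ (whose C++ API is not reproduced here). In the Python sketch below, each block maps a time-ordered transaction stream to another stream, so sub-models nest and combine like constructor pieces; all names are invented for illustration.

```python
# Illustration only (not the DVCompute++ API): a simulation model as a
# composition of computations over time-ordered transaction streams.

import heapq, random

def source(n, mean_gap):
    """Root computation: emit n transactions as (time, id) pairs."""
    t = 0.0
    for i in range(n):
        t += random.expovariate(1.0 / mean_gap)
        yield (t, i)

def delay_block(stream, service):
    """A GPSS-like block: hold each transaction for a service time."""
    for t, i in stream:
        yield (t + service, i)

def merge(*streams):
    """Combine sub-model outputs in time order (the 'constructor' joint)."""
    return heapq.merge(*streams)

def compose(stream, *blocks):
    """Nest blocks around a stream; decomposition is the inverse operation."""
    for block in blocks:
        stream = block(stream)
    return stream

random.seed(1)
line_a = compose(source(3, 2.0), lambda s: delay_block(s, 1.5))
line_b = compose(source(3, 3.0), lambda s: delay_block(s, 0.5))
for t, i in merge(line_a, line_b):
    print(f"t={t:5.2f}  transaction {i}")
```
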