A neural network structure is designed around the ability of a certain class of computing devices to recombine their internal resources into neuromorphic elements for solving applied problems. The approach relies on a composite material with controlled local conductivity, in which volumetric inhomogeneities are formed that can both respond to and exert external electrostatic effects. Such inhomogeneities aggregate into stable clusters suitable for modelling the processes that occur during information processing in natural neuronal structures. Using the conductive transitions between substrate-formed neuromorphic clusters as the trainable structure increases the reliability of the neural network system and allows long-term, non-volatile storage of information about the elements of the training sample in these variable structures. The basic principle of information conversion is to control the electrostatic influence as it passes through the layered structures that are formed. The response to an input is produced not by propagating a signal through conductive elements with variable conductivity, but by passing the energy impact through a limited volume of the metamaterial. This yields massively parallel information processing together with a mechanism for combining the opinions of independent neural network clusters that contribute to the final decision. Furthermore, this way of spreading effects through the medium greatly simplifies the addition of elements to the neural network: the absence of direct electrical interconnections makes it possible to add new computational elements without significant rearrangement of the conductive media, so networks of this type can grow substantially without loss of accumulated experience. Converting the inputs with a modified delta coding prevents premature wear of the reconfigurable network elements. The way information is represented and the way the neural network computation is organised make it possible to sustain limited autonomous oscillations within the volume of the device that maintain a circulating memory and gradually accumulate network experience for subsequent recording into the configurable elements. These features led to the application of such devices to the task of developing radio frequency management plans for organising stable communication in a complex electromagnetic environment.
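The abstract does not describe the "modified" variant of delta coding, but the underlying idea can be illustrated with a minimal sketch: only differences between consecutive input frames are recorded, and differences below a threshold are suppressed, so unchanged inputs do not rewrite the reconfigurable elements. The threshold value and the example frames below are assumptions made only for illustration.

```python
import numpy as np

def delta_encode(frames, threshold=0.0):
    """Encode a sequence of input frames as differences between consecutive
    frames; changes smaller than `threshold` are suppressed, so near-constant
    inputs do not touch the reconfigurable elements."""
    frames = np.asarray(frames, dtype=float)
    deltas = np.diff(frames, axis=0)
    deltas[np.abs(deltas) < threshold] = 0.0
    return frames[0], deltas

def delta_decode(first_frame, deltas):
    """Reconstruct the (approximate) sequence from the first frame and deltas."""
    return np.vstack([first_frame, first_frame + np.cumsum(deltas, axis=0)])

# Example: only two of the four time steps actually change the stored state.
signal = np.array([[1.0, 2.0], [1.0, 2.0], [1.5, 2.0], [1.5, 2.5]])
first, d = delta_encode(signal, threshold=0.1)
restored = delta_decode(first, d)
print(d)
print(restored)
```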
Segmentation of brain tumors is one of the most difficult tasks in medical image analysis. The purpose of brain tumor segmentation is to produce an accurate outline of the tumor regions. Gliomas are the most common type of brain tumor. Diagnosis of patients with this disease is based on the analysis of magnetic resonance imaging (MRI) results and manual segmentation of the tumor boundaries. Because manual segmentation is time-consuming and error-prone, a fast and reliable automatic segmentation algorithm is needed. In recent years, deep learning methods have shown promising effectiveness in various computer vision problems such as image classification, object detection and semantic segmentation. A number of deep learning methods have been applied to brain tumor segmentation, with promising results. The article proposes a hybrid method for segmenting brain tumors in MRI images based on the U-Net architecture, in which the encoder is a deep convolutional neural network pre-trained on the ImageNet image set. The encoder models considered were VGG16, VGG19, MobileNetV2, Inception, ResNet50, EfficientNetB7, InceptionResNetV2, DenseNet201 and DenseNet121. On the basis of this hybrid method the TL-U-Net model was implemented, and numerical experiments were carried out to train it with the different encoder models for brain tumor segmentation from MRI images. Computer experiments on a set of brain MRI images showed the effectiveness of the proposed approach; the best encoder model turned out to be DenseNet121, which yielded segmentation accuracy of MeanIoU = 90.34%, MeanDice = 94.33% and accuracy = 94.17%. The obtained segmentation accuracy estimates are comparable to or exceed similar estimates obtained by other researchers.
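The authors' TL-U-Net code is not reproduced here; the sketch below only illustrates the same idea (a U-Net whose encoder is an ImageNet-pretrained DenseNet121) using the open-source `segmentation_models` Keras library, with a Dice loss and IoU/F-score metrics. The image size, batch size and random placeholder data are assumptions for the example.

```python
import os
os.environ["SM_FRAMEWORK"] = "tf.keras"   # select the tf.keras backend before import

import numpy as np
import segmentation_models as sm

BACKBONE = "densenet121"                  # best encoder reported in the article
preprocess = sm.get_preprocessing(BACKBONE)

# Binary tumor mask: one output channel with a sigmoid activation.
model = sm.Unet(BACKBONE, encoder_weights="imagenet", classes=1, activation="sigmoid")
model.compile(
    optimizer="adam",
    loss=sm.losses.DiceLoss(),
    metrics=[sm.metrics.IOUScore(), sm.metrics.FScore()],  # IoU and Dice (F1)
)

# Placeholder data: x are (N, 256, 256, 3) MRI slices, y are binary masks.
x = preprocess(np.random.rand(8, 256, 256, 3).astype("float32"))
y = (np.random.rand(8, 256, 256, 1) > 0.5).astype("float32")
model.fit(x, y, batch_size=2, epochs=1)
```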
№ 3(105), 16 June 2023. Rubric: Algorithmic efficiency. Authors: Borisov V. V., Bulygina O. V., Vereikina E.
In modern conditions of constantly rising prices for fuel and energy resources, the problem of increasing the energy and resource efficiency of technological processes at industrial enterprises has become particularly relevant. It is especially acute for energy-intensive industries, which include the high-temperature processing of mining and chemical raw materials. To reduce the energy intensity of complex chemical-technological processes, it is proposed to use computer simulation, for example to optimize the operating regimes of existing equipment. The article considers the scientific and practical problem of optimizing the charge heating regimes in the various zones of the roasting conveyor machine used to produce phosphorite pellets from apatite-nepheline ore waste stored in the dumps of mining and processing plants. The specifics of the optimization task (nonlinearity of the objective function, large dimension of the search space, high computational complexity) significantly limit the use of traditional deterministic search methods. This motivated the choice of population algorithms, which are based on modeling collective behavior and are distinguished by the ability to process several candidate solutions simultaneously. The cuckoo search algorithm, which is notable for the small number of "free" parameters affecting its convergence, was used to solve the stated optimization task. To select the optimal values of these parameters, it was proposed to use the idea of coevolution, which consists in launching several versions of the chosen algorithm in parallel with different "settings" for each subpopulation. Managing the chemical-technological system for processing apatite-nepheline ore waste on the basis of the results obtained will minimize the amount of return material and ensure an energy-saving operating regime of the roasting conveyor machine.
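For illustration, a minimal sketch of cuckoo search with Lévy flights and a "coevolutionary" launch of several parameter settings is given below. The actual charge-heating objective from the article is not available, so a sphere test function stands in for it, and all parameter values are assumptions.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(beta, size, rng):
    """Levy-distributed random step (Mantegna's algorithm)."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma, size)
    v = rng.normal(0.0, 1.0, size)
    return u / np.abs(v) ** (1 / beta)

def cuckoo_search(f, bounds, n_nests=15, pa=0.25, alpha=0.01, beta=1.5,
                  iters=300, seed=0):
    """Basic cuckoo search: Levy-flight moves around the best nest plus
    abandonment of roughly a fraction `pa` of nests at every iteration."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = len(lo)
    nests = rng.uniform(lo, hi, (n_nests, dim))
    fit = np.apply_along_axis(f, 1, nests)
    for _ in range(iters):
        best = nests[fit.argmin()]
        # Levy-flight move scaled by the distance to the current best nest
        new = np.clip(nests + alpha * levy_step(beta, (n_nests, dim), rng) * (nests - best),
                      lo, hi)
        new_fit = np.apply_along_axis(f, 1, new)
        improved = new_fit < fit
        nests[improved], fit[improved] = new[improved], new_fit[improved]
        # Abandon a fraction pa of nests and rebuild them at random positions
        abandon = rng.random(n_nests) < pa
        if abandon.any():
            nests[abandon] = rng.uniform(lo, hi, (abandon.sum(), dim))
            fit[abandon] = np.apply_along_axis(f, 1, nests[abandon])
    i = fit.argmin()
    return nests[i], fit[i]

def sphere(x):                      # stand-in for the charge-heating objective
    return float(np.sum(x ** 2))

# "Coevolution": several subpopulations run with different free-parameter
# settings; the best solution found across all settings is kept.
settings = [dict(pa=0.1, alpha=0.005), dict(pa=0.25, alpha=0.01), dict(pa=0.4, alpha=0.05)]
results = [cuckoo_search(sphere, [(-5.0, 5.0)] * 4, **s) for s in settings]
best_x, best_f = min(results, key=lambda r: r[1])
print(best_x, best_f)
```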
№ 3(105), 16 June 2023. Rubric: Algorithmic efficiency. Authors: Zienko S., Yakimenko I., Zhbanova V.
The problem of distinguishing natural from synthetic diamonds (brilliants) is relevant today. A technique for computer processing of diamond luminescence spectra using the Origin mathematical package is proposed and illustrated with specific examples. The spectra were measured with a RAOS-3 spectrometer-fluorimeter, with a 532 nm laser used to excite the diamond luminescence. A method is proposed for identifying diamonds of unknown origin by the number of elementary component bands in the luminescence spectrum when it is decomposed into Gaussian curves. Luminescence spectra of faceted diamonds (brilliants) are widely used to study their physical properties. Synthetic faceted diamonds are significantly inferior to natural ones in luminescence intensity; in some cases the photoluminescence signal of the former is comparable with the noise level of the measuring device, so the instantaneous value of the useful signal can take both positive and negative values over the entire wavelength range of the spectrum. Therefore, detecting the useful signal against the background of interference is of great importance. In addition, identifying a diamond requires solving the problem of decomposing the spectrum into elementary components in the form of Gaussian curves, since it has been established that the spectra of natural diamonds consist of two peaks, while those of synthetic diamonds contain from three to eight peaks, which indicates a looser structure of the diamond crystal lattice. The efficiency of solving these problems can be significantly improved by using software applications with special functionality. To demonstrate the features and advantages of the automated technique, the Origin mathematical package was chosen; in particular, it makes it possible to improve the quality of processing of a weak luminescence spectrum and to find the number of Gaussian peaks with sufficient accuracy.
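The article performs the decomposition in Origin; a rough Python analogue of the same step is sketched below: fit a sum of N Gaussian components to a spectrum and observe how the residual error changes with N. The synthetic two-band spectrum, noise level and initial guesses are invented for the example.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussians(x, *params):
    """Sum of Gaussians; params = (A1, mu1, sigma1, A2, mu2, sigma2, ...)."""
    y = np.zeros_like(x, dtype=float)
    for a, mu, sigma in np.reshape(params, (-1, 3)):
        y += a * np.exp(-((x - mu) ** 2) / (2 * sigma ** 2))
    return y

def fit_n_peaks(wavelength, intensity, n_peaks):
    """Least-squares fit with n_peaks Gaussian components; returns params and RMSE."""
    guesses = np.linspace(wavelength.min(), wavelength.max(), n_peaks + 2)[1:-1]
    p0 = np.ravel([[intensity.max(), mu, 20.0] for mu in guesses])
    params, _ = curve_fit(gaussians, wavelength, intensity, p0=p0, maxfev=20000)
    rmse = np.sqrt(np.mean((gaussians(wavelength, *params) - intensity) ** 2))
    return params, rmse

# Synthetic "natural diamond" spectrum: two bands plus instrument noise.
rng = np.random.default_rng(0)
wl = np.linspace(550, 800, 500)
spectrum = gaussians(wl, 1.0, 620, 15, 0.6, 700, 25) + rng.normal(0, 0.02, wl.size)

for n in (1, 2, 3):
    try:
        _, rmse = fit_n_peaks(wl, spectrum, n)
        print(f"{n} peaks: RMSE = {rmse:.4f}")   # the error stops improving after n = 2
    except RuntimeError:
        print(f"{n} peaks: fit did not converge")
```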
A decomposition method for discrete event simulation models is presented, based on the author's DVCompute++ Simulator, a collection of general-purpose C++ programming libraries for creating and running simulation models. The aim of the research was to find an approach by which an arbitrary model could be divided into parts, those parts divided into smaller components, and so on, so that the result is a hierarchy of nested sub-models that can be considered in isolation as independent entities. At present such sub-models are created in C++ code, but in the future they could be created graphically as diagrams or as text written in a specialized modeling language, and the sub-models can be reused, which makes them similar to library units from GPSS STUDIO. These ways of creating sub-models can be combined in any order at any level of the nested hierarchy, and the work can be performed by different people with different skills. Moreover, the article shows that the considered decomposition method can also be applied to distributed simulation, which DVCompute++ Simulator supports as well. All this is possible because the author applies functional programming techniques, in which the simulation model is treated as a composition of computations; model decomposition is then the splitting of computations into parts that can be connected to one another like building blocks. There are two basic kinds of computations: blocks similar to those of the GPSS language, and discrete signal computations similar to reactive programming. Diagrams of sub-models and the corresponding C++ code are provided in the article to illustrate the proposed method of decomposing discrete event simulation models.
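DVCompute++ Simulator itself is a C++ library and its API is not shown here; the short Python sketch below only illustrates the general idea stated in the abstract, that a model is a composition of computations, so a sub-model is a value that can be composed, nested and reused. All names in it are invented.

```python
from functools import reduce

def compose(*sub_models):
    """Combine sub-models into one model; the result is itself a sub-model."""
    return lambda state: reduce(lambda s, m: m(s), sub_models, state)

# Hypothetical sub-models of a queueing fragment, each a state transformer.
def generate(state): return {**state, "arrived": state["arrived"] + 1}
def serve(state):    return {**state, "served": state["served"] + 1}
def collect(state):  return {**state, "in_system": state["arrived"] - state["served"]}

station = compose(generate, serve, collect)   # a reusable nested sub-model
line = compose(station, station)              # sub-models composed into a larger model

print(line({"arrived": 0, "served": 0, "in_system": 0}))
```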
№ 3(105), 16 June 2023. Rubric: Data protection. Authors: Dli M. I., Okunev B., Prokimnov N., Puchkov A.
The article presents the results of a study whose purpose was to build a software model of a multi-stage integrated system for processing finely dispersed ore raw materials. Such raw materials can be the processed waste of mining and processing plants handling apatite-nepheline and other types of ores, which accumulates in large volumes in tailing dumps. These dumps create a significant environmental threat in the territories adjacent to the plants due to weathering, dust formation, and the penetration of chemical compounds and substances hazardous to human health into the soil and aquifers. Therefore, improving existing production processes and developing new technological systems for mining and processing plants, including the application of the principles of the circular economy and waste recycling, justifies the relevance of the chosen research area. The proposed program model is based on trainable trees of fuzzy inference systems (blocks) of the first and second types. This approach avoids the excessive complication of the fuzzy rule bases that arises when a single fuzzy block is used to build a multi-parameter model of the entire multi-stage complex system. Using several fuzzy inference blocks that describe the behavior of individual units of the system, configured in accordance with its physical structure, allows relatively simple rule sets to be used for the individual blocks, while the joint selection of their parameters when training the tree of fuzzy blocks makes it possible to achieve high accuracy of the obtained solutions. The novelty of the research results lies in the proposed software fuzzy model of an integrated system for processing finely dispersed ore raw materials. The results of a simulation experiment conducted in the MatLab environment using a synthetic data set generated in Simulink are presented; they show that the trained fuzzy model reproduces with good fidelity the parameters and variables from the test part of the synthetic set.
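As a toy illustration of the "tree of fuzzy blocks" idea (not the authors' MatLab/Simulink model), the sketch below chains two small zero-order Sugeno-style fuzzy blocks so that the crisp output of the first becomes an input of the second. The rule bases, membership functions and variable names are invented for the example.

```python
def ramp_down(x, a, b):
    """Membership that is 1 below a, 0 above b, linear in between ("low")."""
    return max(0.0, min(1.0, (b - x) / (b - a)))

def ramp_up(x, a, b):
    """Membership that is 0 below a, 1 above b, linear in between ("high")."""
    return max(0.0, min(1.0, (x - a) / (b - a)))

class FuzzyBlock:
    """Zero-order Sugeno block: rules = [((membership fn per input, ...), crisp output)]."""
    def __init__(self, rules):
        self.rules = rules

    def infer(self, *inputs):
        num = den = 0.0
        for mfs, out in self.rules:
            w = min(mf(x) for mf, x in zip(mfs, inputs))   # AND = min
            num, den = num + w * out, den + w
        return num / den if den else 0.0

low  = lambda x: ramp_down(x, 0.2, 0.8)
high = lambda x: ramp_up(x, 0.2, 0.8)

# Block 1: grain size and moisture of the raw material -> heating intensity.
heating = FuzzyBlock([((low, low), 0.3), ((low, high), 0.6),
                      ((high, low), 0.5), ((high, high), 0.9)])
# Block 2: heating intensity and conveyor speed -> predicted product quality.
quality = FuzzyBlock([((low, low), 0.4), ((low, high), 0.2),
                      ((high, low), 0.9), ((high, high), 0.7)])

h = heating.infer(0.3, 0.7)      # crisp output of the first block...
print(quality.infer(h, 0.5))     # ...becomes an input of the second (the "tree")
```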