IT management

Performance management
Line personnel occupy the vast majority of positions in many organizations, which makes the timely and successful filling of such vacancies critically important. Candidates for these positions are found through mass recruitment, a process characterized by high labor intensity, budget and time constraints, and the need for regular repetition due to high staff turnover. These features make the process impractical without modern software support. Since mass recruitment does not require finding the best candidate for each vacancy and is limited to screening specialists against formal criteria drawn from their resumes, the bulk of the labor and time costs falls on the primary selection of candidates. Existing software lacks the functionality to automate this stage effectively: given the large volumes of multidimensional data involved, it provides neither comprehensive accounting of the different types of candidate characteristics nor automatic adjustment of selection criteria according to their priority for the vacancy being filled. To address this problem, an automated method for forming a set of candidates for line positions was developed. It is based on the combined use of an adaptive neuro-fuzzy inference system (ANFIS) and a bioinspired optimization algorithm modeled on the behavior of a fish school. The hybrid method was implemented as a computer program in Python. Testing demonstrated the convergence of the optimization algorithm, and a comparison of its results with manual selection confirmed the method's promise for mass recruitment of line personnel.
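As a rough illustration of the optimization component, the sketch below runs a simplified fish school search (FSS) over vectors of selection-criterion weights. The fitness function, the number of criteria, and all parameters are hypothetical placeholders: in the paper, the quality of a weight vector would presumably come from the ANFIS-based candidate scoring, which is not reproduced here.

```python
# Illustrative sketch only, not the authors' implementation: a simplified
# fish school search (FSS) that tunes the weights of formal selection
# criteria. The fitness function and all parameters are assumed.
import numpy as np

rng = np.random.default_rng(42)

def fitness(w):
    # Hypothetical quality of a candidate ranking produced with criterion
    # weights w; a real system would score rankings via the neuro-fuzzy model.
    target = np.array([0.5, 0.3, 0.15, 0.05])    # assumed "ideal" weights
    return -np.sum((w / w.sum() - target) ** 2)  # higher is better

n_fish, n_iter = 20, 100
step_ind, step_vol = 0.1, 0.05
x = rng.random((n_fish, 4))        # positions = criterion weight vectors
w_fish = np.ones(n_fish)           # "weights" of the fish themselves

for _ in range(n_iter):
    f_old = np.array([fitness(p) for p in x])
    # Individual movement: accept a random step only if fitness improves.
    trial = np.clip(x + rng.uniform(-1, 1, x.shape) * step_ind, 1e-6, 1.0)
    f_new = np.array([fitness(p) for p in trial])
    improved = f_new > f_old
    dx = np.where(improved[:, None], trial - x, 0.0)
    df = np.where(improved, f_new - f_old, 0.0)
    x = x + dx
    if df.max() > 0:
        w_fish += df / df.max()    # feeding: successful fish gain weight
    if df.sum() > 0:
        # Collective-instinctive movement: drift toward successful directions.
        x = np.clip(x + (dx * df[:, None]).sum(0) / df.sum(), 1e-6, 1.0)
    # Collective-volitive movement, simplified here to always contract toward
    # the barycenter (full FSS expands when the school's total weight drops).
    bary = (x * w_fish[:, None]).sum(0) / w_fish.sum()
    x = np.clip(x - step_vol * (x - bary), 1e-6, 1.0)

best = x[np.argmax([fitness(p) for p in x])]
print("tuned criterion weights:", np.round(best / best.sum(), 3))
```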

Software engineering

In this paper, we propose a method for analyzing incomplete and inaccurate data in order to identify factors for predicting mudflow volumes. The analysis is based on the mudflow activity inventory for the south of Russia, which is poorly formalized, has missing values in the mudflow-type field, and requires significant additional processing. Because the cadastral records lack information on mudflow type, the primary objective of the study is to develop and apply a methodology for classifying mudflow types so as to fill in the missing data. For this purpose, a comparative study of machine learning methods was performed, covering neural networks, support vector machines, and logistic regression. The experimental results indicate that the neural network model achieves the highest prediction accuracy among the methods considered; however, the support vector machine demonstrated higher sensitivity for classes represented by only a few examples in the test sample. It was therefore concluded that an integrated approach combining the strengths of both methods is appropriate and can improve overall classification accuracy in this subject area. Forecasting the volume of material removal and clustering the data revealed nonlinear dependencies, incompleteness, and poor structuring even after the missing mudflow-type values were filled in, which required a transition from numerical to categorical data. This transition increased the model's robustness to outliers and noise, allowing a highly accurate forecast of a one-time removal. Since the forecast itself does not reveal the factors influencing its result, an analysis was conducted to identify these factors and present the discovered patterns as logical rules. The logical rules were formed in two ways: by association analysis and by constructing a logical classifier. Association analysis yielded rules that reflect some patterns in the data but, as it turned out, require significant correction. The developed logical methods made it possible to refine and correct the patterns identified by the association rules, which in turn determined the set of factors influencing mudflow volume.
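As a minimal sketch of the comparison described above, the snippet below trains the three model families on synthetic, class-imbalanced data standing in for the cadastral records (which are not reproduced here); the feature counts, class weights, and model settings are assumptions, and per-class recall is printed as a proxy for sensitivity on rare classes.

```python
# Hedged sketch: compare a neural network, an SVM, and logistic regression
# on imbalanced synthetic data imitating mudflow-type classification.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Synthetic stand-in: three mudflow types, one of them rare.
X, y = make_classification(n_samples=1000, n_features=10, n_informative=6,
                           n_classes=3, weights=[0.7, 0.2, 0.1],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "logistic regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(kernel="rbf"),
    "neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                                    random_state=0),
}
for name, model in models.items():
    y_pred = model.fit(X_tr, y_tr).predict(X_te)
    # Per-class recall exposes sensitivity on the rare class.
    print(f"{name}: accuracy={accuracy_score(y_te, y_pred):.3f}, "
          f"recall per class={recall_score(y_te, y_pred, average=None).round(2)}")
```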

Algorithmic efficiency

Probabilistic models for forecasting and assessing the reliability of navigation parameters in intelligent transportation systems are proposed. The relevance of the study stems from the need to enhance the reliability of robotic transportation systems operating in dynamically changing urban environments, where sensor failures, signal distortions, and a high degree of data uncertainty are possible. The proposed approach applies probabilistic analysis and statistical control to detect anomalies in navigation parameters such as coordinates, speed, and orientation. The concept of navigation data reliability is introduced as a quantitative measure of the degree of correspondence between the measured parameters and the actual state of the system. Key validity criteria are defined: confidence probability, significance level, and confidence coefficients. To improve the reliability of parameter estimation, a combination of statistical analysis methods and filtering algorithms is proposed. Forecasting involves preliminary data processing aimed at smoothing noise and verifying data consistency. Outlier detection is performed using statistical methods, including confidence intervals and variance minimization. A forecasting model based on the Kalman filter with dynamic updating of probabilistic estimates has been developed. Integrating these methods into a unified system minimizes the impact of random and systematic errors, ensuring a more accurate assessment of navigation parameters. The proposed approach is applicable to the development of navigation systems for autonomous robots and unmanned vehicles, enabling them to adapt to external conditions without precise a priori data.
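A minimal sketch of the filtering-with-confidence-intervals idea (not the authors' model): a one-dimensional constant-velocity Kalman filter that rejects measurements whose innovation falls outside a 3-sigma confidence interval of the prediction, so that an injected sensor fault does not corrupt the estimate. All noise parameters are assumed values.

```python
# Sketch: Kalman filter with confidence-interval (3-sigma) outlier gating.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])  # state transition: position, velocity
H = np.array([[1.0, 0.0]])             # only position is measured
Q = np.diag([0.01, 0.01])              # assumed process noise
R = np.array([[0.25]])                 # assumed measurement noise variance

def step(x, P, z):
    # Prediction.
    x_pred, P_pred = F @ x, F @ P @ F.T + Q
    # Innovation and its variance define the confidence interval.
    y = z - H @ x_pred
    S = H @ P_pred @ H.T + R
    if abs(y[0, 0]) > 3.0 * np.sqrt(S[0, 0]):
        return x_pred, P_pred            # outlier: skip the measurement update
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    return x_pred + K @ y, (np.eye(2) - K @ H) @ P_pred

rng = np.random.default_rng(1)
x, P = np.array([[0.0], [1.0]]), np.eye(2)
for t in range(1, 21):
    z = np.array([[t + rng.normal(0, 0.5)]])  # true position is t
    if t == 10:
        z += 25.0                      # injected sensor fault
    x, P = step(x, P, z)
print("final position/velocity estimate:", x.ravel().round(2))
```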

Models and methods

Performance reliability indicators characterize the operability of the “test object – test tool” system and depend significantly on the performance reliability parameters of the testing equipment. Consequently, they can serve as criteria for selecting the necessary tools at the design stage of digital devices and for assessing their effectiveness. The paper proposes quantitative criteria for assessing the effectiveness of a hardware testing method, based on the assumption that a digital device computes a certain generalizing function whose values depend on a set of quantities reflecting the device's individual operating modes and can be classified as correct only if the operating device contains no errors. To quantify the performance reliability of the equipment, it is proposed to use the probability that the digital device as a whole functions error-free given that no detectable fault is present. The computed value of this estimate makes it possible to select the best of several possible test circuit options or to synthesize a new one. Cases of organizing test procedures based on various principles and their combinations are considered. An optimization problem of placing test circuits in the device under test is formulated, and a technique for solving it under certain restrictions is proposed. A distinctive feature of the proposed approach is that it eliminates the need for the conditional probabilities of detected faults on which known methods rely, values that are very labor-intensive to obtain in practice. The operation of the method of rational placement of control circuits is illustrated with the example of a control signal block.
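As a toy numerical illustration of selecting among test circuit options by such a conditional probability, the sketch below ranks hypothetical options by P(error-free | no fault detected). Its simple decomposition through a fault probability and a detection coverage is an assumption of this sketch only; the paper's approach specifically avoids relying on conditional fault-detection probabilities.

```python
# Toy illustration (assumed numbers, not the paper's derivation): rank test
# circuit options by P(device error-free | no fault detected).
def reliability_score(p_fault, coverage):
    # P(no alarm) = P(fault-free) + P(faulty) * P(missed | faulty)
    p_no_alarm = (1 - p_fault) + p_fault * (1 - coverage)
    # Bayes: P(fault-free | no alarm)
    return (1 - p_fault) / p_no_alarm

p_fault = 0.02                               # assumed fault probability
options = {"circuit A": 0.80, "circuit B": 0.95, "circuit C": 0.60}
for name, cov in sorted(options.items(),
                        key=lambda kv: -reliability_score(p_fault, kv[1])):
    print(f"{name}: P(error-free | no detection) = "
          f"{reliability_score(p_fault, cov):.4f}")
```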