Human Recognition in a Collaborative Robot-Manipulator Working Area
Based on MobileNetV2 Deep Neural Network in Real Time
Svitlana Maksymova 1, Dmytro Gurin 1, Vladyslav Yevsieiev 1,
Ahmad Alkhalaileh 2
1 Department of Computer-Integrated Technologies, Automation and Robotics,
Kharkiv National University of Radio Electronics, Ukraine
2 Senior Developer Electronic Health Solution, Amman, Jordan
Abstract:
The article deals with the development of a human recognition system for a collaborative robot-manipulator working area based on the MobileNetV2 deep neural network. The purpose of the research is to implement an accurate and fast real-time recognition algorithm to improve safety and work efficiency. Using the MobileNetV2 model makes it possible to achieve high accuracy with minimal resource consumption. The experimental results demonstrate the high reliability of the system under changing lighting and moving obstacles, which opens up new opportunities for integrating recognition into industrial collaborative robots.
Key words:
Industry 5.0, Collaborative Robot, Work Area, Computer Vision
Introduction
Joint, i.e. collaborative, work between humans and robots brings significant benefits, as it expands human capabilities by complementing them with robot capabilities, and vice versa [1]-[17]. It is therefore not surprising that the use of collaborative robots is constantly increasing and expanding, entering new areas of science and technology.
Human recognition in a collaborative robot-manipulator working area is a critical task in the context of the development of Industry 5.0, where the integration of robots into production processes is focused on harmonious interaction with people. In today's manufacturing environments, where workplaces are increasingly filled with intelligent machines, there is a need to ensure safe and effective collaboration between robots and humans [15]-[21]. Various methods and approaches can be used here [22]-[39]. The application of deep neural networks, such as MobileNetV2, for real-time human recognition is becoming a key technology that enables continuous monitoring of the work area, identification of human presence, and analysis of human activity. This research is relevant not only from the point of view of improving safety, but also in the context of the growing importance of personalized production, where work processes are adapted to specific human needs. The use of MobileNetV2 allows for high accuracy and speed of visual data processing, which is a decisive factor in dynamic production environments. Thus, research in this area contributes to the creation of new models of human-robot cooperation, responding to the challenges of Industry 5.0, which focuses on interaction, safety and sustainable production.
Related works
In collaborative work between a robot and a human, the central problem is undoubtedly ensuring safety. A key task here is recognizing a person in the robot's work area. Naturally, many scientific papers are devoted to this problem, and we will consider several of them here.
Fan, J., and others in [40] provide a systematic review of computer vision-based holistic scene understanding in HRC scenarios, which mainly takes into account the cognition of objects, humans, and the environment, along with visual reasoning to gather and compile visual information into semantic knowledge for subsequent robot decision-making and proactive collaboration.
The authors in [41] present a context awareness-based collision-free human-robot collaboration system that can provide human safety and assembly efficiency at the same time. The system can plan robotic paths that avoid colliding with human operators while still reaching target positions in time. Human operators' poses can also be recognized with low computational expense to further improve assembly efficiency.
The scientists in [42] propose a status recognition system to enable the early
execution of robot tasks without human control during the HRC mold assembly
operation.
The study [43] presents automatic facial expression recognition, trained and evaluated on the AffectNet database, to predict the valence and arousal of 48 subjects during an HRC scenario.
Researchers in [44] propose an algorithm for constructing a video descriptor and solve the problem of classifying a set of actions into predefined classes. The proposed algorithm is based on capturing three-dimensional sub-volumes located inside a video sequence patch and calculating the difference in intensities between these sub-volumes.
Wen, X., & Chen, H. [45] consider high-precision and long-timespan sub-assembly recognition. To solve this problem, they propose 3D long-term recurrent convolutional networks (LRCN) that combine 3D convolutional neural networks (CNN) with long short-term memory (LSTM).
Human-Robot Collaboration enabling mechanisms require real-time detection of potential collisions between humans and robots [46]. That article presents a novel vision-based approach for identifying human-robot collisions, in which Artificial Intelligence algorithms classify the captured data in near real time and provide a score for the collision status (contact or non-contact) between the human and the robot.
So we see that the problems in human-robot collaboration are quite diverse. The
approaches to their solution are also diverse. Later in this article, we will propose our
approach to human recognition in the workspace of a collaborative robot.
Mathematical representation of the method of determining key points on the human body for the analysis of postures and movements in real time in the working environment of a robot-manipulator
Let us assume that the input data is a video stream or an image coming from the camera, which we denote as I(t), where t is a moment in time. Each image in the stream has dimension W x H x 3, where W and H are the width and height of the image, respectively, and 3 is the number of RGB channels. We perform pre-processing of the image, including pixel normalization (for example, scaling to the range [0,1] or [-1,1]) and resizing to the given network input dimensions:

I'(t) = f(I(t)), (1)

where I(t) is the input signal, i.e. an image or video frame at the point in time t; f(I(t)) is the preprocessing function applied to the input image I(t). The function f performs the operations necessary to prepare the image for further processing in the neural network; these operations may include pixel normalization (such as scaling pixel values to the range [0,1] or [-1,1]), resizing the image to fixed dimensions, rotation, denoising, etc. I'(t) is the result of the function f, that is, a processed image ready to be fed to the input of the neural network; it is the version of I(t) after all the transformations, performed so that the image is optimally prepared for analysis by the deep neural network.
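As an illustration of expression (1), the following is a minimal preprocessing sketch in Python using OpenCV and NumPy; the 224x224 input resolution and the [-1,1] scaling are assumptions matching the standard MobileNetV2 input, not requirements fixed by the method itself.

import cv2
import numpy as np

def preprocess_frame(frame, size=(224, 224)):
    """Implements f(I(t)): resize and normalize a BGR camera frame for the network."""
    resized = cv2.resize(frame, size)                  # resize to the assumed W x H
    rgb = cv2.cvtColor(resized, cv2.COLOR_BGR2RGB)     # OpenCV delivers BGR, the model expects RGB
    normalized = rgb.astype(np.float32) / 127.5 - 1.0  # scale [0, 255] -> [-1, 1]
    return normalized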
The input to the trained deep neural network (CNN) MobileNetV2 is the processed image I'(t). The output of the network is the set of coordinates of the key points P = {(x_i, y_i, z_i)}, where x_i, y_i are the coordinates of a key point on the image plane and z_i is its depth. The network returns a set of key points P in which each key point p_i corresponds to a specific body element (e.g. shoulder, knee, elbow). Each point p_i is defined by:

p_i = (x_i, y_i, z_i), (2)

where x_i, y_i are normalized to the image dimensions and z_i is an additional parameter for three-dimensional modeling.
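For concreteness, a key point set of this form can be stored in Python as a list of (x_i, y_i, z_i) tuples; the sketch below assumes the Mediapipe pose estimator used later in the article as the source of the normalized coordinates.

def extract_keypoints(results):
    """Collects P = {(x_i, y_i, z_i)} from a Mediapipe pose result."""
    if not results.pose_landmarks:
        return []
    # Each landmark holds x, y normalized to the image size and a relative depth z
    return [(lm.x, lm.y, lm.z) for lm in results.pose_landmarks.landmark]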
Key points P are connected into a skeletal model S, which consists of a set of segments (bones) between key points that represent anatomical landmarks on the human body. Let us describe mathematically how they are combined into a skeletal model.
Let P = {P_1, P_2, ..., P_n} be the set of key points on the human body, where P_i = (x_i, y_i, z_i) are the coordinates of the i-th key point in three-dimensional space. In the case of two-dimensional space, P_i = (x_i, y_i). Then the skeletal model S can be represented as follows:

S = {S_1, S_2, ..., S_m}, (3)

where S_1, S_2, ..., S_m are the segments (bones) of the skeletal model; each segment S_j connects two key points P_aj and P_bj, 1 ≤ aj, bj ≤ n.
According to (3), each segment S_j is a vector connecting the two key points P_aj and P_bj. The segment vector S_j can therefore be written as:

S_j = P_bj - P_aj. (4)
Then for the two-dimensional case:

S_j = (x_bj - x_aj, y_bj - y_aj). (5)
For the three-dimensional case:

S_j = (x_bj - x_aj, y_bj - y_aj, z_bj - z_aj). (6)
The length of the segment S_j, which connects the points P_aj and P_bj, is defined as the Euclidean distance between these points, for the two-dimensional case (7) and the three-dimensional case (8), respectively:

|S_j| = sqrt((x_bj - x_aj)^2 + (y_bj - y_aj)^2), (7)

|S_j| = sqrt((x_bj - x_aj)^2 + (y_bj - y_aj)^2 + (z_bj - z_aj)^2). (8)
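The segment vectors (4)-(6) and lengths (7)-(8) map directly onto a few lines of NumPy; in this minimal sketch the list of index pairs `segments` is a hypothetical example and not the exact connection set used by the system.

import numpy as np

def segment_vectors_and_lengths(keypoints, segments):
    """keypoints: list of (x, y, z); segments: list of (aj, bj) index pairs."""
    pts = np.asarray(keypoints, dtype=np.float32)
    vectors, lengths = [], []
    for aj, bj in segments:
        s_j = pts[bj] - pts[aj]                      # segment vector, eq. (4)-(6)
        vectors.append(s_j)
        lengths.append(float(np.linalg.norm(s_j)))   # Euclidean length, eq. (7)-(8)
    return vectors, lengths

# Example: a single shoulder-elbow bone between key points 11 and 13 (hypothetical indices)
# vectors, lengths = segment_vectors_and_lengths(P, [(11, 13)])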
A complete skeletal model S consists of all segments:

S = {(P_a1, P_b1), (P_a2, P_b2), ..., (P_am, P_bm)}, (9)

where each pair (P_aj, P_bj) represents a connection between the key points P_aj and P_bj through the segment S_j.
Thus, a skeletal model S is created by constructing segments between key points, where each segment represents a part of the body, and together they form a complete skeletal structure.
The skeleton S can also be represented as a graph:

G = (V, E), (10)

where V is the set of vertices (key points) and E is the set of edges connecting these points.
As a result, the human pose can be defined as the set of vectors between connected key points:

Po(t) = {v_ij(t) = p_j(t) - p_i(t) | (i, j) ∈ E}, (11)

where v_ij(t) is the direction and length of the bone in the skeleton.
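A direct translation of (10)-(11) represents the skeleton as an edge list and the pose at time t as a dictionary of edge vectors; the edge list below is a hypothetical fragment given only for illustration.

import numpy as np

# Hypothetical fragment of the edge set E (pairs of key point indices for the arms)
EDGES = [(11, 13), (13, 15), (12, 14), (14, 16)]

def pose_vectors(keypoints, edges=EDGES):
    """Po(t) = {v_ij(t) = p_j(t) - p_i(t) | (i, j) in E}, eq. (11)."""
    pts = np.asarray(keypoints, dtype=np.float32)
    return {(i, j): pts[j] - pts[i] for i, j in edges}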
To track the position of a person relative to the robot's working area, a transformation of coordinates from the coordinate space of the camera to the coordinate system of the robot is used, (X, Y, Z) = Transform(x, y, z); this transformation can include scaling, rotation and translation.
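A common way to realize Transform(x, y, z) is a rigid-body transformation with an optional scale factor; the sketch below assumes a pre-calibrated rotation matrix R, translation vector t and scale s, whose values here are placeholders rather than calibration results from the article.

import numpy as np

# Placeholder calibration: identity rotation, zero translation, unit scale
R = np.eye(3)       # rotation from the camera frame to the robot frame
t = np.zeros(3)     # position of the camera origin in the robot frame
s = 1.0             # scale factor (e.g. normalized units -> meters)

def camera_to_robot(point_cam):
    """(X, Y, Z) = Transform(x, y, z): scale, rotate and translate a camera-frame point."""
    p = np.asarray(point_cam, dtype=np.float64)
    return s * (R @ p) + t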
The relative position of the human with respect to the robot can be defined as the distance between the center of the skeletal model:

C_p(t) = (1/|P|) * Σ_{i=1}^{|P|} p_i(t), (12)

and the robot:

d(t) = sqrt((X_r - X_c(t))^2 + (Y_r - Y_c(t))^2 + (Z_r - Z_c(t))^2), (13)

where (X_r, Y_r, Z_r) are the coordinates of the robot and (X_c(t), Y_c(t), Z_c(t)) are the coordinates of the skeleton center C_p(t).
If the distance d(t) is less than a certain threshold d_min, the system can activate a protective mechanism or signal a danger.
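Expressions (12)-(13) and the threshold check translate into the short sketch below; the robot position `robot_xyz` and the threshold `d_min` are illustrative assumptions, and the key points are taken to be already expressed in the robot coordinate system, for example via `camera_to_robot` from the previous sketch.

import numpy as np

def skeleton_center(keypoints_robot_frame):
    """C_p(t): mean of all key points, eq. (12)."""
    return np.mean(np.asarray(keypoints_robot_frame, dtype=np.float64), axis=0)

def human_robot_distance(keypoints_robot_frame, robot_xyz):
    """d(t): Euclidean distance between the skeleton center and the robot, eq. (13)."""
    center = skeleton_center(keypoints_robot_frame)
    return float(np.linalg.norm(center - np.asarray(robot_xyz, dtype=np.float64)))

# Illustrative safety check with assumed values
# robot_xyz = (0.0, 0.0, 0.0)  # robot base position (placeholder)
# d_min = 0.8                  # safety threshold in meters (placeholder)
# if human_robot_distance(points, robot_xyz) < d_min:
#     stop_robot_or_signal_danger()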
Using deep neural networks such as MobileNetV2 makes it possible to determine the key points on the human body in real time, analyze the person's posture, and transmit these data to the robot to ensure safe and efficient interaction. Mathematical models, such as pose vectors and decision functions, allow this data to be integrated into the robot's control system, ensuring optimal performance in a dynamic environment.
Software implementation of the method of determining key points on the human body for analysis of postures and movements in real time
The choice of the Python language for the development of a human recognition program in a collaborative robot-manipulator working area based on the MobileNetV2 deep neural network is due to its wide support for machine learning libraries, in particular TensorFlow and Keras, which allow easy integration and training of complex models. Python has a simple and understandable syntax, which accelerates the development and testing of algorithms, especially in the context of a fast-changing environment where it is necessary to respond quickly to new challenges. PyCharm was
chosen as a development environment due to its advanced features for working with
Python, including support for integration with version control systems, convenient
debugging, automatic code completion, and extensive customization options. PyCharm
also supports powerful tools for working with machine learning libraries, making it
ideal for developing software that requires high performance and reliability in real-time
environments.
Based on the considered mathematical models of the method of determining key points on the human body for the analysis of postures and movements in real time, the following general algorithm of the program was developed, which is presented in Figure 1.
Based on the general algorithm (Fig. 1) and the considered mathematical models (1)-(13) of the method of determining key points on the human body for the analysis of postures and movements in real time, a program was developed; some of its functions are described below.
# Mediapipe initialization for position recognition
import mediapipe as mp

mp_pose = mp.solutions.pose
mp_drawing = mp.solutions.drawing_utils
This code snippet initializes the Mediapipe library for human pose recognition. In particular, it loads the mp_pose module, which is responsible for detecting body key points, and mp_drawing, which provides the tools to render these points on an image or video. This is necessary for further processing and display of the human skeletal model in real time.
pose = mp_pose.Pose(
    static_image_mode=False,
    min_detection_confidence=0.5,
    min_tracking_confidence=0.5
)
This code snippet creates an instance of the `Pose` class from the Mediapipe
library, configuring it to recognize human poses in real-time. The parameter
`static_image_mode=False` indicates that the model will work with the video stream
and not with individual static images. The parameters `min_detection_confidence=0.5`
and `min_tracking_confidence=0.5` set the minimum confidence level for keypoint
detection and tracking to ensure reliable position recognition and tracking.
Figure 1: The general algorithm of the program for determining key points on the human body. The flowchart includes the following steps: initialization of the environment (import of the necessary libraries: TensorFlow, Keras, OpenCV and others; loading of the pre-trained MobileNetV2 model); video capture (initialization of a video stream from a camera or other source); frame processing (capturing a frame from the video stream and pre-processing it: resizing, normalization); human recognition (predicting the presence of a person using the MobileNetV2 model, determination of key points and construction of the skeleton model); analysis of the human position (checking whether the person is in a safe zone and responding to dangerous situations, for example by stopping the operation of the robot); visualization of results (displaying the frame with superimposed key points and skeleton, together with a safety status message). The loop repeats while there is a next frame; otherwise the program finishes.
rgb_frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
results = pose.process(rgb_frame)
This piece of code converts the `frame` image from the BGR format (used by OpenCV) to the RGB format required by Mediapipe. The converted image is then processed using the `pose.process` function, which detects key points on the human body. The result is stored in the `results` variable for further analysis or visualization.
if results.pose_landmarks:
    # Drawing points and lines on the image
    mp_drawing.draw_landmarks(
        image=frame,
        landmark_list=results.pose_landmarks,
        connections=mp_pose.POSE_CONNECTIONS,
        landmark_drawing_spec=mp_drawing.DrawingSpec(color=(0, 255, 0), thickness=2, circle_radius=2),
        connection_drawing_spec=mp_drawing.DrawingSpec(color=(255, 0, 0), thickness=2)
    )
This piece of code checks whether body landmarks (`pose_landmarks`) have been found in the image. If so, these points and the connections between them are drawn on the `frame` image using the `mp_drawing.draw_landmarks` function. The `DrawingSpec` parameters define the color, line thickness, and circle radius of the points for rendering the skeletal model on the image. An example of the program's work is shown in Figure 2.
Figure 2: User interface of the human recognition program in a collaborative
robot-manipulator working area
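For completeness, a minimal sketch of how the fragments above can be combined into a single processing loop is given below; the camera index 0, the window name and the exit key are illustrative assumptions and do not reproduce the exact user interface shown in Figure 2.

import cv2
import mediapipe as mp

mp_pose = mp.solutions.pose
mp_drawing = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # assumed local camera
with mp_pose.Pose(static_image_mode=False,
                  min_detection_confidence=0.5,
                  min_tracking_confidence=0.5) as pose:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break  # no next frame, finish
        results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.pose_landmarks:
            mp_drawing.draw_landmarks(frame, results.pose_landmarks,
                                      mp_pose.POSE_CONNECTIONS)
        cv2.imshow("Human recognition", frame)
        if cv2.waitKey(1) & 0xFF == ord('q'):  # assumed exit key
            break
cap.release()
cv2.destroyAllWindows()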
Based on the developed program, several experiments can be conducted that will
reveal various aspects of human recognition in a collaborative robot-manipulator
working area:
- testing the reliability of pose recognition in different lighting conditions, which
makes it possible to change the level of illumination in the working area and observe
how it affects the accuracy of identifying key points and building a skeletal model. This
will allow us to assess how resistant the system is to changes in external conditions;
- an experiment with different positions and viewing angles, which makes it
possible to check how well the system recognizes human poses when the person is at
different angles relative to the camera or the robot. This will help determine the
limitations of the algorithm in recognizing partially visible or distorted poses;
- the influence of the number of people in the frame on the accuracy of
recognition, which makes it possible to introduce several people into the working area
and check how the system copes with the recognition of poses and their correct
identification. This is important for assessing whether the system will be able to
function correctly in conditions where several people are present at the same time;
- analysis of the reaction time to the detection of a person, which makes it
possible to measure the delay between the appearance of a person in the frame and the
moment when the system successfully recognizes his pose. This will help assess
whether the system can respond to potential threats in real time.
These experiments will reveal the strengths and weaknesses of the human
recognition system and identify areas for further improvement to ensure the safety and
efficiency of the robot manipulator in real production conditions.
As a result of the conducted experiments, the following results were obtained, which are given in Tables 1-4.
Table 1: Reliability testing of pose recognition in different lighting conditions

Lighting level (lux) | Percentage of successful recognition (%) | Number of key points not detected
100 | ~95 | 2
300 | ~98 | 1
500 | ~99 | 0
800 | ~97 | 1
1000 | ~95 | 3
1500 | ~90 | 5
2000 | ~87 | 7
Table 2: Experiment with different positions and viewing angles

Viewing angle (degrees) | Percentage of successful recognition (%) | Number of undetected items
0 (forward) | ~98 | 1
30 | ~95 | ~2-3
60 | ~90 | 3
90 | ~85 | ~3-4
120 | ~79 | 6-7
150 | ~73 | 8
180 (backward) | ~70 | ~8-10
Table 3: Effect of the number of people in the frame on recognition accuracy

Number of people in the frame | Percentage of successful recognition (%) | Cases of mistaken identity
1 | ~99 | 0
2 | ~95 | 1
3 | ~90 | ~1-2
4 | ~85 | ~2-3
5 | ~80 | 3
Table 4: Analysis of human detection reaction time

Distance to the camera (meters) | Average response time (ms)
0.5 | ~51
1 | ~64
1.5 | ~72
2.0 | ~85
2.5 | ~93
3 | ~104
Based on the results obtained in Table 1, it can be concluded that the reliability of pose recognition depends on the level of illumination. Under moderate lighting conditions (300-1000 lux), the system demonstrates high accuracy, with a minimal number of undetected key points. However, under excessive lighting (1500 lux and above), the recognition accuracy decreases, which indicates the vulnerability of the algorithm to bright light sources that can create glare or dazzle the camera.
Table 2 shows that the system has certain limitations when recognizing poses at
different viewing angles. At a straight angle (0 degrees), pose recognition is most
effective, but as the angle increases, the accuracy gradually decreases, especially at
angles over 90 degrees. This indicates that the algorithm is less effective at recognizing
partially visible or distorted poses, which can be critical in real-world collaborative
robot environments where operators may be in non-standard positions.
According to Table 3, the system shows high performance in recognizing the
poses of one or two people in the frame, but the increase in the number of people leads
to a decrease in accuracy and an increase in cases of false identification. This may
indicate that the system has limitations in working with multiple people at the same
time, which can be a problem in complex production environments with many
personnel.
The analysis of the reaction time for human detection in Table 4 shows that the
system works quickly when the object is close to the camera, but the increase in the
distance leads to an increase in the delay. This can affect the system's ability to respond
to potentially dangerous situations in a timely manner, which is important in the context
of working with a collaborative robot, where prompt detection and response to human
presence is critical to safety.
For the convenience of analyzing and comparing the obtained data, they are presented in the form of a combined graph in Figure 3.
Figure 3: Combined graph comparing the results of the conducted experiments.
The combined graph (Fig. 3) compares different aspects of accuracy and response time under different conditions. The graph shows the following:
- accuracy versus lighting level: shows how accuracy decreases as illumination increases beyond the optimal range;
- accuracy versus viewing angle: illustrates the decrease in accuracy as the viewing angle between the camera and the person increases;
- accuracy versus number of people: shows how accuracy is affected by more people being present in the frame;
- response time versus distance: demonstrates how the response time increases as the distance between the camera and the person increases.
This visualization helps to understand the trade-offs and performance
characteristics of a human recognition system in different environmental scenarios.
Conclusion
The article presents the development of a human recognition system in a collaborative robot-manipulator working area, based on the use of a deep neural network, in particular MobileNetV2, for real-time analysis. The main goal of the research was to create highly accurate and effective algorithms for human identification, which allow the robot to interact safely and accurately with the operator or the objects around them. In the implementation, the MobileNetV2 model was used, which is noted for its lightweight design and processing speed owing to its optimized deep convolutional architecture. This made it possible to achieve high recognition accuracy with low consumption of computing resources, which is critical for real-time operation.
Experiments have shown that the system is able to effectively identify a person under different lighting conditions and in the presence of obstacles, which indicates its high reliability. A detailed analysis of tracking speed and detection accuracy was also carried out, which confirmed the possibility of implementing such solutions in industrial conditions. The implementation of this technology opens up new opportunities for the development of collaborative robots, increasing their level of safety and interactivity in work environments. Thus, the developed system is an effective solution for integrating real-time human recognition into collaborative robots, which contributes to the improvement of interaction between people and robots and also improves the safety and productivity of work processes.
References:
1. Kuzmenko, O., et al. (2024). Robot Model For Mines Searching Development. Multidisciplinary Journal of Science and Technology, 4(6), 347-355.
2. Yevsieiev, V., et al. (2024). Object Recognition and Tracking Method in the Mobile Robot's Workspace in Real Time. Technical science research in Uzbekistan, 2(2), 115-124.
3. Samoilenko, H., et al. (2024). Review for Collective Problem-Solving by a Group of Robots. Journal of Universal Science Research, 2(6), 7-16.
4. Bortnikova, V., et al. (2019). Structural parameters influence on a soft robotic manipulator finger bend angle simulation. In 2019 IEEE 15th International Conference on the Experience of Designing and Application of CAD Systems (CADSM), IEEE.
5. Gurin, D., et al. (2024). Using Convolutional Neural Networks to Analyze and Detect Key Points of Objects in Image. Multidisciplinary Journal of Science and Technology, 4(9), 5-15.
6. Yevsieiev, V., et al. (2024). The Canny Algorithm Implementation for Obtaining the Object Contour in a Mobile Robot's Workspace in Real Time. Journal of Universal Science Research, 2(3), 7-19.
7. Baker, J. H., Laariedh, F., Ahmad, M. A., Lyashenko, V., Sotnik, S., & Mustafa, S. K. (2021). Some interesting features of semantic model in Robotic Science. SSRG International Journal of Engineering Trends and Technology, 69(7), 38-44.
8. Sotnik, S., Mustafa, S. K., Ahmad, M. A., Lyashenko, V., & Zeleniy, O. (2020). Some features of route planning as the basis in a mobile robot. International Journal of Emerging Trends in Engineering Research, 8(5), 2074-2079.
9. Matarneh, R., Maksymova, S., Deineko, Z., & Lyashenko, V. (2017). Building robot voice control training methodology using artificial neural net. International Journal of Civil Engineering and Technology, 8(10), 523-532.
10. Nevliudov, I., Yevsieiev, V., Lyashenko, V., & Ahmad, M. A. (2021). GUI Elements and Windows Form Formalization Parameters and Events Method to Automate the Process of Additive Cyber-Design CPPS Development. Advances in Dynamical Systems and Applications, 16(2), 441-455.
11. Lyashenko, V., Abu-Jassar, A. T., Yevsieiev, V., & Maksymova, S. (2023). Automated Monitoring and Visualization System in Production. International Research Journal of Multidisciplinary Technovation, 5(6), 9-18.
12. Abu-Jassar, A. T., Attar, H., Lyashenko, V., Amer, A., Sotnik, S., & Solyman, A. (2023). Access control to robotic systems based on biometric: the generalized model and its practical implementation. International Journal of Intelligent Engineering and Systems, 16(5), 313-328.
13. Al-Sharo, Y. M., Abu-Jassar, A. T., Sotnik, S., & Lyashenko, V. (2023). Generalized Procedure for Determining the Collision-Free Trajectory for a Robotic Arm. Tikrit Journal of Engineering Sciences, 30(2), 142-151.
14. Ahmad, M. A., Sinelnikova, T., Lyashenko, V., & Mustafa, S. K. (2020). Features of the construction and control of the navigation system of a mobile robot. International Journal of Emerging Trends in Engineering Research, 8(4), 1445-1449.
15. Gurin, D., et al. (2024). MobileNetv2 Neural Network Model for Human Recognition and Identification in the Working Area of a Collaborative Robot. Multidisciplinary Journal of Science and Technology, 4(8), 5-12.
16. Abu-Jassar, A., et al. (2023). Obstacle Avoidance Sensors: A Brief Overview. Multidisciplinary Journal of Science and Technology, 3(5), 4-10.
17. Funkendorf, A., et al. (2019). Mathematical Model of Adapted Ultrasonic Bonding Process for MEMS Packaging. In 2019 IEEE XVth International Conference on the Perspective Technologies and Methods in MEMS Design (MEMSTECH), IEEE, 79-82.
18. Gurin, D., et al. (2024). Using the Kalman Filter to Represent Probabilistic Models for Determining the Location of a Person in Collaborative Robot Working Area. Multidisciplinary Journal of Science and Technology, 4(8), 66-75.
19. Yevsieiev, V., et al. (2024). Building a traffic route taking into account obstacles based on the A-star algorithm using the python language. Technical Science Research In Uzbekistan, 2(3), 103-112.
20. Gurin, D., et al. (2024). Effect of Frame Processing Frequency on Object Identification Using MobileNetV2 Neural Network for a Mobile Robot. Multidisciplinary Journal of Science and Technology, 4(8), 36-44.
21. Bortnikova, V., et al. (2019). Mathematical model of equivalent stress value dependence from displacement of RF MEMS membrane. In 2019 IEEE XVth International Conference on the Perspective Technologies and Methods in MEMS Design (MEMSTECH), IEEE, 83-86.
22. Al-Sharo, Y. M., Abu-Jassar, A. T., Sotnik, S., & Lyashenko, V. (2021). Neural networks as a tool for pattern recognition of fasteners. International Journal of Engineering Trends and Technology, 69(10), 151-160.
23. Abu-Jassar, A. T., Al-Sharo, Y. M., Lyashenko, V., & Sotnik, S. (2021). Some Features of Classifiers Implementation for Object Recognition in Specialized Computer Systems. TEM Journal: Technology, Education, Management, Informatics, 10(4), 1645-1654.
24. Ahmad, M. A., Baker, J. H., Tvoroshenko, I., Kochura, L., & Lyashenko, V. (2020). Interactive Geoinformation Three-Dimensional Model of a Landscape Park Using Geoinformatics Tools. International Journal on Advanced Science, Engineering and Information Technology, 10(5), 2005-2013.
25. Baranova, V., Zeleniy, O., Deineko, Z., & Lyashenko, V. (2019, October). Stochastic Frontier Analysis and Wavelet Ideology in the Study of Emergence of Threats in the Financial Markets. In 2019 IEEE International Scientific-Practical Conference Problems of Infocommunications, Science and Technology (PIC S&T) (pp. 341-344). IEEE.
26. Al-Sherrawi, M. H., Lyashenko, V., Edaan, E. M., & Sotnik, S. (2018). Corrosion as a source of destruction in construction. International Journal of Civil Engineering and Technology, 9(5), 306-314.
27. Lyashenko, V., Ahmad, M. A., Sotnik, S., Deineko, Z., & Khan, A. (2018). Defects of communication pipes from plastic in modern civil engineering. International Journal of Mechanical and Production Engineering Research and Development, 8(1), 253-262.
28. Lyashenko, V. V. (2007). Interpretation and analysis of statistical data describing the processes of economic dynamics. Business Inform, 9(2), 108-113. (in Russian)
29. Sliunina, T. L., Berezhnyi, Ye. B., & Lyashenko, V. V. (2007). Development of the domestic network of banking institutions: features and regional aspects. Bulletin of V. N. Karazin Kharkiv National University. Economic Series, 755, 84-88. (in Ukrainian)
30. Kuzemin, A., Lyashenko, V., Bulavina, E., & Torojev, A. (2005). Analysis of movement of financial flows of economical agents as the basis for designing the system of economical security (general conception). In Third international conference «Information research, applications, and education» (pp. 27-30).
31. Lyubchenko, V., et al. (2016). Digital image processing techniques for detection and diagnosis of fish diseases. International Journal of Advanced Research in Computer Science and Software Engineering, 6(7), 79-83.
32. Lyashenko, V. V., Matarneh, R., Kobylin, O., & Putyatin, Y. P. (2016). Contour Detection and Allocation for Cytological Images Using Wavelet Analysis Methodology. International Journal, 4(1), 85-94.
33. Drugarin, C. V. A., Lyashenko, V. V., Mbunwe, M. J., & Ahmad, M. A. (2018). Pre-processing of Images as a Source of Additional Information for Image of the Natural Polymer Composites. Analele Universitatii 'Eftimie Murgu', 25(2).
34. Lyubchenko, V., Veretelnyk, K., Kots, P., & Lyashenko, V. (2024). Digital image segmentation procedure as an example of an NP-problem. Multidisciplinary Journal of Science and Technology, 4(4), 170-177.
35. Abu-Jassar, A., Al-Sharo, Y., Boboyorov, S., & Lyashenko, V. (2023, December). Contrast as a Method of Image Processing in Increasing Diagnostic Efficiency When Studying Liver Fatty Tissue Levels. In 2023 2nd International Engineering Conference on Electrical, Energy, and Artificial Intelligence (EICEEAI) (pp. 1-5). IEEE.
36. Tahseen, A. J. A., et al. (2023). Binarization Methods in Multimedia Systems when Recognizing License Plates of Cars. International Journal of Academic Engineering Research (IJAER), 7(2), 1-9.
37. Abu-Jassar, A. T., Attar, H., Amer, A., Lyashenko, V., Yevsieiev, V., & Solyman, A. (2024). Remote Monitoring System of Patient Status in Social IoT Environments Using Amazon Web Services (AWS) Technologies and Smart Health Care. International Journal of Crowd Science.
38. Abu-Jassar, A. T., Attar, H., Amer, A., Lyashenko, V., Yevsieiev, V., & Solyman, A. (2024). Development and Investigation of Vision System for a Small-Sized Mobile Humanoid Robot in a Smart Environment. International Journal of Crowd Science.
39. Yevstratov, M., Lyubchenko, V., Abu-Jassar, A., & Lyashenko, V. (2024). Color correction of the input image as an element of improving the quality of its visualization. Technical science research in Uzbekistan, 2(4), 79-88.
40. Fan, J., et al. (2022). Vision-based holistic scene understanding towards proactive human-robot collaboration. Robotics and Computer-Integrated Manufacturing, 75, 102304.
41. Liu, H., & Wang, L. (2021). Collision-free human-robot collaboration based on context awareness. Robotics and Computer-Integrated Manufacturing, 67, 101997.
42. Liau, Y. Y., & Ryu, K. (2021). Status recognition using pre-trained YOLOv5 for sustainable human-robot collaboration (HRC) system in mold assembly. Sustainability, 13(21), 12044.
43. Dinges, L., et al. (2021). Using facial action recognition to evaluate user perception in aggravated HRC scenarios. In 2021 12th International Symposium on Image and Signal Processing and Analysis (ISPA), IEEE, 195-199.
44. Zhdanova, M., et al. (2020). Human activity recognition for efficient human-robot collaboration. In Artificial Intelligence and Machine Learning in Defense Applications II, SPIE, 11543, 94-104.
45. Wen, X., & Chen, H. (2020). 3D long-term recurrent convolutional networks for human sub-assembly recognition in human-robot collaboration. Assembly Automation, 40(4), 655-662.
46. Makris, S., & Aivaliotis, P. (2022). AI-based vision system for collision detection in HRC applications. Procedia CIRP, 106, 156-161.
