
Wireless vision-based robots built on a digital media fixed-point DSP processor for natural calamities

Published online by Cambridge University Press:  15 March 2024

S. Mary Joans*
Affiliation:
Department of Electronics and Communication Engineering, Velammal Engineering College, Anna University, Chennai, Tamil Nadu, India
N. Gomathi
Affiliation:
Department of Computer Science and Engineering, Vel Tech University, Chennai, Tamil Nadu, India
P. Ponsudha
Affiliation:
Department of Electronics and Communication Engineering, Velammal Engineering College, Anna University, Chennai, Tamil Nadu, India
*
Corresponding author: S. Mary Joans; Email: [email protected]

Abstract

Natural calamities affect many parts of the world: earthquakes, wildfires, floods, terrorist attacks, and other unpredicted phenomena. Disasters create emergency conditions, so it is imperative to coordinate the prompt delivery of essential services to the victims. Disasters often trap people inside collapsed structures, and many more perish because rescue arrives too late or not at all. This manuscript proposes the design and implementation of a receiver module utilizing the Davinci code processor DVM6437, a wireless camera receiver, a Zigbee transceiver, and the Global Positioning System (GPS) for wireless vision-based semi-autonomous rescue robots employed in rough terrain. The Zigbee transceiver module on the receiver side eliminates the limitations of tele-operating rescue robots by enabling the control station to receive GPS data signals, and it aids robot management by sending control signals wirelessly. The Davinci processor DVM6437, a digital media fixed-point DSP processor based on Very Long Instruction Words (VLIW), supports half- and full-duplex communication and includes an extensive instruction set that is well suited to real-time salvage operations. The DVM processor is programmed using MATLAB Simulink; MATLAB code and Simulink blocks are deployed through the Embedded IDE link.

Type
Research Article
Copyright
© The Author(s), 2024. Published by Cambridge University Press

1. Introduction

Rescue robotics is a young field compared to robotics as a whole. Its objective is "to provide rescuers in the operational areas of disaster," that is, events caused by the environment or by humans [Reference Chen, Shao and Zhu1–Reference Machaiah and Akshay4]. Rescue robots allow operators to access difficult conditions [Reference Dinh, Vishnevsky, Le, Kirichek and Koucheryavy5–Reference Behairy, El-Rahman, Aly, Fahmy and Abd-Elhakim7]. A rescue robot visually examines and maps the interior of a collapsed building, inspects for damage, places acoustic or seismic sensors for monitoring the situation, or rapidly removes rubble to facilitate victim extraction [Reference Çoban, Scaparra and O’Hanley8–Reference Azpúrua, Rezende, Potje, Júnior, Fernandes, Miranda and Freitas10]. Rescue robots are deployed around the world for mine accidents, earthquakes, etc. [Reference Tan, Guo, Mohanarajah and Zhou11, Reference Nosirov, Shakhobiddinov, Arabboev, Begmatov and Togaev12]. The need for these robots throughout the whole catastrophe life cycle is anticipated to rise as a result of the increasing effects of both man-made and natural disasters [Reference Wildani, Mardiati, Mulyana and Setiawan13, Reference Edlinger, Zauner and Zauner14]. Disaster robots include unmanned ground vehicles that carry various sensors [Reference Narayan, Aquif, Kalim, Chagarlamudi and Vignesh15–Reference Alam, Saiam, Al-Mamun, Rahman and Hany17], unmanned aerial vehicles (UAVs) that may deliver aerial support for disaster response operations [Reference Kamegawa, Akiyama, Sakai, Fujii, Une, Ou and Gofuku18–Reference Ravendran, Ponpai, Yodvanich, Faichokchai and Hsu20], and unmanned marine vehicles that can perform underwater inspections [Reference Baldemir, İyigün, Musayev and Cenk21]. Though most of these robots are controlled by humans, semi-autonomous systems reduce the amount of continuous human control required [Reference Ullah, Mahmood and Garg22, Reference Avinash23].

Many disaster victims die because rescue is delayed. Some researchers and educational personnel have consequently become more concerned with developing robots for rough terrain, particularly for rescue missions. These rescue robots are capable of executing their tasks in dangerous and high-risk places; they can provide images of the environment and specify the location of the victims. Existing methods do not provide sufficient detection of humans during natural calamities, which motivates this research work.

Wireless communication uses a few strategies to offer reliable communications and is made to be tolerant of radio frequency interference. These methods comprise Carrier Sense Multiple Access with Collision Avoidance, message acknowledgments, and alternative paths. ZigBee is the nearest competitor among wireless communication technologies. Even though the competing technology promises a significantly higher data rate (1 Mbps vs. 250 kbps), ZigBee provides a greater transfer range and is designed for lower power consumption. ZigBee's main drawback in modular robotics applications is that it works with small networks and needs a central coordinator; wireless communication does not have this limitation. Hence, wireless vision-based semi-autonomous rescue robots are chosen for natural disaster detection.

Natural disasters are rare and unavoidable events that disrupt society's economic and social balance. Nowadays, there is a growing awareness among people about intelligent rescue measures in such disasters, which can save precious life and property but cannot prevent disasters. There have been numerous recent tragedies across many different regions. Disasters have a terrible impact and do not distinguish between humans and inanimate objects, resulting in significant loss of property and life. When a person is buried in the wreckage, it is challenging to detect them.

Those who are buried and injured can be saved only by a timely rescue. In such situations, the proposed wireless vision-based semi-autonomous rescue robots make rapid decisions under pressure and exert maximal effort to rescue sufferers and place them in a safe location. As soon as possible, the rescue system must gather information on the victims' status, the integrity of the structures, and their location so that medics and firefighters can reach the calamity region and protect lives. Most of these tasks are completed by humans and trained canines in extremely hazardous and perilous circumstances. Because the damaged region is large, detection by tele-operated rescue robots takes longer and is more challenging. Rescue robots have long been recommended to assist responders and carry out duties that humans, dogs, and current equipment cannot.

In this manuscript, the detection of natural calamities using artificial vision on rescue robots is proposed. The design and execution of a receiver module with the Davinci DVM6437 code processor, a Zigbee transceiver, and GPS for semi-autonomous rescue robots based on wireless vision is presented. The Davinci DVM6437 code processor is a widely preferred digital media processor for receiver module construction. The Zigbee transceiver module allows the control station to receive GPS data signals. The GPS helps the controller by providing the latitude and longitude of the robot's location. Finally, the wireless camera receiver on the receiver module receives the audio and video information of the natural calamities captured by the camera.

The major contributions of this manuscript are summarized below:

  • The implemented wireless vision-based semi-autonomous rescue robots are robust to the environmental conditions of natural calamities.

  • The DVM6437 code processor is a capable, low-power, and cost-effective solution for localization in large crowded areas, owing to its high localization accuracy in natural calamity environments.

  • GPS is useful for localization over wide areas because of its long transmission range and low energy requirements, though it performs best outdoors.

  • The Zigbee transceiver module has a comparably low energy demand but performs significantly better during natural calamities.

The remainder of the manuscript is structured as follows: Section 2 reviews the literature, Section 3 illustrates the proposed technique, Section 4 presents the results, Section 5 provides a discussion, and Section 6 concludes.

2. Literature review

Numerous research works have been presented on the detection of natural calamities utilizing rescue robots; certain recent works are reviewed below:

Punith et al. [Reference Punith, Sumanth and Savadatti24] have presented Internet rescue robots for disaster management. The development of a new generation of search and rescue robots that operate in semi-autonomous and wireless modes, under the challenging physical conditions of disaster regions, made the assigned missions more successful by utilizing cutting-edge and affordable sensors. The project overview and economical sensors clarify the problem domain of earthquake disasters and search and rescue operations. Disasters threaten economic and social stability, and the most pressing issues during an emergency are the lack of qualified rescue workers and the risks associated with search and rescue efforts.

Alam et al. [Reference Alam, Ahmed, Islam and Chowdhury25] have presented a prototype of a multi-functional rescue robot utilizing wireless communication. The prototype aimed to enter dangerous regions, such as collapsed structures and terrorist attack zones, collect various data, and send them through wireless communication technology. The primary function of such a robot was to assist the rescue crew by supplying numerous details that would have been risky and difficult for a human to gather. The rescue approach was divided into three main components: Quadcopter, Rover, and Nanobot. Every component was equipped with a live-action GoPro camera, whose output was readily accessible in the observation room. Mapping was done from above by the quadcopter. Data collection and rubble removal were done by the rover, which has a mechanical claw. The nanobot searches for holes inside the rubble and has a thermal camera to check whether any living people are trapped there.

Dong et al. [Reference Dong, Ota and Dong26] have presented a UAV-based real-time survivor identification scheme for post-disaster search and rescue operations. A thermal image dataset was recorded using drones, and a number of deep convolutional neural networks, including YOLOV3, YOLOV3-MobileNetV1, and YOLOV3-MobileNetV3, were trained to identify survivors. Owing to the onboard microcomputer's restricted processing capacity and memory, the optimal layers at which to fine-tune the survivor identification network were discovered based on convolutional-layer sensitivity. NVIDIA's Jetson TX2 was used and attained 26.60 frames per second. A real-time survivor identification scheme was deployed on a DJI Matrice 210 with a Manifold 2-G to offer rescue services after a calamity.

Saputra et al. [Reference Saputra, Rakicevic, Kuder, Bilsdorfer, Gough, Dakin and Kormushev27] have presented an improved design of a mobile rescue robot with an inflatable neck-securing device for safe casualty extraction. The mobile rescue robot was created to help first responders rescue victims from hazardous situations by completing a casualty extraction procedure while guaranteeing that no additional injuries or lives are endangered. The ResQbot 2.0 mobile rescue robot is structured to perform casualty extraction tasks.

Habibian et al. [Reference Habibian, Dadvar, Peykari, Hosseini, Salehzadeh, Hosseini and Najafi28] have presented the design and implementation of a maxi-sized mobile robot (Karo) for rescue missions. Karo is a mobile robot that moves rapidly while retaining the agility and exploring abilities needed for urban search and rescue tasks. The Rescue Robot League's standard rescue robot system requirements were clarified, and Karo was proposed by developing a locomotion and manipulation system. The study provides detailed mechanical designs for the robot's platform, 7-DOF manipulator, and manipulation scheme, along with thorough design methods. The power system, sensor system, and hardware systems of the robot were found to be helpful in the design and operation of the command and control scheme.

Dumbre et al. [Reference Dumbre, Jadhav, Mahajan and Sorte29] have presented the design and fabrication of a semi-autonomous search and rescue robot utilizing a rocker-bogie mechanism. The main aspect of the rocker-bogie design is drive-train simplicity, as it uses only motors for mobility. Motors were placed inside the casing, where thermal vibration is minimal and efficiency is increased. Such robots use six wheels, as some obstacles on the terrain require both front wheels of the robot to climb. Many experiments were carried out on agricultural land and rough roads, and the results of all tests showed that the rocker-bogie was capable of long field traverses.

3. Proposed methodology

In this manuscript, wireless vision-based semi-autonomous rescue robots for natural disasters are proposed. The block diagram of the hardware model is given in Fig. 1. A detailed discussion of the wireless vision-based semi-autonomous rescue robots for natural disasters is given below.

Figure 1. Block diagram of hardware model.

3.1. Davinci code processor DVM6437

The DVM6437 is a widely preferred digital media processor for receiver module construction. It belongs to Texas Instruments' enhanced third-generation, high-performance, high-speed fixed-point DSP family for digital media applications and is among the highest-performance fixed-point DSP generations. The DVM6437 also provides hardware-specific logic and on-chip memory. The DVM6437 core utilizes a two-level cache architecture. The first level includes the level 1 program (L1P) and level 1 data (L1D) memories. L1P encompasses a memory capacity of 256 Kbit, and L1D comprises 640 Kbit, of which 384 Kbit is fixed memory and 256 Kbit may be organized as memory or as a two-way set-associative cache. At the second level, the level 2 memory/cache (L2) has 1 Mbit of space.

The DVM6437 includes a video processing subsystem with two configurable image/video peripherals: the Video Processing Front End (VPFE) input and the Video Processing Back End (VPBE) output. The VPFE is organized around a charge-coupled device (CCD) controller, a preview engine, a resizer, etc. The resizer accepts image data and performs horizontal and vertical resizing between $1/4\times$ and $4\times$ in increments of $256/N$, where $N$ ranges from 64 to 1024. Hence, the receiver module is constructed around the Davinci code processor DVM6437. Figure 2 portrays the functional block diagram of the DVM6437.

Figure 2. DVM 6437 functional block diagram.

3.2. Zigbee transceiver

Data are sent and received between the robot and the control unit using Zigbee. Zigbee is a digital wireless communication protocol and a family of higher-level communication protocols that utilize low-power digital radios. Zigbee is used in radio frequency (RF) applications requiring low data rates, extended battery life, and secure networks. The transmission range is 10 to 75 m (up to 1500 m for Zigbee Pro), and the power output of the radios is typically 0 dBm. Figure 3 shows the Zigbee transceiver for robot control.

Figure 3. Zigbee transceiver for robot control.

Because of its low power limits, transmission distances are only 10 to 100 m line of sight. By relaying information across a mesh network, ZigBee devices can communicate data over great distances and reach even the most remote locations. ZigBee is most often utilized in low data rate applications, which extends battery life, and it protects networks with 128-bit symmetric encryption keys. ZigBee has a raw data rate of 250 kbit/s, which is well suited to sensors and input devices.

The Zigbee transceiver provides wireless program download from the PC to the control board, two-way wireless control, and robot monitoring for robot control. PCs that lack an RS232 communication port connect via a commercially available USB-to-RS232 adapter. In a terminal emulator, pressing individual keys on the keyboard controls the robot: F = Forward, B = Back-up, L = Turn-Left, R = Turn-Right, A = Accelerate, D = Decelerate, and S = Stop. The user may directly observe the robot.
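As a rough illustration of this keyboard-driven protocol, the MATLAB sketch below sends single-character drive commands over the transceiver's serial link. The port name, baud rate, and the firmware's interpretation of the raw bytes are assumptions for illustration, not specifics from this design.

```matlab
% Minimal sketch: single-key drive commands over the Zigbee serial link.
% "COM4" and 9600 baud are placeholder values for the USB-to-RS232 adapter.
s = serialport("COM4", 9600);
cmds = containers.Map( ...
    {'F','B','L','R','A','D','S'}, ...
    {'Forward','Back-up','Turn-Left','Turn-Right','Accelerate','Decelerate','Stop'});
key = 'F';                          % key pressed in the terminal emulator
if isKey(cmds, key)
    write(s, uint8(key), "uint8");  % robot firmware decodes the raw byte
    fprintf("Sent '%c' -> %s\n", key, cmds(key));
end
clear s                             % releases the serial port
```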

3.3. Global Positioning System (GPS)

The GPS receiver is applied to track the position of the robot; it is utilized to calculate the correct latitude and longitude of the robot on the ground. To determine location as well as synchronized time, the GPS satellites transmit radio signals that are picked up by GPS receivers on the Earth's surface. Navigation messages contain ephemeris data, utilized to compute the location of every satellite. The original GPS has two ranging codes: the coarse/acquisition (C/A) code and the precision (P) code, the latter of which is generally reserved for military applications.
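Typical GPS receivers report position as NMEA sentences; the sketch below parses latitude and longitude from a $GPGGA sentence. The example sentence and field layout follow the standard NMEA 0183 convention and are illustrative, not taken from this system's hardware.

```matlab
% Parse latitude/longitude from an NMEA $GPGGA sentence (illustrative data).
line = "$GPGGA,041557.00,1305.1234,N,08013.5678,E,1,08,0.9,12.3,M,,,,*47";
f = split(line, ",");
latRaw = double(f(3));  lonRaw = double(f(5));
% NMEA packs degrees and minutes together as ddmm.mmmm / dddmm.mmmm
lat = floor(latRaw/100) + mod(latRaw, 100)/60;
lon = floor(lonRaw/100) + mod(lonRaw, 100)/60;
if f(4) == "S", lat = -lat; end     % sign from hemisphere indicators
if f(6) == "W", lon = -lon; end
fprintf("Robot position: %.5f, %.5f\n", lat, lon);
```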

The C/A code is a bi-phase modulated signal with a $1.023\,{\rm MHz}$ chip rate; every chip is $977.5\,{\rm ns}$ ($1/1.023\,{\rm MHz}$) long. The transmitted C/A code has several side lobes. The C/A code repeats every 1 ms, and GPS C/A signals belong to the family of pseudorandom noise (PRN) codes called Gold codes. The signals are created as the product of two 1023-bit PRN sequences, $G1$ and $G2$, each produced by a 10-bit shift register. The feedback tap positions define the form of the output sequence, and the last bit of each shift register is its output.
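A compact MATLAB sketch of this generator is given below. The G1 feedback taps (3, 10), the G2 feedback taps (2, 3, 6, 8, 9, 10), and the all-ones initial state are the standard C/A-code parameters; the G2 phase-selector taps (2, 6) shown here correspond to PRN 1 and differ per satellite (an assumption based on the standard tap table, not a detail of this design).

```matlab
% Generate one 1023-chip C/A Gold code (phase-selector taps for PRN 1).
g1 = ones(1, 10);  g2 = ones(1, 10);   % both registers start all ones
ca = zeros(1, 1023);
for k = 1:1023
    g2out = xor(g2(2), g2(6));         % delayed G2 via the phase selector
    ca(k) = xor(g1(10), g2out);        % Gold-code chip = G1 output XOR G2'
    g1 = [xor(g1(3), g1(10)), g1(1:9)];                           % G1 feedback
    g2 = [mod(g2(2)+g2(3)+g2(6)+g2(8)+g2(9)+g2(10), 2), g2(1:9)]; % G2 feedback
end
```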

The precision (P) code is a long sequence of binary digits that repeats every 266 days. It is also 10 times faster than the C/A code, that is, 10.23 Mbps. Multiplying the time taken for the P-code to repeat by its chipping rate shows that the P-code has approximately $2.35\times 10^{14}$ chips. The long code is split into 38 segments, 32 of which are allocated to individual GPS satellites. A GPS satellite is typically identified by its individual one-week segment of P-code, because every satellite transmits its own one-week segment; for example, a GPS satellite with PRN ID 20 is the satellite allocated the 20th one-week segment. The P-code is principally structured for military applications.
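As a quick check of the chip count quoted above:

$$N_{\rm chips} = 10.23\times 10^{6}\,\frac{\rm chips}{\rm s}\times 266\ {\rm days}\times 86{,}400\,\frac{\rm s}{\rm day}\approx 2.35\times 10^{14}\ {\rm chips}.$$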

The GPS navigation message is a data stream modulated onto the L1 and L2 carriers by binary bi-phase modulation at a low rate of 50 bps. It consists of 25 frames of 1,500 bits each, totalling 37,500 bits, so the transfer of the whole navigation message takes 750 s, or 12.5 min. From it, the GPS receiver obtains satellite clock corrections, satellite health status, etc. Every satellite broadcasts its own navigation message, with information about the other satellites.
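The message length and bit rate are consistent with the quoted transfer time:

$$\frac{25\times 1{,}500\ {\rm bits}}{50\ {\rm bits/s}} = \frac{37{,}500\ {\rm bits}}{50\ {\rm bits/s}} = 750\ {\rm s} = 12.5\ {\rm min}.$$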

3.4. Wireless camera

This receiver is utilized to deliver live data of the inspected site to the PC. Wireless data transfer is utilized for transmitting audio and video. The camera is necessary for reconnaissance and surveillance of the area and for seeing objects in front of the robot to aid in circumnavigating debris. The wireless camera is powered by a 5 V battery and is connected to the acrylic plate in the most suitable position. Several software components are essential to execute the wireless vision-fed rescue robot receiver module, including the MATLAB Simulink set of image and video processing blocks and the Code Composer Studio (CCS) integrated development environment. Figure 4 shows the block diagram of the software model.

Figure 4. Block diagram of software model.

The Simulink model is built in MATLAB with the aid of the Simulink Library Browser. The software model for natural disaster detection includes Image File, Video Viewer, and other blocks. Input images are displayed with the Video Viewer block. The Image File block is essentially utilized to access the two input images to be subtracted. The To Workspace block is utilized for storing clocked images in the MATLAB workspace and is programmed with MATLAB commands. CCS creates the program within the TI software development environment; it accelerates and improves development for building and testing embedded signal processing applications in real time, delivering tools to configure, build, debug, trace, and analyze programs.
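For reference, the MATLAB sketch below mirrors the Simulink subtraction path: two images are read, their absolute difference is displayed, and the result is pushed to the base workspace, much like a To Workspace block. The file names are placeholders, not assets from this work.

```matlab
% MATLAB equivalent of the Simulink image-subtraction path (placeholder files).
a = imread("scene_before.png");       % first input image
b = imread("scene_after.png");        % second input image
d = imabsdiff(a, b);                  % pixel-wise absolute difference
imshow(d); title("Difference image"); % analogous to the Video Viewer block
assignin("base", "diffImage", d);     % analogous to the To Workspace block
```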

4. Results and Discussion

Edge detection, people tracking, histogram display, and motion tracking are described in the sections below.

4.1. Edge detection

The Prewitt operator finds edges using the Prewitt approximation to the derivative; it marks edges at the points where the image gradient is maximal. The Canny method finds edges by searching for local maxima of the image gradient.

Figure 5 shows that the edge detection module produces a binary image with white borders; this output appears in the Border window. From the Original window, the compositing module accepts the original video frames along with the edge-detection output as inputs, and the result is a composite image in the Overlay window, where the white border values overwrite the original pixel values.
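A MATLAB equivalent of this chain is sketched below under a placeholder input frame: the Prewitt edge map plays the role of the Border window, and white edge pixels are written over the original frame as in the Overlay window.

```matlab
% Edge detection and overlay, mirroring Fig. 5 (placeholder input file).
rgb   = imread("frame.png");
edges = edge(rgb2gray(rgb), "prewitt");  % binary edge map ("Border" window)
overlay = rgb;
overlay(repmat(edges, [1 1 3])) = 255;   % white borders overwrite pixels
imshowpair(edges, overlay, "montage");   % Border and Overlay side by side
```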

Figure 5. Block and results for edge detection.

Figure 6. Block and results for people tracking.

4.2. People tracking

Figure 6 shows the block that detects and tracks people in a video stream with a stationary background using the process below (a MATLAB sketch follows the list):

  1. Utilize the initial frames of the video to estimate the background imagery.

  2. Segment the pixels corresponding to people.

  3. Cluster the pixels belonging to individual people and compute a proper bounding box for each.
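The sketch below implements these three steps in MATLAB under simplifying assumptions: a placeholder video file, a median over the first 30 frames as the background estimate, and hand-picked threshold and minimum-blob-size values.

```matlab
% Background estimation, segmentation, and per-person bounding boxes.
v = VideoReader("hall.avi");                 % placeholder video file
n = 30;  stack = zeros(v.Height, v.Width, n);
for k = 1:n                                  % step 1: background estimate
    stack(:,:,k) = im2gray(im2double(readFrame(v)));
end
bg = median(stack, 3);
frame = im2gray(im2double(readFrame(v)));    % next frame to analyze
fg = imabsdiff(frame, bg) > 0.15;            % step 2: person pixels (assumed threshold)
fg = bwareaopen(fg, 200);                    % discard small noise clusters
stats = regionprops(fg, "BoundingBox");      % step 3: one box per cluster
imshow(frame); hold on
for s = stats'
    rectangle("Position", s.BoundingBox, "EdgeColor", "y");
end
```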

4.3. Histogram display

Figure 7 shows the block and results for the histogram display. Here, the block shows the R, G, and B histograms along with the original RGB video. An image histogram is a graphical representation of an image's tonal distribution; an observer can determine the tonal distribution at a single glance.
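A per-channel MATLAB version of this display is sketched below for a placeholder RGB frame; imhist plots each channel's 256-bin tonal distribution.

```matlab
% R, G, and B histograms of one frame, mirroring the Fig. 7 display.
rgb = imread("frame.png");                   % placeholder input file
subplot(3,1,1); imhist(rgb(:,:,1)); title("R channel");
subplot(3,1,2); imhist(rgb(:,:,2)); title("G channel");
subplot(3,1,3); imhist(rgb(:,:,3)); title("B channel");
```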

4.4. Motion tracking

The sum of absolute differences (SAD) method is a common technique for detecting motion in video processing. This module applies the SAD process to up to four quadrants per video sequence.
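The measure itself reduces to a few lines of MATLAB, sketched below for two placeholder frames; the motion threshold is an assumed value that would be tuned per scene.

```matlab
% Sum of absolute differences between consecutive frames.
prev = im2double(im2gray(imread("frame1.png")));  % placeholder files
curr = im2double(im2gray(imread("frame2.png")));
sad  = sum(abs(curr - prev), "all");              % the SAD statistic
motionDetected = sad > 50;                        % assumed threshold
fprintf("SAD = %.1f, motion = %d\n", sad, motionDetected);
```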

Figure 7. Block and results for histogram display.

Figure 8 shows the block and results for motion detection. Double-clicking the switch block connects the signal to the sum-of-absolute-differences path, and the Video Viewer shows the SAD value, that is, the summed absolute difference between the current image and the previous image.

Figure 8. Block and results for motion detection.

Figure 9 shows the angular position, velocity, and acceleration curves of the wireless vision-based semi-autonomous rescue robots for a natural disaster.

Figure 9. Angular position, velocity, and acceleration curves of the wireless vision-based semi-autonomous rescue robot.

Figure 10 shows the circuit board of the model, with the Zigbee system acting as the transmitting mechanism for the control signals and the GPS data to the PC. The wireless camera continuously sends images to the tuner connected to the system, which can thus be interfaced with MATLAB and utilized for image and video processing.

Figure 10. Circuit board of model.

Figures 11 and 12 show the simulation results of the wireless vision-based semi-autonomous rescue robots for a natural disaster. Here, the performance metrics robot coordinates detection accuracy and robot coordinates detection time are analyzed and compared with existing methods: WV-APFR-NC [Reference Imteaj, Chowdhury, Farshid and Shahid30], WV-MR-NC [Reference Hilario, Balbuena, Penaloza, Hernandez-Carmona, Quiroz, Ramirez and Cuellar31], and the remote-controlled rescue robot using ZigBee communication (WV-RCRR-NC) [Reference Yang, Clemente, Laffréchine, Heinzlef, Serre and Barroca34].

Figure 11. Performance of robot coordinates detection accuracy.

Figure 12. Performance of robot coordinates detection time.

Figure 11 represents the performance of robot coordinates detection accuracy. Here, the proposed WV-SARR-NC method is compared with the existing WV-APFR-NC, WV-MR-NC, and WV-RCRR-NC models. At a scanning step of 1 degree, the proposed WV-SARR-NC method provides 44.13%, 21.12%, and 37.74% higher detection accuracy than WV-APFR-NC, WV-MR-NC, and WV-RCRR-NC, respectively. At a scanning step of 2 degrees, it provides 55.23%, 28.36%, and 44.39% higher detection accuracy, respectively. At a scanning step of 3 degrees, it delivers 19.98%, 37.01%, and 52.20% higher detection accuracy, respectively. At a scanning step of 4 degrees, it provides 45.74%, 61.51%, and 39.96% higher detection accuracy, respectively. At a scanning step of 5 degrees, it provides 37.59%, 29.47%, and 51.12% higher detection accuracy than the same three methods, respectively [Reference Nazarova and Zhai32].

Figure 12 represents the performance of robot coordinates detection time. Here, the proposed WV-SARR-NC system is compared with the existing WV-APFR-NC, WV-MR-NC, and WV-RCRR-NC methods. At a scanning step of 1 degree, the proposed WV-SARR-NC method provides 71.59%, 36.41%, and 11.33% lower detection time than WV-APFR-NC, WV-MR-NC, and WV-RCRR-NC, respectively. At a scanning step of 2 degrees, it provides 29.63%, 41.53%, and 19.27% lower detection time, respectively. At a scanning step of 3 degrees, it provides 19.22%, 51.31%, and 69.52% lower detection time, respectively. At a scanning step of 4 degrees, it provides 47.53%, 11.33%, and 24.35% lower detection time, respectively. At a scanning step of 5 degrees, it provides 11.23%, 37.20%, and 41.32% lower detection time than the same three methods, respectively [33].

The primary problems of communication capacity, data transmission, real-time detection, and related issues can be resolved through wireless networks. A wireless network is made up of numerous battery-operated micro-sensor nodes with tiny volume, low cost, and good interoperability. Since wireless networks have a unique ability to disperse wireless signals, it is simple to detect anyone using them. Here, weather parameters including humidity, temperature, and visibility are received via sensors. The rescue robot operates in earthquake- and disaster-prone areas and aids in locating victims and operating rescue systems while recognizing living and injured persons. Early detection during natural disasters can prevent significant loss of life and preserve priceless lives, even without the aid of numerous rescue missions. The proposed design comprises the rescue robot and a PC control module. The rescue robot has a sensor unit, microcontroller, camera unit, motor driver unit, and transmission unit. The sensor unit and the microcontroller are connected directly: the sensor devices send data to the microcontroller while monitoring current values, and the controller circuit transmits this information. The controllers are structured at the hardware level. Through a standard serial port, the 2.4 GHz RF module interfaces with the microcontroller. The PC/server updates this data so that the rescue crew may monitor the readings and also use the camera to identify motion and view the disaster's urgent situation live.

Microcontroller unit:

The PIC16F877A is used as the microcontroller. Due to their affordability, accessibility, widespread user base, ease of programming for specific applications, availability of free and low-cost development tools, and capability for serial programming (re-programming via flash memory), PICs are popular with both industrial developers and hobbyists. The microcontroller collects data from the sensor unit in real time, compares it to a set point (a safe temperature level), then sends the related data to the control room CPU. It receives instructions from the CPU and forwards them to the robot unit for movement. The microcontroller is the core of the surveillance robot.
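On the control-room side, the monitoring loop amounts to reading sensor packets from the serial link and comparing them against the set point. The sketch below assumes, purely for illustration, that the RF modem appears as a serial port and that the firmware sends lines of the form "T=<degC>,G=<ppm>"; neither detail is specified in this design.

```matlab
% Control-room monitoring sketch (port name, line format, and set point
% are illustrative assumptions).
s = serialport("COM5", 9600);
setpointT = 45;                           % assumed safe temperature (degC)
raw  = char(readline(s));                 % e.g. 'T=52.1,G=310'
vals = sscanf(raw, 'T=%f,G=%f');          % [temperature; gas reading]
if vals(1) > setpointT
    fprintf("ALERT: temperature %.1f degC exceeds the set point\n", vals(1));
end
```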

Sensor unit:

A sensor is essentially a detector and converter that first measures a physical quantity and then converts it into a signal that can be read by a device or an observer. In this project, four sensing modalities are considered: visibility, human presence, gas, and temperature, for which Light Dependent Resistor (LDR), Passive Infrared (PIR), MQ-7, and LM35 sensors are employed. Additionally, a metal sensor is used to identify the appearance of suspicious materials during the salvage process. The wireless vision-based semi-autonomous rescue robot's lights flash when the LDR senses low visibility, for example when the robot goes underground. The LM35 covers a range of 0–110 °C. The PIC16F877A receives the real-time values that the sensors detect and compares them with the set points. This sensor data aids thorough awareness of the environment in disaster areas.

Camera module:

A web camera is mounted on the robot, and its video signal is sent to the receiver. The camera module broadcasts video coverage of the paths, making it easier for the rescue team to map the route to be taken. High-range cameras must be utilized in real-time applications to obtain good clarity and area coverage. Since obstacles in the robot's path can be observed and necessary action taken, the camera prevents the robot from falling into a pit and extends the robot's life in the disaster area, and it allows the robot to be watched in real time.

Robot driver unit:

The robot driver unit places the highest priority on the movements of the robot along its x and y axes. Conveyor-belt (track) technology helps the robot navigate obstacles and rough terrain. Two 200 rpm DC motors drive the wireless vision-based semi-autonomous rescue robot's wheels. The robot drives forward when both wheels receive positive pulse edges; it moves backward when the supply is reversed, that is, when both wheels receive negative pulse edges. Left and right turns are accomplished by adjusting the negative and positive edges. An L293D IC selects the supply for each motor. As a result, the robot moves forward, moves backward, and turns left and right.

Transmission unit:

This unit is employed to transfer data. Here, a 2.4 GHz RF modem operating in free bandwidth is used; to reduce wireless interference and improve system security, a separate secure frequency is considered. The wireless vision-based semi-autonomous rescue robot unit has a transmitter attached to it, whose purpose is to receive data from the microcontroller and transfer it to the receiver located in the control chamber. The weather parameters and real-time video of the affected region are shown on a TV in the control room. Wireless communication from the rescue robot to the control room is used to determine the precise location of a human. In an emergency, the rescue expert team and doctors are dispatched to the victim's location for immediate assistance.

If the Passive Infrared (PIR) sensor identifies a motion signal in the surroundings, the control program instructs the camera to show that area. When a human is found in the disaster region, the wireless vision-based semi-autonomous rescue robot broadcasts its present location and displays live vision to the rescue team. While the rescue robot moves, the PIR sensor continuously checks for motion; if motion is detected, the buzzer emits a beep, and every other sensor and the camera are turned on. Otherwise, the robot continues moving to check for motion. If a person is found, the control room can watch them live. Figure 13 shows the output of the detection as displayed on the screen.

Figure 13. Output of the detection as displayed on the screen.
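The patrol logic described above can be summarized as the loop below. This is MATLAB-style pseudologic only: every sensor and actuator function named here (pirMotionDetected, soundBuzzer, and so on) is a hypothetical placeholder for the firmware routines.

```matlab
% Pseudologic for the PIR-driven patrol loop (all helpers are hypothetical).
while true
    if pirMotionDetected()      % PIR senses surrounding motion
        soundBuzzer();          % beep on detection
        enableAllSensors();     % wake the remaining sensors
        startCameraStream();    % control room watches the area live
        reportLocationGPS();    % broadcast the present location
    else
        continuePatrol();       % keep moving and scanning for motion
    end
end
```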

Testing of the robot

Operation of the proposed wireless vision-based semi-autonomous rescue robot is guided through the remote controller. An Arduino activates the control activities in the robotic scheme. The robot communicates with the operator to receive control events; the operator can choose what to do once the rescue robot has transmitted environmental data from its sensory modules. This procedure continues each time a decision to act is made. The procedure from design to realization of the proposed robot is genuinely difficult, and functional testing is vital for evaluating the performance of the entire system. Figure 14 depicts the robot being tested in an area that simulates a simplified catastrophe zone.

Figure 14. The proposed Wireless Vision-based Semi-Autonomous Rescue Robot in testing environment.

The testing environment contains:

  • A doll used to simulate the victim

  • Narrow mazes that simulate the narrow paths through which the robot navigates

  • Obstacles that simulate complex and uneven terrain

  • Miniature obstacles to evaluate the robot's manipulation skills

The human operator guides the robot to the target using visual cues from the wireless camera, with a child emulated by a hot water bottle, as shown in Fig. 15. Figure 16 shows that the robot succeeded in locating the doll that represented the victim. Table I shows the specifications of the proposed wireless vision-based semi-autonomous rescue robot.

Figure 15. The proposed wireless vision-based robot searching for the victim (doll).

Figure 16. The proposed wireless vision-based robot identifying the victim (doll).

5. Discussion

The proposed wireless vision-based semi-autonomous rescue robots maximize regularity, quality, efficacy, security, and productivity. They operate continuously in hazardous zones without fatigue, are significantly more accurate than a person, and can do multiple jobs at once. A robot designed specifically with saving humans in mind is called a rescue robot; it is employed in hostage situations, explosions, urban calamities, and mining mishaps, and it is capable of entering unreachable places. It is developed to carry out operations including excavating, reconnaissance, mapping, garbage removal, delivery of goods, rendering medical assistance, and transporting injured people. The search and rescue robots were brought to disaster-prone areas right away and performed flawlessly. Although the system showed 100% efficiency, we assume that disruptions due to environmental constraints and natural disturbances could introduce a 2% error, that is, 98% accuracy in severe weather. The search and rescue robot is a complex, fascinating, and cutting-edge concept; in many unfortunate circumstances, such a system saves thousands of lives, and rescuers participating in such operations also face a reduced risk to their own lives.

Future developments will use 360-degree technology. A wireless camera might capture a larger, clearer image of the disaster area for enhanced rescue efforts and can be used to locate unconscious victims. The robot could be made fully autonomous instead of needing manual control; this lessens the number of humans required to control the robot and expedites search and rescue operations. Converted into an amphibious vehicle, the robot could perform more dynamic rescue missions both on land and in the water. With more sensors, such as those that identify hazardous events, rescuers can be made aware of a variety of environmental circumstances, improving their comprehension of the operation. Table II shows the comparison of the proposed and existing methods.

Table I. Specifications of the proposed Wireless Vision-based Semi-Autonomous Rescue Robot.

Table II. Comparison of proposed and existing methods.

6. Conclusions

In this manuscript, a wireless vision-based semi-autonomous rescue robot for natural disasters is successfully implemented. The receiver module is implemented with the Davinci code processor DVM6437, a wireless camera receiver, a ZigBee transceiver, and GPS. The ZigBee transceiver on the receiver side enables the control center to receive GPS data signals and lets the robot receive control signals wirelessly, thus minimizing the shortcomings of tele-operating rescue robots. The Davinci DVM6437 is a digital media fixed-point digital signal processor based on VLIW that supports half-duplex as well as full-duplex communication. These rescue robots diminish human labor and increase access to inaccessible areas during natural as well as man-made disasters. Here, the performance metrics robot coordinates detection accuracy and robot coordinates detection time are analyzed. The proposed method attains 95.63%, 91.12%, and 93.58% lower robot coordinates detection time compared with the existing WV-APFR-NC, WV-MR-NC, and WV-RCRR-NC methods, respectively.

Author contribution:

S. Mary Joans (Corresponding Author) – Conceptualization, Methodology, Original draft preparation

N. Gomathi – Supervision

P. Ponsudha – Supervision

Financial support

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Competing interests

Authors declare that they have no competing interests.

Ethical standards

This article does not contain any studies with human participants performed by any of the authors.

References

1. Chen, S. W., Shao, J. Q. and Zhu, H. S., "Technology of internet of things responding to natural disasters," EAI Endorsed Trans. Internet of Things 7(26), e2 (2021).
2. Ma, J., Cheng, J. C., Jiang, F., Gan, V. J., Wang, M. and Zhai, C., "Real-time detection of wildfire risk caused by powerline vegetation faults using advanced machine learning techniques," Adv. Eng. Inform. 44, 101070 (2020).
3. Choden, Y., Raj, M., Wangchuk, C., Singye, P. and Muramatsu, K., "Remote Controlled Rescue Robot Using ZigBee Communication," In: 2019 IEEE 5th International Conference for Convergence in Technology (I2CT), IEEE (2019) pp. 1–5.
4. Machaiah, M. D. and Akshay, S., "IoT based human search and rescue robot using swarm robotics," Int. J. Eng. Adv. Technol. 8(5), 1797–1801 (2019).
5. Dinh, T. D., Vishnevsky, V., Le, D. T., Kirichek, R. and Koucheryavy, A., "Determination of Subscribers Coordinates Using Flying Network for Emergencies," In: 2021 23rd International Conference on Advanced Communication Technology (ICACT), IEEE (2021) pp. 1–10.
6. Yang, Q. and Parasuraman, R., "Needs-Driven Heterogeneous Multi-Robot Cooperation in Rescue Missions," In: IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), IEEE (2020) pp. 252–259.
7. Behairy, A., El-Rahman, G. I. A., Aly, S. S., Fahmy, E. M. and Abd-Elhakim, Y. M., "Di (2-ethylhexyl) adipate plasticizer triggers hepatic, brain, and cardiac injury in rats: Mitigating effect of Peganum harmala oil," Ecotox. Environ. Saf. 208, 111620 (2021).
8. Çoban, B., Scaparra, M. P. and O'Hanley, J. R., "Use of OR in earthquake operations management: A review of the literature and roadmap for future research," Int. J. Disast. Risk Reduct. 65, 102539 (2021).
9. Battistuzzi, L., Recchiuto, C. T. and Sgorbissa, A., "Ethical concerns in rescue robotics: A scoping review," Ethics Inf. Technol. 23(4), 863–875 (2021).
10. Azpúrua, H., Rezende, A., Potje, G., Júnior, G. P. D. C., Fernandes, R., Miranda, V. and Freitas, G. M., "Towards semi-autonomous robotic inspection and mapping in confined spaces with the EspeleoRobô," J. Intell. Robot. Syst. 101, 1–27 (2021).
11. Tan, L., Guo, J., Mohanarajah, S. and Zhou, K., "Can we detect trends in natural disaster management with artificial intelligence? A review of modeling practices," Nat. Hazards 107(3), 2389–2417 (2021).
12. Nosirov, K. K., Shakhobiddinov, A. S., Arabboev, M., Begmatov, S. and Togaev, O. T., "Specially designed multi-functional search and rescue robot," Bull. TUIT Manag. Comm. Technol. 2(1), 1–5 (2020).
13. Wildani, F., Mardiati, R., Mulyana, E. and Setiawan, A. E., "Semi-Autonomous Navigation Robot Using Integrated Remote Control and Fuzzy Logic," In: 2021 7th International Conference on Wireless and Telematics (ICWT), IEEE (2021) pp. 1–5.
14. Edlinger, R., Zauner, G. and Zauner, M., "Hazmat label recognition and localization for rescue robots in disaster scenarios," Electron. Imag. 31(7), 1–6 (2019).
15. Narayan, S., Aquif, M., Kalim, A. R., Chagarlamudi, D. and Vignesh, M. H., "Search and Reconnaissance Robot for Disaster Management," In: Machines, Mechanism and Robotics: Proceedings of iNaCoMM 2019, Springer Singapore (2022) pp. 187–201.
16. Soma, P., Jatoth, R. K. and Nenavath, H., "Implementation of Single Image De-hazing System on DSP TMS320C6748 Processor," In: Soft Computing: Theories and Applications: Proceedings of SoCTA 2018 (Springer Singapore, Singapore, 2020) pp. 405–415.
17. Alam, M. N., Saiam, M., Al-Mamun, A., Rahman, M. M. and Hany, U., "A Prototype of Multi Functional Rescue Robot Using Wireless Communication," In: 2021 5th International Conference on Electrical Engineering and Information Communication Technology (ICEEICT), IEEE (2021) pp. 1–5.
18. Kamegawa, T., Akiyama, T., Sakai, S., Fujii, K., Une, K., Ou, E. and Gofuku, A., "Development of a separable search-and-rescue robot composed of a mobile robot and a snake robot," Adv. Robot. 34(2), 132–139 (2020).
19. Nosirov, K., Begmatov, S. and Arabboev, M., "Analog Sensing and Leap Motion Integrated Remote Controller for Search and Rescue Robot System," In: 2020 International Conference on Information Science and Communications Technologies (ICISCT), IEEE (2020) pp. 1–5.
20. Ravendran, A., Ponpai, P., Yodvanich, P., Faichokchai, W. and Hsu, C. H., "Design and development of a low cost rescue robot with environmental adaptability," In: 2019 International Conference on System Science and Engineering (ICSSE), IEEE (2019) pp. 57–61.
21. Baldemir, Y., İyigün, S., Musayev, O. and Cenk, U. L. U., "Design and development of a mobile robot for search and rescue operations in debris," Int. J. Appl. Math. Electron. Comput. 8(4), 133–137 (2020).
22. Ullah, K., Mahmood, T. and Garg, H., "Evaluation of the performance of search and rescue robots using T-spherical fuzzy Hamacher aggregation operators," Int. J. Fuzzy Syst. 22(2), 570–582 (2020).
23. Avinash, L. G., "Robotic search and rescue using human detection system," Turk. J. Comput. Math. Educ. 12(1S), 162–169 (2021).
24. Punith, K. M., Sumanth, S. and Savadatti, M. A., "Internet rescue robots for disaster management," Int. J. Wir. Microw. Technol. 11(2), 13–23 (2021).
25. Alam, S. S., Ahmed, T., Islam, M. S. and Chowdhury, M. M. F., "A smart approach for human rescue and environment monitoring autonomous robot," Int. J. Mech. Eng. Robot. Res. 10(4), 209–215 (2021).
26. Dong, J., Ota, K. and Dong, M., "UAV-based real-time survivor detection system in post-disaster search and rescue operations," IEEE J. Miniaturizat. Air Space Syst. 2(4), 209–219 (2021).
27. Saputra, R. P., Rakicevic, N., Kuder, I., Bilsdorfer, J., Gough, A., Dakin, A. and Kormushev, P., "ResQbot 2.0: An improved design of a mobile rescue robot with an inflatable neck securing device for safe casualty extraction," Appl. Sci. 11(12), 5414 (2021).
28. Habibian, S., Dadvar, M., Peykari, B., Hosseini, A., Salehzadeh, M. H., Hosseini, A. H. and Najafi, F., "Design and implementation of a maxi-sized mobile robot (Karo) for rescue missions," Robomech J. 8(1), 1–33 (2021).
29. Dumbre, S., Jadhav, Y., Mahajan, S. and Sorte, M., "Design and fabrication semi-autonomous search and rescue robot using rocker bogie mechanism," Int. J. Mech. Dynam. Anal. 7(2), 34–38 (2021).
30. Imteaj, A., Chowdhury, M. I. J., Farshid, M. and Shahid, A. R., "RoboFI: Autonomous Path Follower Robot for Human Body Detection and Geolocalization for Search and Rescue Missions Using Computer Vision and IoT," In: 1st International Conference on Advances in Science, Engineering and Robotics Technology (ICASERT), IEEE (2019) pp. 1–6.
31. Hilario, J., Balbuena, J., Penaloza, C., Hernandez-Carmona, D., Quiroz, D., Ramirez, J. and Cuellar, F., "Late Breaking Report: Development of a Mobile Robot for Industrial Plants Inspections Using Computer Vision," In: 2019 IEEE International Symposium on Safety, Security, and Rescue Robotics (SSRR), IEEE (2019) pp. 108–109.
32. Nazarova, A. V. and Zhai, M., "The Application of Multi-Agent Robotic Systems for Earthquake Rescue," In: Robotics: Industry 4.0 Issues & New Intelligent Control Paradigms (2020) pp. 133–146.
33. "Development of two rescue robots for disaster relief operations in narrow debris," (4), 399–406 (2015).
34. Yang, Z., Clemente, M. F., Laffréchine, K., Heinzlef, C., Serre, D. and Barroca, B., "Resilience of social-infrastructural systems: Functional interdependencies analysis," Sustainability 14(2), 606 (2022).