Abstract

In this digital age there is growing interest in robotics among students of virtually all ages, from the toy robot cars used by preschoolers to the high-tech robots used by teenagers and young adults, such as the Sphero BOLT, Grillbot, and many others available on the market. In order to spark the interest of youngsters in science, technology, engineering, and math (STEM) fields, and to serve robotics enthusiasts, while keeping learning simple and intuitive rather than complex, this cost-effective mobile robot, called the IKPEZE Robot, was designed and developed. It can be used for education and research purposes in laboratories, and the knowledge gathered from this approach can be applied to a wider range of robotics. The IKPEZE Robot has added vision and audio capabilities, which provide surveillance of its immediate environment. Our main goal was to develop a low-cost mobile robot that is cheaper than most commercial robots sold today and can find application not only in education and research but also in other areas. The robot was developed by designing and implementing a chassis controlled remotely over a Wi-Fi network by a Raspberry Pi programmed in the C language under a Debian Linux-based OS. A mobile application with an easy-to-use graphical user interface (GUI) was loaded onto an Android smartphone. Captured footage is sent wirelessly via Wi-Fi to the Android smartphone and stored in the smartphone's local memory.

1. Introduction

Mobile robots in action are quite fascinating to watch and operate for preschoolers, elementary students, and even post-secondary students, and there is a growing hunger among them to learn how these robots are made and how they operate. To feed the minds of these young ones and educate them on how such robots are built and operated, either manually or autonomously, this robot was designed and developed. A cost-effective mobile robot with a simple approach to design and implementation will boost students' interest in science, technology, engineering, and math (STEM), and it also serves as a beginner's guide for post-secondary students interested in robotics without making the subject too ambiguous to learn. Mobile robots find application in many areas of life, ranging from learning and research centres [1–3], to home applications for daily chores [4], the medical field [5], space exploration [6], the agricultural sector [7], and the surveillance of both public and private spaces [8].

Surveillance is the monitoring of behavior, activities, or other changing information with the intention of influencing, managing, directing, or protecting persons, processes, or assets. It is used by governments for intelligence gathering, crime prevention, the safety of persons, objects, or processes, and the investigation of crime [8]. Advances in technology over the years have made it possible to monitor areas of importance remotely by using robots in place of humans. Apart from the clear advantage of not risking any human workforce, robots can detect subtle details that are not evident to people.

Robots are machines that operate automatically and are used to replace human effort, although they may not perform functions exactly as human beings do or physically resemble them. Robotics, by extension, is an engineering discipline that deals with the design, construction, operation, and maintenance of robots. A mobile robot is simply a software-controlled robot capable of moving around within its environment with the help of sensors [9]. Autonomy is an added advantage, as an autonomous robot has the capacity to control itself with the help of added sensors.

The main purpose of a mobile robot is to perform a diversity of assigned tasks in inhospitable environments where humans are unable to work or would be exposed to hazardous conditions. Such tasks include mining, surveillance, space exploration, and underwater or volcanic sensing. To increase a robot's success, it is necessary to provide higher levels of autonomy by improving its mobility and steerability [10].

Furnishing the robot with a high-resolution camera makes it feasible to gain information about a particular place remotely. Surveillance systems are becoming ever more popular, and diverse research has been carried out over the years and is still ongoing. The general motivation is to livestream (audio and video) the surroundings of the robot using a camera and a microphone mounted on it, with the livestream captured and controlled at the receiver end. Some related research works have been successfully carried out in this area, though they are not cost-effective enough for educational or research purposes.

Krofitsch et al. [1] introduced a smartphone-driven robot controller for education and research. This approach took into consideration the barriers to entering the robotics domain by using a smartphone as the platform for controlling the robot. The approach was as flexible as possible; however, it lacks livestream and audio features.

Pedre et al. [2] proposed the design of a multipurpose low-cost mobile robot for research and education. This robot is too complex for preteen and teen students to understand, and its design focuses on vision-based autonomous navigation, which does not include audio capability.

López-Rodríguez et al. [3] presented the design of an open, educational, low-cost, modular, and extendable mobile robot based on Android and Arduino, with Local Area Network (LAN) and Internet connection capabilities. This design was not implemented using the latest microprocessor technology, so latency is an issue, and it offers fewer features.

Cheng et al. [4] presented a system to remotely control robotic arms through user motion using low-cost, off-the-shelf mobile devices and a webcam; the system behaves similarly to a human and can assist with daily chores and monotonous tasks. However, it does not have a wide range of applications outside the home.

Bucolo et al. [5] proposed the use of KUKA robots for remotely controlled ultrasound scanning. The system was built as a cost-effective medical investigative platform based on ultrasound sensors, with force-reactive behavior provided by robust control feedback acting on an artificial body. It makes a reasonable level of information available to a medical doctor throughout a remote-controlled ultrasound scan.

Bogue [6] presented robotic devices that have been deployed on the Moon and Mars, the humanoid robot deployed on the International Space Station, and robotic developments for use during proposed future planetary missions, concluding with a brief consideration of the effect of space robots on terrestrial robotic technology.

Sparrow and Howard [7] surveyed the prospects for agricultural robotics, its likely impacts, and the ethical and policy questions that could arise. Along with the environmental and economic influences of robots, they considered the political, social, cultural, and security implications of introducing robots, which have received little attention in the larger literature on agricultural robotics.

Bucolo et al. [11] proposed a generic theory common to the design of all Leonardo machines, to be controlled using ad hoc modern control techniques and devices. The project also connected the conventional and the futuristic, highlighting the fundamental role of traditional mechanics in engineering projects. It further led to a discipline called "woodtronics," a new approach to developing machines from wood, controlled by reusable electronic components, motors, and low-cost microcontrollers.

Arena et al. [12] introduced a VLSI chip for real-time locomotion control in legged robots. The chip is centered on a bio-inspired control strategy implementing a central pattern generator (CPG) for hexapod control via the paradigm of cellular neural networks (CNNs).

Alli et al. [13] designed and implemented an obstacle detection and avoidance system for an unmanned lawnmower. This was achieved using infrared and ultrasonic sensor modules placed at the front of the robot to emit light and sound waves at any obstacle; when a reflection is received, a low output is sent to the Arduino microcontroller, which interprets it and makes the robot stop. The system's performance indicated an accuracy of 85% and a probability of failure of 0.18%.

Abaya et al. [14] designed and implemented a low-cost smart security camera with night-vision capability using a Raspberry Pi and OpenCV (Open Source Computer Vision). The system has human-detection and smoke-detection capabilities that can provide precautions against potential crimes and fires. The experimental results show accuracies of 83.56% and 83.33% for human and smoke detection, respectively.

Prasad et al. [15] implemented a smart surveillance monitoring system for mobile devices using a Raspberry Pi and PIR sensors. It leverages mobile technology to provide essential security to homes and for other control applications. The Raspberry Pi operates and controls motion detectors and video cameras for remote sensing and surveillance, streams live video, and records it for future playback. The system was capable of successfully recording and capturing video and images and transmitting them to a smartphone.

Bokade and Ratnaparkhe [16] devised a method for controlling a wireless surveillance robot through an application built on the Android platform. The application opens a web page with a video screen for surveillance and buttons to control the robot and camera. The experimental results show that the time required to process commands from the smartphone and respond accordingly was negligible, and that good-quality video, at up to 15 frames per second, was fetched quickly and clearly.

Suganthi et al. [17] presented a system designed to serve as a security monitoring device, using an ultrasonic sensor connected to an Arduino Uno for object detection and a Raspberry Pi to which Wi-Fi repeaters, a web camera, a GPS module, and the Arduino Uno were connected. Images were captured and sent over Wi-Fi to a remote PC, where they were monitored by security personnel in the host area. The GPS module was used to find the rover's current location. The system replaces human security in less critical areas without compromising security.

Shivani and Kumbhar [18] presented a system that uses an infrared sensor for motion detection and a Raspberry Pi to operate and control the surveillance system, capturing data and transmitting it to a smartphone through a 3G dongle. The smart surveillance system is capable of recording and capturing images and transmitting the data to the owner's smartphone. The system offered reliability and privacy.

Pahuja and Kumar [19] implemented an Android-smartphone-controlled robot. The robot is controlled through an HC-06 Bluetooth module and an 89C2051 microcontroller, which together form the controlling core of the system. DC motors are interfaced to the microcontroller, and the data received by the Bluetooth module from the Android smartphone is fed as input to the controller. The system was able to livestream the happenings in its immediate environment.

Vanitha et al. [20] implemented a monitoring robot that can be controlled via the Internet through a Raspberry Pi board. The robot uses a PIR sensor to detect when a person or an object enters the surveillance area, and a smoke sensor detects fire accidents by sensing an increase in the smoke level in the atmosphere. The web page created successfully monitored and controlled the mobile robot via the Internet.

Ghute et al. [21] presented a method for controlling a robotic arm using an application built on the Android platform. The Android phone and the Raspberry Pi board are connected through Wi-Fi. The robotic arm is designed to perform the same activity as a human hand. This smartphone technique overcame the delay and server problems experienced with the Internet-controlled technique.

Hou et al. [22] presented a wireless robot controlled over the Internet that can detect living bodies with the help of a PIR sensor. The robot will find use in rescue operations, and the transmitted video can be accessed by a user in a remote area. The camera mounted on the robot can rotate about its horizontal axis and is controlled through a web page at the user interface. The designed web surveillance system is capable of recording and capturing video and images and transmitting them to a personal computer.

Huang et al. [23] implemented a surveillance robot capable of capturing real-time images, video, and audio footage of a specific area or people. This approach uses a ZigBee network to control the robot. The robot is capable of walking on any surface and providing monitoring over an area with the help of image processing. The system, which includes a face-detection feature, was able to identify faces with a maximum matching accuracy of 70%.

Sun et al. [24] presented a single unit that monitors the environment under various hazardous conditions and provides live video feedback. The proposed system can also capture real-time video useful for the surveillance of a specific person or area. The robot is controlled by a Raspberry Pi 3 processor and is well suited to military applications such as the surveillance of areas of interest, providing a tactical advantage during hostage situations or on hostile ground. The system is capable of walking on any surface and providing monitoring over an area; with high-quality video transmission, surveillance becomes more effective.

2. Hardware

The system needs power to drive the various units shown in the block diagram in Figure 1. The power supply unit consists of two 3.7 V batteries responsible for powering the system. This choice was made because the Raspberry Pi, DC motors, DC driver board, ultrasonic sensor, servomotors, camera module, and USB microphone (each with a working voltage between 3 V and 6 V) can be powered from two lithium batteries connected in series, which provide approximately 7.4 V. The motor driver board and the Raspberry Pi can supply enough current from the 5 V digital pin to power the pan/tilt servomotors.

Operating the system at a much higher voltage would damage the electronics, while operating it at a lower voltage would reduce the RPM of the motors and hence the efficiency of the surveillance robot. The power supply is controlled by a switch; when switched on, current flows through an LED (in series with a 10 kΩ resistor), which lights up green. The power supply is connected to the L293D motor driver board, which drives the DC motors controlling the four wheels and provides forward, backward, left, and right movement of the robot.

The servomotor unit contains two servomotors for the panning/tilting motion of the camera module. A servomotor is a rotary or linear actuator that allows precise control of angular or linear position, velocity, and acceleration; it consists of a suitable motor coupled to a sensor for position feedback. The two servomotors are connected to the motor driver board and provide 360° capture capability.
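As an illustration of how such a pan servomotor can be positioned from the Raspberry Pi, the following minimal C sketch uses wiringPi's software PWM. The pin number and pulse widths are assumptions for illustration only, not the exact configuration used on the robot.

#include <wiringPi.h>
#include <softPwm.h>

#define PAN_PIN 18   /* hypothetical BCM pin for the pan servomotor */

int main(void)
{
    wiringPiSetupGpio();              /* BCM pin numbering */
    /* softPwm counts in 100 us steps; a range of 200 gives a 20 ms
       (50 Hz) period, the frame rate hobby servos expect. */
    softPwmCreate(PAN_PIN, 0, 200);
    softPwmWrite(PAN_PIN, 15);        /* 1.5 ms pulse: centre position */
    delay(1000);
    softPwmWrite(PAN_PIN, 20);        /* 2.0 ms pulse: pan toward one limit */
    delay(1000);
    return 0;
}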

The motor driver board (L293DD013TR) is a monolithic integrated high-voltage, high-current four-channel driver designed to accept standard DTL or TTL logic levels and to drive inductive loads (in this case the DC motors) and switching power transistors. In this design, two L293DD013TR devices were used to drive four DC motors. The device is suitable for switching applications at frequencies up to 5 kHz and is assembled in a twenty-lead surface-mount package whose eight centre pins are connected together and used for heatsinking. The 74HCT595S16-13, also present on the motor driver board, is an 8-bit shift register with an 8-bit output register, a 5 V supply (pin 16), and GND (pin 8). The shift register accepts data from the serial input (pin 14) on each positive transition of the shift register clock (pin 11). When asserted low, the reset function (pin 10) sets all shift register values to zero, independently of all clocks. Data in the shift register is transferred to the output register (pins 15 and 1–7) on a rising pulse of the storage register clock (pin 12). With the output enable (pin 13) asserted low, the 3-state outputs QA–QH become active.
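A minimal C sketch of this shift-in/latch sequence is given below, assuming the wiringPi library and hypothetical GPIO pin assignments; the actual board wiring and bit layout may differ.

#include <wiringPi.h>

#define SER   17   /* serial data in (hypothetical BCM pin)  */
#define SRCLK 27   /* shift register clock, 74HC595 pin 11   */
#define RCLK  22   /* storage (latch) clock, 74HC595 pin 12  */

/* Shift one byte out MSB-first, then latch it to outputs QA-QH. */
static void shift_out(unsigned char bits)
{
    for (int i = 7; i >= 0; i--) {
        digitalWrite(SER, (bits >> i) & 1);
        digitalWrite(SRCLK, HIGH);    /* data sampled on the rising edge */
        digitalWrite(SRCLK, LOW);
    }
    digitalWrite(RCLK, HIGH);         /* rising edge copies register to outputs */
    digitalWrite(RCLK, LOW);
}

int main(void)
{
    wiringPiSetupGpio();
    pinMode(SER, OUTPUT);
    pinMode(SRCLK, OUTPUT);
    pinMode(RCLK, OUTPUT);
    shift_out(0x55);                  /* example direction pattern for the motors */
    return 0;
}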

The ultrasonic sensor unit houses the ultrasonic sensor used for obstacle detection and avoidance. It is connected to the motor driver board: the VCC and GND pins of the ultrasonic sensor are connected to the VCC and GND pins of the motor driver board, respectively, while the Trig and Echo pins carry the trigger and echo signals. The female GPIO header of the motor driver board is stacked on the Raspberry Pi.
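The following sketch shows one common way to read such a sensor in C with wiringPi; it is illustrative only, and the TRIG_PIN and ECHO_PIN numbers are hypothetical rather than the robot's actual pin choices.

#include <stdio.h>
#include <wiringPi.h>

#define TRIG_PIN 23   /* hypothetical BCM pin for Trig */
#define ECHO_PIN 24   /* hypothetical BCM pin for Echo */

/* Time the echo pulse and convert it to centimetres using
   D = (V x Ttr) / 2 with V = 34,400 cm/s. A production version
   would add timeouts to the two busy-wait loops. */
static double read_distance_cm(void)
{
    digitalWrite(TRIG_PIN, HIGH);     /* 10 us trigger pulse */
    delayMicroseconds(10);
    digitalWrite(TRIG_PIN, LOW);

    while (digitalRead(ECHO_PIN) == LOW)
        ;                             /* wait for the echo to start */
    unsigned int start = micros();
    while (digitalRead(ECHO_PIN) == HIGH)
        ;                             /* wait for the echo to end */
    unsigned int travel_us = micros() - start;

    return (34400.0 * travel_us / 1e6) / 2.0;
}

int main(void)
{
    wiringPiSetupGpio();
    pinMode(TRIG_PIN, OUTPUT);
    pinMode(ECHO_PIN, INPUT);
    printf("distance: %.1f cm\n", read_distance_cm());
    return 0;
}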

The Raspberry Pi board is the main controller: it handles all processing and communicates with the motor driver board, which in turn powers the Raspberry Pi. The Raspberry Pi 3 has a processor operating voltage of 3.3 V, a raw input of 5 V from a 2 A power source, a maximum current of 16 mA through each I/O pin, and a clock frequency of 1.2 GHz.

A 5 MP camera module for the Raspberry Pi is the eye of the system; it is connected to the Raspberry Pi board and used for livestreaming and capturing events in the environment.

The USB 2.0 Mini Microphone is plug and play and once plugged into the Raspberry Pi, the Raspbian OS automatically detects it. The microphone, which is powered by the Raspberry Pi, is an omni-directional noise-cancelling microphone that picks up sound from the environment and sends it to the remote controller via Wi-Fi.

The Wi-Fi module embedded in the Raspberry Pi connects the robot to a LAN router, which transmits the generated data and provides communication with the remote controller. The assembled autonomous smartphone-controlled robot is shown in Figure 2.

3. Mapping and Navigation

Mapping means creating a representation of the surrounding environment, and it involves path planning and navigation. Mapping is not simple in skid-steered robots because of their nonlinear behavior.

Path planning involves determining a collision-free path from one point to another while reducing the total cost of the associated path.

Navigation is the ability of a robot to determine its own position in its frame of reference, and then plan a path towards some goal location.

In this work, a navigation algorithm without a map was used: the robot's navigation is determined by the direct interaction of its ultrasonic sensor with the environment while driving. Such an algorithm is called an online algorithm, and the navigation technique is called "local navigation," or online-mode path planning, in which the robot determines its position and orientation and controls its motion using externally mounted sensors. In other words, path planning was performed based on information gathered by the local sensor (the ultrasonic sensor) installed on the robot [25]; a sketch of the corresponding control loop is given in Section 4.1.

4. Software

The micro SD card, which serves as the permanent storage of the Raspberry Pi, was set up with a Debian Linux operating system. SSH was used to log into the Raspberry Pi, where programming was performed in the C language. SSH makes it possible to remotely access the command line of the Raspberry Pi from another computer on the same network.

The mobile application for the Android smartphone was bundled using the Ionic framework, and the bundled source code was compiled and deployed as an APK file using Android Studio. The API for the mobile app was developed using Node.js.

Upon completion of mobile app development, Node-RED was used to develop a program enabling bidirectional communication between the Raspberry Pi in the surveillance robot and the mobile app on the smartphone. Figure 3 shows the software architecture and how the different pieces of software interact.

The software used includes the following:

(i) Raspbian OS (Debian Linux): Raspbian is a modified version of the Debian Linux-based OS created particularly for the Raspberry Pi. The Raspberry Pi is also compatible with Windows 10 IoT Core, but a Linux-based OS is preferred. Besides being the official Debian-based general-purpose operating system for the board, Raspbian offers greater speed and security than the Windows operating system. Raspbian OS was installed on the micro SD card that serves as the Raspberry Pi's permanent storage.

(ii) PuTTY: PuTTY is a free and open-source terminal emulator, serial console, and network file transfer application. PuTTY makes it possible to run shells (SSH), which in turn run commands. It supports several network protocols, including SCP, SSH, Telnet, rlogin, and raw socket connections, and it can also connect to a serial port. PuTTY is a versatile tool for remote access to another computer and is probably used most often for secure remote shell access to a UNIX or Linux system, although that is only one of its many uses. SSH makes it possible to remotely access the command line of the Raspberry Pi from another computer or device on the same network.

(iii) Ionic framework: Ionic Framework is a free, open-source mobile UI toolkit for developing high-quality cross-platform apps for native iOS, Android, and the web from a single codebase. The Ionic framework was used to develop the mobile application, which communicates with and controls the robot through the robot's wireless LAN network.

(iv) Android Studio: Android Studio is the official integrated development environment for Google's Android operating system, built on JetBrains' IntelliJ IDEA software and designed specifically for Android development. Android Studio was used to compile the bundled source code and deploy the APK file onto an Android smartphone for monitoring and controlling the surveillance system.

(v) Node.js: Node.js is an open-source, cross-platform runtime environment for developing server-side and networking applications. It runs on various platforms (Windows, Linux, Unix, Mac OS X, etc.). It uses JavaScript on the server and is well suited to data-intensive real-time applications that run across distributed devices. The API for the robot was developed using Node.js.

(vi) Node-RED: Node-RED is a programming tool for wiring together hardware devices, APIs, and online services in new and interesting ways. It provides a browser-based editor that makes it easy to wire together flows using a wide range of nodes in the palette, which can be deployed on its runtime in a single click. It enables real-time bidirectional communication between the API running on the Raspberry Pi of the surveillance robot and the mobile application running on the Android smartphone.

4.1. Modes of Operation

When the surveillance robot is powered on, it initializes the IP address allocated to it, and an access point named "ikpeze_robot" is established. The Wi-Fi feature on the smartphone is activated by the remote controller, which then connects to the "ikpeze_robot" access point created by the robot's Raspberry Pi, as shown in Figure 4. Once a connection is established, the Android application starts running.

The first mode of operating and controlling the robot is the manual mode, in which the robot is navigated manually by the remote controller using the icons in the Android application installed on the smartphone. The flow diagram of its operation is illustrated in Figure 5. Manual control of the robot, the pan/tilt motion of the camera, live streaming, image/scene capture, and the recording of both visual and audio happenings in the environment are achieved after the Raspberry Pi decodes the commands sent by the controller.
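As a hedged illustration of this decoding step, the short C sketch below maps single-character commands to motor actions. The 'F' and 'R' characters follow the GUI labels described in Section 5, but the remaining command codes and the motors() helper are assumptions, not the robot's exact protocol.

#include <stdio.h>

/* Stub motor action used for illustration; on the robot this would set
   the L293D input lines through the shift register. */
static void motors(const char *action)
{
    printf("motors: %s\n", action);
}

/* Decode one command character received from the Android app. */
static void handle_command(char cmd)
{
    switch (cmd) {
    case 'F': motors("forward"); break;   /* forward, as labelled on the GUI  */
    case 'R': motors("reverse"); break;   /* reverse, as labelled on the GUI  */
    case '<': motors("left");    break;   /* assumed code for the left arrow  */
    case '>': motors("right");   break;   /* assumed code for the right arrow */
    default:  motors("stop");    break;   /* unknown command: stop safely     */
    }
}

int main(void)
{
    handle_command('F');   /* example: drive forward */
    handle_command('>');   /* example: steer right   */
    return 0;
}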

The second mode of robot control is the autonomous mode, in which the robot navigates the environment on its own based on a predefined program stored in the Raspberry Pi. The ultrasonic sensor calculates the distance between itself and an obstacle and decides when an avoidance manoeuvre should occur (at 9.7 cm–10 cm). If an obstacle is detected, the sensor signals the Raspberry Pi and the robot changes its direction of movement accordingly; if no obstacle is detected, it maintains its straight motion. The flow diagram of its operation is illustrated in Figure 6.
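The following C sketch outlines this sense-and-avoid loop under stated assumptions: the distance reading is simulated by a stub (on the robot it would come from the ultrasonic sensor, as in the sketch in Section 2), the 10 cm threshold follows the value above, and turning on detection is an illustrative strategy rather than the robot's exact behaviour.

#include <stdio.h>

#define AVOID_THRESHOLD_CM 10.0   /* avoidance distance stated above */

/* Stub for the ultrasonic reading; on the robot this would time the
   Trig/Echo pulse as in the sensor sketch in Section 2. */
static double read_distance_cm(void)
{
    return 8.5;                       /* simulated obstacle inside the threshold */
}

static void motors(const char *action)
{
    printf("motors: %s\n", action);   /* stand-in for driving the L293D */
}

int main(void)
{
    for (int step = 0; step < 5; step++) {  /* a few iterations for the demo */
        if (read_distance_cm() < AVOID_THRESHOLD_CM)
            motors("turn");           /* obstacle detected: change direction */
        else
            motors("forward");        /* clear path: keep straight */
    }
    return 0;
}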

5. Performance Evaluation

The Android application developed and loaded onto the smartphone is user friendly and easy to navigate. The surveillance robot was tested on different terrains and provided good surveillance at reasonable speed. The GUI is shown in Figure 7. "F," "R," "A," and "T" represent Forward movement, Reverse movement, Autopilot (used in the autonomous mode of operation), and Tracking (used in line-tracking movement), respectively. The pedal icon, when pressed, initiates motion (accelerator) in the manual mode. The directional arrows around the camera icon at the bottom left of the figure are used for the pan/tilt movement of the camera module; the directional arrows on the left and right of the pedal icon at the bottom right are used to steer the surveillance robot to the left and right, respectively. The camera, video, and microphone icons are used for image capture, live video recording, and audio recording, respectively. The power icon at the top right is used to shut down the surveillance robot.

The performance of the smartphone-controlled robot system was evaluated through accuracy and Probability of Failure (POF) tests.

The accuracy and probability of failure of the robot system were evaluated by operating the robot in the autonomous mode on a tarred surface with obstacles placed on the path towards the goal, to determine how successful the robot was (with respect to obstacle detection and avoidance) in reaching its goal. The number of obstacles the robot was able to detect and avoid, which contributed to the number of successful missions, was used to ascertain the accuracy of the robot system. The reliability of the robot system was determined using the probability of failure, which is the ratio of the number of obstacles the robot collided with to the total number of obstacles placed on the path. A tarred terrain measuring 9 m × 6 m was used, and eight obstacles were placed on the path to be traversed by the robot before reaching its goal. A total of ten missions were carried out, of which nine were successful.

For the accuracy and POF tests, equations (1)–(3) were used to obtain the time between the transmitted and received reflected waves, the accuracy of the system, and the probability of failure of the system, respectively. The resulting data are shown in Table 1.

\( T_{tr} = \dfrac{2D}{V} \)  (1)

where \( T_{tr} \) represents the time between the transmitted and received reflected wave, \( D \) represents the distance between the ultrasonic sensor and the detected obstacle, and \( V \) represents the ultrasonic wave propagation speed in air, nominally 344 m/s (34,400 cm/s) (Ultrasonic Technology, 2019).

Accuracy of the robot system:

\( \text{Accuracy} = \dfrac{T_{oa}}{T_{ou}} \times 100\% \)  (2)

Probability of failure:

\( \text{POF} = \dfrac{T_{oc}}{T_{ou}} \)  (3)

where \( T_{oa} \) represents the total number of obstacles the robot detected and avoided, \( T_{ou} \) the total number of obstacles used in the evaluation, and \( T_{oc} \) the total number of obstacles the robot collided with. Total number of missions = 10, total number of successful missions = 9, terrain dimensions = 9 m × 6 m, and total number of obstacles placed on the path = 8.
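As a worked check of these formulas (with assumed, illustrative numbers rather than values from Table 1): at a detection distance of \( D = 10 \) cm, equation (1) gives \( T_{tr} = 2(10)/34400 \approx 5.8 \times 10^{-4} \) s, i.e., about 0.58 ms; and a mission in which the robot avoided 7 of 8 obstacles and collided with 1 would score an accuracy of \( (7/8) \times 100\% = 87.5\% \) by equation (2) and a POF of \( 1/8 = 0.125 \) by equation (3).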

6. Results and Discussion

The readings in Table 1 for the accuracy and POF tests show that the distance between the ultrasonic sensor and the detected obstacle (D) falls between 9.7 cm and 10 cm in all missions undertaken by the robot. The accuracy of the robot was calculated for each successful mission and averaged, giving an average accuracy of 88.8% for the surveillance robot. The reliability of the surveillance robot was also evaluated using the probability of failure; an average probability of failure of 0.11 was obtained over the successful missions undertaken.

7. Conclusion

The smartphone-controlled robot developed is capable of livestreaming, capturing images, and recording both the video and audio happenings in an environment. With a smartphone, the robot system can seamlessly monitor areas of interest while being controlled remotely, either manually or autonomously, via an app installed on the Android smartphone. The working range of the Wi-Fi access point is about 50 meters.

The accuracy and POF of the robot were obtained as 88.8% and 0.11, respectively. The approach used in this work can be applied to a wider range of robot development, and this smartphone-controlled robot can also find applications in various other areas.

Data Availability

The software code used to support the findings of this study is included within the supplementary information file.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This study was self-funded by the corresponding author Ikpeze, Onyinye Florence. The financial assistance for publication was provided by Afe Babalola University, Ado-Ekiti, Ekiti State, Nigeria.

Supplementary Materials

The code for the MJPEG streamer, motor driver, and ultrasonic sensor is shown in Appendices C, D, and E, respectively. Node-RED uses a WebSocket node to implement a program enabling bidirectional communication between the Raspberry Pi in the surveillance robot and the mobile application on the smartphone; the source code is shown in Appendix F. Node.js was used to create a web server on the Raspberry Pi to allow a WebSocket connection for real-time events and control of the robot; the source code is shown in Appendix I. The mobile application was bundled using the Ionic Framework; the bundled source code shown in Appendix G was compiled and deployed as an APK file using Android Studio, as shown in Appendix H.