Selected Papers from the International Conference on Information, Communication, and Engineering 2013
Research Article | Open Access
DSP- and FPGA-Based Stair-Climbing Robot Design
A stair-climbing robot equipped with a grasping arm for capturing objects is designed to provide service for the elderly. A board based on a digital signal processor (DSP) serves as the control center to manage all actions of two brushless DC motors for locomotion, two worm gears for torque magnification, and two DC motors for pitching the arms of the main body. The robot is steered by fuzzy logic control (FLC), which integrates the outputs of a DC bus current sensor and an inclinometer. A CMOS camera provides vision for the robot, and the grasping arm is controlled via the camera for capturing objects. Simple image processing algorithms are implemented on the field programmable gate array (FPGA) board, which generates the x-axis and y-axis commands of the grasping arm to capture and load objects. Three walking experiments in which the stair-climbing robot moves up and down stairs are shown in pictures taped from videos. Object tracking and capturing by the grasping arm and camera verifies the proposed design.
Research on service robots has attracted much attention in recent years. One of the most important reasons is the growth of the aging population and the decline of the working population. Hiring low-paid foreign workers for factory jobs or for the home care of the elderly or disabled is not a long-term solution. For the latter task, a home-care robot is an excellent candidate for supporting such an aging society, since the elderly can control the robot directly for service.
A robot designed to carry the elderly directly up and down stairs needs a large vehicle and the stair rail for moving. The robot “HRP-2” by Harada et al. successfully climbed up 280 mm stairs by grasping the stair rail. The robot “WL-16RII” can walk independently and allows users to build its upper body to their requirements, such as a walking wheelchair or a walking support machine able to walk up and down stairs carrying or assisting an elderly person. This biped locomotor, with Stewart-platform legs, successfully walked up and down 250 mm stairs continuously while carrying a 60 kg man. Another biped-type robot, “Zero Walker-1,” uses its two legs to assist an aged person in walking and in moving up and down stairs along the handrail by having the person step onto the feet of the robot. However, the aforementioned robots generally require tremendous expenditures of cost and time. Furthermore, it is very difficult to lift an aged person by human force, and it is not easy to install a large, heavy lift machine in a normal house.
Generally, the first step of image processing for segmenting the target is a background subtraction scheme. Although background subtraction works fairly well at separating foreground data from a static background, it still allows objects placed into the workspace to be detected as potential targets. As a result, a color (or skin) detector is implemented to further filter the foreground data by converting the resultant image from RGB (red, green, and blue) form into YCbCr (luminance, blueness, and redness) form. Additionally, object tracking has become an important topic in the robotics field [6, 7]. Basic capabilities such as real-time operation, automation, and robustness to nonideal situations are required for practical object tracking systems. The servo control structure in real-time visual tracking systems usually belongs to the “image-based” and “dynamic look-and-move” category. Based on image moments, three different visual predictive control architectures have been proposed; however, that system was verified only by simulation results. A novel two-level scheme for adaptive active visual servoing of a mobile robot equipped with a pan camera offers a satisfactory solution to the field-of-view problem, globally high servoing efficiency, and freedom from the complex pose estimation algorithms usually required by visual servoing systems. In contrast with traditional visual servoing approaches that use geometric visual features, one scheme used pixel intensity as the visual feature; that approach was tested for accuracy and robustness under several experimental conditions. A visual servoing strategy based on a novel motion-estimation technique was presented for the stabilization of a nonholonomic mobile robot; practical exponential stability is achieved by perturbed linear system theory despite the lack of depth information.
In this paper, a stair-climbing robot equipped with an arm is designed to provide the service of carrying objects up and down stairs or patrolling for security. A control board with a digital signal processor (DSP) TMS320F28335 steers the robot by fuzzy logic control (FLC) based on the outputs of a DC bus current sensor and an inclinometer. Without background subtraction or complex object tracking schemes, simple algorithms, such as color filtering and locating the center of the detected target, are employed in the robot system. With the aid of a CMOS camera for vision, the robot arm tracks, captures, and puts back the target object. The field programmable gate array (FPGA) on a DE2-70 board implements the image processing.
The paper is organized as follows. Section 2 describes the design steps of the proposed robot: the robot mechanism, covering each component and the ways of climbing up and going down stairs; the DSP-based controller, which provides all control signals and realizes the fuzzy control logic; and the FPGA-based visual servo for tracking and capturing objects. Section 3 presents the experimental results of stair motion and visual servoing. Finally, Section 4 presents our conclusions.
2. Robot Design
There are three steps in the proposed robot design: mechanism, DSP-based controller, and FPGA-based visual servo.
Step I: Mechanism Design. It is well known that the most effective style of movement for a robot on a flat field is the wheel type. Where obstacles and stairs exist, crawler-type and leg-type robots become better candidates. The proposed robot is therefore equipped with roller chains fitted with polyurethane rubber blocks that generate friction with the ground and stairs for climbing up and down. The stair-climbing robot consists of a main body for moving and a front arm and a rear arm for moving up and down stairs. The main body carries two brushless DC motors (BLDCMs) and their drives for locomotion, worm gears for torque amplification, two DC motors to control the two arms, and the DSP-based board as the control center, as shown in Figure 1. The chassis of the main body is 58.5 cm × 53 cm and each arm is 48 cm × 40 cm, so the maximum length of the robot is 154.5 cm. There are 136 rubber blocks, each 3 cm × 2 cm × 1 cm, attached to the roller chains: 40 for each arm and 56 for the main body. The distance between any two blocks is arranged to fit the stair brink. The moving direction of the robot is steered by the speed difference of the two BLDCMs and information from ultrasonic sensors. The robot uses the friction between the roller chains/rubber blocks and the stairs/ground to climb. The front arm is pushed down flat so that the main body is lifted, and is then pulled up for the next stair-climbing step. The rear arm is kept flat while the robot goes up. Figure 2 displays the complete climbing-up motion; similarly, Figure 3 summarizes the going-down motion.
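The climbing-up cycle described above can be summarized as a short sequence of phases. The sketch below is purely illustrative: the phase names and the callback interface are assumptions for exposition, not the robot's actual firmware.

```python
# Illustrative sketch of one stair-climbing cycle. Phase names and the
# `execute` callback are assumptions, not the robot's real control API.

CLIMB_UP_SEQUENCE = [
    ("approach", "drive tracks forward until the front arm meets the riser"),
    ("arm_down", "pitch the front arm down flat so the main body is lifted"),
    ("advance",  "drive forward so the body chains grip the stair brink"),
    ("arm_up",   "pull the front arm up, ready for the next step"),
]

def climb_one_step(execute):
    """Run one stair-climbing cycle; `execute` performs each phase."""
    for phase, action in CLIMB_UP_SEQUENCE:
        execute(phase, action)

# Example: print the cycle instead of driving real motors.
climb_one_step(lambda phase, action: print(f"{phase}: {action}"))
```

Going down is the mirrored sequence, with the rear arm kept flat while the body descends.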
Step II: DSP-Based Controller Design. The DSP provides the pulse-width modulation (PWM) signals for the BLDCMs and DC motors and realizes the fuzzy logic rules for speed control.
The ith fuzzy rule in the fuzzy rule-base system is described as

R_i: IF x_1 is A_{i1} AND x_2 is A_{i2} THEN u is c_i,

where u, x_1 and x_2, and A_{i1}, A_{i2}, and c_i are the fuzzy output variable, the input fuzzy variables, and the linguistic variables, respectively. Referring to Figure 4 for the ith membership function with isosceles-triangle shape, b_{ij} means the length of the base and m_{ij} stands for the abscissa of the center of the base. The membership grade of input x_j is calculated by

μ_{ij}(x_j) = 1 − 2|x_j − m_{ij}|/b_{ij} if |x_j − m_{ij}| ≤ b_{ij}/2, and 0 otherwise.

The bases of the triangular membership functions are kept the same for ease of programming. Using the product operation, the firing grade of the antecedent proposition is calculated as

w_i = μ_{i1}(x_1) · μ_{i2}(x_2).

Then the output is the weighted average

u = (Σ_i w_i c_i)/(Σ_i w_i).

In summary, Table 1 lists the linguistic control rules and Figure 5 displays the design scheme of the fuzzy logic control.
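The fuzzy controller steps above (triangular memberships, product inference, weighted-average defuzzification) can be sketched in a few lines. This is a minimal illustrative model; the rule table and input scaling are stand-in assumptions, not the actual values of Table 1.

```python
# Minimal sketch of a fuzzy controller with isosceles-triangle memberships,
# product inference, and weighted-average defuzzification. The rule list
# below is an illustrative assumption, not the paper's Table 1.

def tri_grade(x, m, b):
    """Triangular membership grade: center m, base length b."""
    return max(0.0, 1.0 - 2.0 * abs(x - m) / b)

def fuzzy_output(x1, x2, rules, b=1.0):
    """rules: list of (m1, m2, c) = input membership centers and the
    singleton consequent of each rule. All bases share length b."""
    num = den = 0.0
    for m1, m2, c in rules:
        w = tri_grade(x1, m1, b) * tri_grade(x2, m2, b)  # product operation
        num += w * c
        den += w
    return num / den if den > 0.0 else 0.0  # weighted average

# Tiny two-rule example on normalized inputs (e.g., error and its change).
rules = [(-0.5, -0.5, -1.0), (0.5, 0.5, 1.0)]
u = fuzzy_output(0.5, 0.5, rules)  # only the second rule fires fully
```

In the real controller the two inputs would be derived from the DC bus current sensor and the inclinometer, scaled into the normalized universe of discourse.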
Step III: FPGA-Based Visual Servo Design. The grasping multilink arm consists of three pairs of gears, three DC motors, four links, and one clamper. Referring to Figure 6, the first DC motor turns the driving gear and driven gear to determine the rotating angle. The second motor controls four gears together with belts to stretch the length of the arm; two of the gears are mounted on the same shaft and have the same number of teeth. The stretched length can then be calculated from the motor rotation and the length of the links. The third motor decides the opening angle of the clamper.
The pixel array of the CMOS camera THDB-D5M used in the robot consists of a matrix of 2752 × 2004 pixels addressed by column and row [13, 14]. The 2592 × 1944 array in the center, called the active region, represents the default output image; it is surrounded by an active boundary region and a border of dark pixels. Pixels of the active region are output in a Bayer pattern format consisting of four “colors,” green1, green2, red, and blue (G1, G2, R, and B), to represent three filter colors [12, 13]. The first row of output alternates between G1 and R pixels, and the second row alternates between B and G2 pixels. The green1 and green2 pixels have the same color filter, but they are treated as separate colors by the data path and analog signal chain.
In order to calculate the R, G, and B intensity of each pixel, we relabel the Bayer pattern by the corresponding column-row address, as shown in Figure 7. After analysis, the color intensity expressions for each pixel are summarized as follows.
(i) c is even and r is even (a G1 pixel):
G = P(c, r),  R = [P(c − 1, r) + P(c + 1, r)]/2,  B = [P(c, r − 1) + P(c, r + 1)]/2.
(ii) c is odd and r is even (an R pixel):
R = P(c, r),  G = [P(c − 1, r) + P(c + 1, r) + P(c, r − 1) + P(c, r + 1)]/4,  B = [P(c − 1, r − 1) + P(c + 1, r − 1) + P(c − 1, r + 1) + P(c + 1, r + 1)]/4.
(iii) c is even and r is odd (a B pixel):
B = P(c, r),  G = [P(c − 1, r) + P(c + 1, r) + P(c, r − 1) + P(c, r + 1)]/4,  R = [P(c − 1, r − 1) + P(c + 1, r − 1) + P(c − 1, r + 1) + P(c + 1, r + 1)]/4.
(iv) c is odd and r is odd (a G2 pixel):
G = P(c, r),  B = [P(c − 1, r) + P(c + 1, r)]/2,  R = [P(c, r − 1) + P(c, r + 1)]/2,
where c and r stand for the column address and row address, respectively, and P(c, r) denotes the raw Bayer intensity at that address; the missing colors at each site are averaged over the nearest neighbors of the corresponding color.
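The four cases can be checked against a small software model. The function below is an illustrative pure-Python sketch of the per-pixel neighbor averaging for the stated pattern (row 0: G1 R G1 R …, row 1: B G2 B G2 …); it is not the FPGA implementation, and border pixels are ignored for brevity.

```python
# Illustrative Bayer reconstruction for the D5M pattern described above.
# p[r][c] is the raw intensity at column c, row r; interior pixels only.

def avg(p, coords):
    """Average the raw values at a list of (column, row) addresses."""
    return sum(p[r][c] for (c, r) in coords) / len(coords)

def demosaic_pixel(p, c, r):
    """Return (R, G, B) at column c, row r of raw Bayer array p."""
    if c % 2 == 0 and r % 2 == 0:        # G1 site: R left/right, B up/down
        return (avg(p, [(c-1, r), (c+1, r)]), p[r][c],
                avg(p, [(c, r-1), (c, r+1)]))
    if c % 2 == 1 and r % 2 == 0:        # R site: G cross, B diagonals
        return (p[r][c],
                avg(p, [(c-1, r), (c+1, r), (c, r-1), (c, r+1)]),
                avg(p, [(c-1, r-1), (c+1, r-1), (c-1, r+1), (c+1, r+1)]))
    if c % 2 == 0 and r % 2 == 1:        # B site: G cross, R diagonals
        return (avg(p, [(c-1, r-1), (c+1, r-1), (c-1, r+1), (c+1, r+1)]),
                avg(p, [(c-1, r), (c+1, r), (c, r-1), (c, r+1)]),
                p[r][c])
    return (avg(p, [(c, r-1), (c, r+1)]), p[r][c],   # G2 site: R up/down,
            avg(p, [(c-1, r), (c+1, r)]))            # B left/right
```

On a uniform test pattern (all R sites equal, all G sites equal, all B sites equal), every interior pixel reconstructs to the same (R, G, B) triple, which is a quick sanity check of the case analysis.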
The raw image data is sent from the D5M to the DE2-70 board, where the FPGA handles the image processing and converts the data to RGB format for display on a VGA monitor [15–17]. We first capture an image of the experimental background to find the ranges of the RGB colors and then define their location regions for color discrimination. In order to reduce the effect of lighting variation, the image in RGB space is converted into YCbCr space. In addition, the RGB ranges from the D5M are four times those of a general 8-bit image, so the values are scaled by 1/4 before the transformation.
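The conversion and color test can be sketched as below. The sketch assumes the standard ITU-R BT.601 YCbCr coefficients, since the paper's exact matrix is not reproduced here, and the chrominance window in `is_target_color` is a placeholder: the actual target ranges were measured from the experimental background, not these numbers.

```python
# Sketch of the scaled RGB -> YCbCr conversion for color discrimination.
# Assumes ITU-R BT.601 coefficients; the D5M's 10-bit values are first
# divided by 4 into the usual 8-bit range, as described in the text.

def rgb_to_ycbcr(r10, g10, b10):
    """Convert 10-bit D5M RGB values to 8-bit-range (Y, Cb, Cr)."""
    r, g, b = r10 / 4.0, g10 / 4.0, b10 / 4.0   # 10-bit -> 8-bit range
    y  =  0.299 * r + 0.587 * g + 0.114 * b     # luminance
    cb = -0.169 * r - 0.331 * g + 0.500 * b + 128.0  # blueness
    cr =  0.500 * r - 0.419 * g - 0.081 * b + 128.0  # redness
    return y, cb, cr

def is_target_color(y, cb, cr, cb_range=(80, 120), cr_range=(150, 200)):
    """Chrominance window test; the ranges here are placeholders, not
    the values measured from the experiment background."""
    return cb_range[0] <= cb <= cb_range[1] and cr_range[0] <= cr <= cr_range[1]
```

Testing only the Cb/Cr window makes the discrimination largely insensitive to brightness changes, which is the point of leaving Y out of the decision.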
3. Experimental Results
The gear ratio used in the 45 kg stair-climbing robot is 1320, and the rated DC input voltage and speed of the 200 W BLDCM are 24 V and 9600 rpm. A preliminary experiment in which the unloaded robot climbs up and goes down a gradual stair, with a rise of 120 mm and a depth of 400 mm, under wired control is conducted first. The results of each motion are shown in Figure 8 and qualify the designed robot. In the second experiment, the robot loaded with one arm moves up and down a steeper stair with a rise of 175 mm and a depth of 280 mm. The taped pictures of the experiment and of each motion are shown in Figures 9 and 10, respectively.
The third experiment involves image processing and grasping arm motion. To prevent damage to the target while clamping, a pressure sensor is installed inside the clamper; its calibrated output is sent to the DSP for reference. Figure 11 displays sequential pictures, taped from videos, of the robot arm tracking and capturing the cola can and putting it back. Figure 11(a) presents the initial state of the experiment. The arm tracks in the corresponding direction after the can is shifted left, as shown in Figures 11(b) and 11(c); Figures 11(d) and 11(e) depict the rightward tracking. The arm tracks the can back to the central position, as shown in Figures 11(f) and 11(g). Then, the robot stretches out the arm to capture the can and draws the arm back, as presented in Figures 11(h)–11(l). Finally, the robot puts the can back and returns to the initial state, as presented in Figures 11(m)–11(p).
Finally, we conduct the fourth experiment, in which the fully loaded robot moves up a stair with a rise of 150 mm and a depth of 300 mm. The taped pictures of the experiment are shown in Figure 12. The climbing-up motion shows some deviation in the heading direction. This was caused by one of the plastic blocks having nearly worn through after a long period of experimental testing, which had not been noticed beforehand.
In this paper, we have developed a stair-climbing robot and completed experiments on moving up and down stairs and on object tracking, capturing, and loading. The stair-climbing robot can serve the elderly by capturing a specific object on one floor and then climbing up or down to another floor. In addition, once more image processing functions are provided, the robot will be able to patrol the house for security with its camera.
The authors would like to express their appreciation to the Ministry of Education and the National Science Council, Taiwan, under Contract no. NSC 100-2632-E-218-001-MY3, for financial support.
- K. Harada, H. Hirukawa, F. Kanehiro et al., “Dynamical balance of a humanoid robot grasping an environment,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1167–1173, Sendai, Japan, October 2004.
- Y. Sugahara, A. Ohta, K. Hashimoto et al., “Walking up and down stairs carrying a human by a biped locomotor with parallel mechanism,” in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, pp. 1489–1494, Alberta, Canada, August 2005.
- Y. Konuma and S. Hirose, “Development of the stair-climbing biped robot ‘Zero Walker-1’,” in Proceedings of the 19th Annual Conference of the Robotics Society of Japan, pp. 851–852, 2001.
- Y. Takahashi, H. Nakayama, T. Nagasawa et al., “Robotic assistance for aged people,” in Proceedings of the 37th SICE Annual Conference, pp. 853–858, Chiba, Japan, July 1998.
- A. Malima, E. Özgür, and M. Çetin, “A fast algorithm for vision-based hand gesture recognition for robot control,” in Proceedings of the 14th IEEE Signal Processing and Communications Applications, pp. 1–4, Antalya, Turkey, April 2006.
- S. Hutchinson, G. D. Hager, and P. I. Corke, “A tutorial on visual servo control,” IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 651–670, 1996.
- C. Copot, C. Lazar, and A. Burlacu, “Predictive control of nonlinear visual servoing systems using image moments,” IET Control Theory & Applications, vol. 6, no. 10, pp. 1486–1496, 2012.
- Y. Fang, X. Liu, and X. Zhang, “Adaptive active visual servoing of nonholonomic mobile robots,” IEEE Transactions on Industrial Electronics, vol. 59, no. 1, pp. 486–497, 2012.
- B. Tamadazte, N. L.-F. Piat, and E. Marchand, “A direct visual servoing scheme for automatic nanopositioning,” IEEE/ASME Transactions on Mechatronics, vol. 17, no. 4, pp. 728–736, 2012.
- X. Zhang, Y. Fang, and X. Liu, “Motion-estimation-based visual servoing of nonholonomic mobile robots,” IEEE Transactions on Robotics, vol. 27, no. 6, pp. 1167–1175, 2011.
- M.-S. Wang and Y.-M. Tu, “Design and implementation of a stair-climbing robot,” in Proceedings of the IEEE International Conference on Advanced Robotics and Its Social Impacts, Taipei, Taiwan, August 2008.
- M.-S. Wang, Y.-S. Kung, and Y.-M. Tu, “Fuzzy logic control design for a Stair-climbing robot,” International Journal of Fuzzy Systems, vol. 11, no. 3, pp. 174–182, 2009.
- Terasic Company, THDB-D5M Hardware Specification, 2008.
- Terasic Company, TRDB-D5M User Guide, 2008.
- A. K. Benkhalil, S. S. Sipson, and W. Booth, “Real-time detection and tracking of a moving object using a complex programmable logic device,” in Proceedings of the IEE Colloquium on Target Tracking and Data Fusion, pp. 1–7, Birmingham, UK, June 1998.
- T. Hamamoto, S. Nagao, and K. Aizawa, “Real-time objects tracking by using smart image sensor and FPGA,” in Proceedings of the International Conference on Image Processing (ICIP '02), pp. III/441–III/444, New York, NY, USA, September 2002.
- S.-B. Park, A. Teuner, and B. J. Hosticka, “A motion detection system based on a CMOS photo sensor array,” in Proceedings of the International Conference on Image Processing, vol. 3, pp. 967–971, Chicago, Ill, USA, October 1998.
Copyright © 2013 Ming-Shyan Wang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.