
Comprehensive Review on Reaching and Grasping of Objects in Robotics

Published online by Cambridge University Press: 05 February 2021

Qaid Mohammed Marwan*
Affiliation: Faculty of Engineering and Technology, Multimedia University, Jalan Ayer Keroh Lama, 75470 Melaka, Malaysia
Shing Chyi Chua
Affiliation: Faculty of Engineering and Technology, Multimedia University, Jalan Ayer Keroh Lama, 75470 Melaka, Malaysia
Lee Chung Kwek
Affiliation: Faculty of Engineering and Technology, Multimedia University, Jalan Ayer Keroh Lama, 75470 Melaka, Malaysia
*Corresponding author. E-mail: [email protected]

Summary


Interaction between a robot and its environment requires perception of that environment, which allows the robot to decide on the object's type and location; the end effector is then brought to the object's location for grasping. Many studies have addressed the reaching and grasping of objects, using different techniques and mechanisms to increase accuracy and robustness during these tasks. This paper therefore presents an extensive review of research directions and topics across different approaches, such as sensing, learning and gripping, implemented within the past five years.
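To make the surveyed perceive-decide-reach-grasp pipeline concrete, the following minimal Python sketch illustrates the loop described above. Every identifier in it (Detection, detect_object, move_to, close_gripper) is a hypothetical placeholder introduced for illustration, not an API from any work cited in this review.

```python
# Minimal sketch of the generic reach-and-grasp pipeline discussed above.
# All names here are hypothetical placeholders; perception, motion control
# and gripper control are deliberately stubbed out.

from dataclasses import dataclass
from typing import Tuple

@dataclass
class Detection:
    label: str                             # object type decided from perception
    position: Tuple[float, float, float]   # object location in the robot frame

def detect_object(rgb, depth) -> Detection:
    """Perception step: classify the object and estimate its 3D location."""
    raise NotImplementedError  # e.g. a learned detector plus a depth lookup

def reach_and_grasp(robot, rgb, depth) -> bool:
    """Bring the end effector to the detected object, then close the gripper."""
    target = detect_object(rgb, depth)   # decide object type and location
    robot.move_to(target.position)       # reaching: servo the end effector
    return robot.close_gripper()         # grasping: True if the grasp holds
```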

Type: Article
Copyright: © The Author(s), 2021. Published by Cambridge University Press

References

Garcia, G. J., Corrales, J. A., Pomares, J. and Torres, F., "Survey of visual and force/tactile control of robots for physical interaction in Spain," Sensors 9(12), 9689–9733 (2009).
Chi, C., Sun, X., Xue, N., Li, T. and Liu, C., "Recent progress in technologies for tactile sensors," Sensors 18(4), 3240 (2018).
Zou, L., Ge, C., Wang, Z. J., Cretu, E. and Li, X., "Novel tactile sensor technology and smart tactile sensing systems: A review," Sensors 17(11), 1–24 (2017).
Yamaguchi, A. and Atkeson, C. G., "Recent progress in tactile sensing and sensors for robotic manipulation: Can we turn tactile sensing into vision?," Adv. Robot. 33(14), 661–673 (2019).
Pierson, H. A. and Gashler, M. S., "Deep learning in robotics: A review of recent research," Adv. Robot. 31(16), 821–835 (2017).
Caldera, S., Rassau, A. and Chai, D., "Review of deep learning methods in robotic grasp detection," Multimodal Tech. Interaction 2(57), 1–24 (2018).
Li, Y., "Deep reinforcement learning: An overview," arXiv:1701.07274, 1–85 (2017).
Mohammed, M. Q., Chung, K. L. and Chyi, C. S., "Review of deep reinforcement learning-based object grasping: Techniques, open challenges, and recommendations," IEEE Access 8, 178450–178481 (2020).
Ersen, B. M., Oztop, E. and Sariel, S., "Cognition-enabled robot manipulation in human environments: Requirements, recent work and open problems," IEEE Robot. Autom. Mag. 24(3), 108–122 (2017).
Kappassov, Z., Corrales, J. A. and Perdereau, V., "Tactile sensing in dexterous robot hands - review," Rob. Auton. Syst. 74(Part A), 195–220 (2015).
Saudabayev, A. and Varol, H. A., "Sensors for robotic hands: A survey of state of the art," IEEE Access 3, 1765–1782 (2015).
Li, Y., Krahn, J. and Menon, C., "Bioinspired dry adhesive materials and their application in robotics: A review," J. Bionic. Eng. 13(2), 181–199 (2016).
Gorissen, B., Reynaerts, D., Konishi, S., Yoshida, K., Kim, J. W. and De Volder, M., "Elastic inflatable actuators for soft robotic applications," Adv. Mater. 29(43), 1–14 (2017).
Polygerinos, P., Correll, N., Morin, S. A., Mosadegh, B., Onal, C. D., Petersen, K., Cianchetti, M., Tolley, M. T. and Shepherd, R. F., "Soft robotics: Review of fluid-driven intrinsically soft devices; manufacturing, sensing, control and applications in human-robot interaction," Adv. Eng. Mater. 19(12), 1–22 (2017).
Shintake, J., Cacucciolo, V., Floreano, D. and Shea, H., "Soft robotic grippers," Adv. Mater. 30(29), 1707035 (2018).
Hutchinson, S., Hager, G. D. and Corke, P. I., "A tutorial on visual servo control," IEEE Trans. Robot. Autom. 12(5), 651–670 (1996).
Marjanovic, M., Scassellati, B. and Williamson, M., "Self-Taught Visually-Guided Pointing for a Humanoid Robot," Proceedings of the IEEE International Conference in Simulation of Adaptive Behavior (1996) pp. 35–44.
Gaskett, C. and Cheng, G., "Online Learning of a Motor Map for Humanoid Robot Reaching," Proceedings of the 2nd International Conference on Computational Intelligence, Robotics and Autonomous Systems (2003) pp. 1–6.
Nori, F., Natale, L., Sandini, G. and Metta, G., "Autonomous Learning of 3D Reaching in a Humanoid Robot," Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (2007) pp. 1142–1147.
Hersch, M., Sauser, E. and Billard, A., "Online learning of the body schema," Int. J. Humanoid Robot. 5(2), 161–181 (2008).
Jamone, L. and Damas, B., "Incremental development of multiple tool models for robotic reaching through autonomous exploration," J. Behav. Robot. 3(3), 113–127 (2012).
Jamone, L., Natale, L., Nori, F., Metta, G. and Sandini, G., "Autonomous online learning of reaching behavior in a humanoid robot," Int. J. Humanoid Robot. 9(3), 1250017 (2012).
Jamone, L., Natale, L., Sandini, G. and Takanishi, A., "Interactive Online Learning of the Kinematic Workspace of a Humanoid Robot," Proceedings of the IEEE International Conference on Intelligent Robots and Systems (2015) pp. 2606–2612.
Truax, R., Platt, R. and Leonard, J., "Using Prioritized Relaxations to Locate Objects in Point Clouds for Manipulation," Proceedings of the IEEE International Conference on Robotics and Automation (2011) pp. 2091–2097.
Steder, B., Rusu, R. B., Konolige, K. and Burgard, W., "Point Feature Extraction on 3D Range Scans Taking into Account Object Boundaries," Proceedings of the IEEE International Conference on Robotics and Automation (2011) pp. 2601–2608.
Surmann, H., Nüchter, A. and Hertzberg, J., "An autonomous mobile robot with a 3D laser range finder for 3D exploration and digitalization of indoor environments," Rob. Auton. Syst. 45(3–4), 181–198 (2003).
Zhuang, Y., Jiang, N., Hu, H. and Yan, F., "3-D-laser-based scene measurement and place recognition for mobile robots in dynamic indoor environments," IEEE Trans. Instrum. Meas. 62(2), 438–450 (2013).
Zhuang, Y., Lin, X., Hu, H. and Guo, G., "Using scale coordination and semantic information for robust 3-D object recognition by a service robot," IEEE Sens. J. 15(1), 37–47 (2015).
Lee, M., Law, J., Shaw, P. and Sheldon, M., "An Infant Inspired Model of Reaching for a Humanoid Robot," Proceedings of the IEEE International Conference on Development and Learning and Epigenetic Robotics (2012) pp. 1–6.
Chao, F., Lee, M. H., Jiang, M. and Zhou, C., "An infant development-inspired approach to robot hand-eye coordination," Int. J. Adv. Robot. Syst. 11(15), 1–14 (2014).
Chao, F., Wang, Z., Shang, C., Meng, Q., Jiang, M., Zhou, C. and Shen, Q., "A developmental approach to robotic pointing via human-robot interaction," Inf. Sci. 283, 288–303 (2014).
Brand, M., Jamone, L., Kryczka, P., Endo, N., Hashimoto, K. and Takanishi, A., "Reaching for the Unreachable: Integration of Locomotion and Whole-Body Movements for Extended Visually Guided Reaching," Proceedings of the 13th IEEE-RAS International Conference on Humanoid Robots (2013) pp. 28–33.
Nguyen, H. and Kemp, C. C., "Autonomously learning to visually detect where manipulation will succeed," Auton. Robots 36(1–2), 137–152 (2014).
Höfer, S., Lang, T. and Brock, O., "Extracting Kinematic Background Knowledge from Interactions Using Task-Sensitive Relational Learning," Proceedings of the IEEE International Conference on Robotics and Automation (2014) pp. 4342–4347.
Vicente, P., Ferreira, R., Jamone, L. and Bernardino, A., "Eye-Hand Online Adaptation During Reaching Tasks in a Humanoid Robot," Proceedings of the IEEE International Conference on Development and Learning and on Epigenetic Robotics (2015) pp. 175–180.
Vicente, P., Ferreira, R., Jamone, L. and Bernardino, A., "GPU-Enabled Particle-Based Optimisation for Robotic-Hand Pose Estimation and Self-Calibration," Proceedings of the IEEE International Conference on Autonomous Robot Systems and Competitions (2015) pp. 3–8.
Bhattacharjee, T., Shenoi, A. A., Park, D., Rehg, J. M. and Kemp, C. C., "Combining Tactile Sensing and Vision for Rapid Haptic Mapping," Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (2015) pp. 1200–1207.
Chao, F., Zhu, Z., Lin, C. and Hu, H., "Enhanced robotic hand-eye coordination inspired from human-like behavioral patterns," IEEE Trans. Cogn. Dev. Syst., 1–12 (2016).
Fantacci, C., Vezzani, G., Pattacini, U., Tikhanoff, V. and Natale, L., "Markerless Visual Servoing on Unknown Objects for Humanoid Robot Platforms," Proceedings of the IEEE International Conference on Robotics and Automation (2018) pp. 3099–3106.
Luo, D., Hu, F., Zhang, T., Deng, Y. and Wu, X., "How does a robot develop its reaching ability like human infants do?," IEEE Trans. Cogn. Dev. Syst. 10(3), 795–809 (2018).
Sundermeyer, M., Marton, Z.-C., Durner, M., Brucker, M. and Triebel, R., "Implicit 3D Orientation Learning for 6D Object Detection from RGB Images," Proceedings of the European Conference on Computer Vision (2018) pp. 712–729.
Bousmalis, K., Irpan, A., Wohlhart, P., Bai, Y., Kelcey, M., Kalakrishnan, M., Ibarz, J., Pastor, P., Konolige, K., Levine, S. and Vanhoucke, V., "Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping," Proceedings of the IEEE International Conference on Robotics and Automation (2017) pp. 4243–4250.
Fang, K., Bai, Y., Hinterstoisser, S., Savarese, S. and Kalakrishnan, M., "Multi-Task Domain Adaptation for Deep Learning of Instance Grasping from Simulation," Proceedings of the IEEE International Conference on Robotics and Automation (2017) pp. 3516–3523.
Jiang, T., Cheng, X., Cui, H., Shi, C. and Li, Y., "Dual-camera-based method for identification and location of scattered self-plugging rivets for robot grasping," Measurement 134, 688–697 (2019).
Zhong, M., Zhang, Y., Yang, X., Yao, Y., Guo, J., Wang, Y. and Liu, Y., "Assistive grasping based on laser-point detection with application to wheelchair-mounted robotic arms," Sensors 19(2), 303 (2019).
Lin, C. M., Tsai, C. Y., Lai, Y. C., Li, S. A. and Wong, C. C., "Visual object recognition and pose estimation based on a deep semantic segmentation network," IEEE Sens. J. 18(22), 9370–9381 (2018).
Chen, J., Hao, Y., Zhang, S., Sun, G., Xu, K., Chen, W. and Zheng, X., "An automated behavioral apparatus to combine parameterized reaching and grasping movements in 3D space," J. Neurosci. Methods 312, 139–147 (2019).
Carey, D. P., "Eye–hand coordination: Eye to hand or hand to eye?," Curr. Biol. 10(11), R416–R419 (2000).
Muis, A. and Ohnishi, K., "Eye-to-hand approach on eye-in-hand configuration within real-time visual servoing," IEEE/ASME Trans. Mechatron. 10(4), 404–410 (2005).
Chang, W. C. and Wu, C. H., "Eye-in-hand vision-based robotic bin-picking with active laser projection," Int. J. Adv. Manuf. Technol. 85(9–12), 2873–2885 (2016).
Ghandi, Y. and Davoudi, M., "Visually guided manipulator based on artificial neural networks," IETE J. Res. 65(2), 275–283 (2018).
Kim, D.-J., Lovelett, R. and Behal, A., "Eye-in-Hand Stereo Visual Servoing of an Assistive Robot Arm in Unstructured Environments," Proceedings of the IEEE International Conference on Robotics and Automation (2009) pp. 2326–2331.
Fanello, S. R., Pattacini, U., Gori, I., Tikhanoff, V., Randazzo, M., Roncone, A., Odone, F. and Metta, G., "3D Stereo Estimation and Fully Automated Learning of Eye-Hand Coordination in Humanoid Robots," Proceedings of the 14th IEEE-RAS International Conference on Humanoid Robots (2015) pp. 1028–1035.
Mohebbi, A., Keshmiri, M. and Xie, W. F., "An Eye-in-Hand Stereo Visual Servoing for Tracking and Catching Moving Objects," Proceedings of the 33rd Chinese Control Conference (2014) pp. 8570–8577.
Ambrosini, E., Pezzulo, G. and Costantini, M., "The eye in hand: Predicting others' behavior by integrating multiple sources of information," J. Neurophysiol. 113(7), 2271–2279 (2015).
Assa, A. and Janabi-Sharifi, F., "Virtual visual servoing for multicamera pose estimation," IEEE/ASME Trans. Mechatron. 20(2), 789–798 (2015).
Barth, R., Hemming, J. and van Henten, E. J., "Design of an eye-in-hand sensing and servo control framework for harvesting robotics in dense vegetation," Biosyst. Eng. 146, 71–84 (2016).
Xu, H., Wang, H. and Chen, W., "Uncalibrated Visual Servoing of Mobile Manipulators with an Eye-to-Hand Camera," Proceedings of the IEEE International Conference on Robotics and Biomimetics (2016) pp. 2145–2150.
Taryudi and Wang, M. S., "Eye to hand calibration using ANFIS for stereo vision-based object manipulation system," Microsyst. Technol. 24(1), 305–317 (2018).
Pandya, H., Shah, S. V., Krishna, K. M., Gaud, A. and Mithun, P., "Image Based Visual Servoing for Tumbling Objects," Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (2019) pp. 2901–2908.
Murakami, K., Huang, S., Sumi, H., Ishikawa, M. and Yamakawa, Y., "Towel-Like Object Alignment with Human-Robot Cooperation and High-Speed Robotic Manipulation," Proceedings of the IEEE International Conference on Robotics and Biomimetics (2018) pp. 772–777.
Nair, R. R., Subramanian, V. K., Sharma, R. S., Agrawal, P. and Behera, L., "Robust hybrid visual servoing using reinforcement learning and finite-time adaptive FOSMC," IEEE Syst. J. 13(3), 3467–3478 (2018).
Erden, M. S., Ardón, P. and Dragone, M., "Reaching and Grasping of Objects by Humanoid Robots Through Visual Servoing," Proceedings of the International Conference on Human Haptic Sensing and Touch Enabled Computer Applications (2018) pp. 353–365.
Nicolis, D., Palumbo, M., Zanchettin, A. M. and Rocco, P., "Occlusion-free visual servoing for the shared autonomy teleoperation of dual-arm robots," IEEE Robot. Autom. Lett. 3(2), 796–803 (2018).
Morrison, D., Corke, P. and Leitner, J., "Multi-View Picking: Next-Best-View Reaching for Improved Grasping in Clutter," arXiv:1809.08564v1, 1–7 (2018).
He, Z., Wu, C., Zhang, S. and Zhao, X., "Moment-based 2 1/2 D visual servoing for texture-less planar part grasping," IEEE Trans. Ind. Electron. 66(10), 7821–7830 (2018).
Lippiello, V., Fontanelli, G. A. and Ruggiero, F., "Image-based visual-impedance control of a dual-arm aerial manipulator," IEEE Robot. Autom. Lett. 3(3), 1856–1863 (2018).
Liang, X., Wang, H., Liu, Y. H., Chen, W. and Jing, Z., "Image-based position control of mobile robots with a completely unknown fixed camera," IEEE Trans. Autom. Control 63(9), 3016–3023 (2018).
Tsay, T. I. J. and Hung, P. J., "Behavioristic image-based pose control of mobile manipulators using an uncalibrated eye-in-hand vision system," Artif. Life Robot. 23(1), 94–102 (2018).
Zhang, Y., Hua, C., Li, Y. and Guan, X., "Adaptive neural networks-based visual servoing control for manipulator with visibility constraint and dead-zone input," Neurocomputing 332, 44–55 (2019).
Durovic, P. and Cupec, R., "Active Vision for Low Cost SCARA Robots Using RGB-D Camera," Proceedings of the Zooming Innovation in Consumer Technologies Conference (2018) pp. 84–87.
Durović, P., Grbić, R. and Cupec, R., "Visual servoing for low-cost SCARA robots using an RGB-D camera as the only sensor," Automatika 58(4), 495–505 (2018).
Vicente, P., Jamone, L. and Bernardino, A., "Towards Markerless Visual Servoing of Grasping Tasks for Humanoid Robots," Proceedings of the IEEE International Conference on Robotics and Automation (2017) pp. 3811–3816.
Yan, M., Frosio, I., Tyree, S. and Kautz, J., "Sim-to-Real Transfer of Accurate Grasping with Eye-In-Hand Observations and Continuous Control," arXiv:1712.03303v2, 1–10 (2017).
Lederman, S. J. and Klatzky, R. L., "Haptic perception: A tutorial," Attention Percept. Psychophys. 71(7), 1233–1240 (2009).
Luo, S., Bimbo, J., Dahiya, R. and Liu, H., "Robotic tactile perception of object properties: A review," Mechatronics 48, 54–67 (2017).
Chitta, S., Sturm, J., Piccoli, M. and Burgard, W., "Tactile sensing for mobile manipulation," IEEE Trans. Robot. 27(3), 558–568 (2011).
Spiers, A. J., Liarokapis, M. V., Calli, B. and Dollar, A. M., "Single-grasp object classification and feature extraction with simple robot hands and tactile sensors," IEEE Trans. Haptics 9(2), 207–220 (2016).
Liu, H., Guo, D. and Sun, F., "Object recognition using tactile measurements: Kernel sparse coding methods," IEEE Trans. Instrum. Meas. 65(3), 656–665 (2016).
Song, S. K., Park, J. B. and Choi, Y. H., "Dual-fingered stable grasping control for an optimal force angle," IEEE Trans. Robot. 28(1), 256–262 (2012).
Ward-Cherrier, B., Rojas, N. and Lepora, N. F., "Model-free precise in-hand manipulation with a 3D-printed tactile gripper," IEEE Robot. Autom. Lett. 2(4), 2056–2063 (2017).
Dollar, A. M., Jentoft, L. P., Gao, J. H. and Howe, R. D., "Contact sensing and grasping performance of compliant hands," Auton. Robots 28(1), 65–75 (2010).
Suwanratchatamanee, K., Matsumoto, M. and Hashimoto, S., "Robotic tactile sensor system and applications," IEEE Trans. Ind. Electron. 57(3), 1074–1087 (2010).
Romano, J. M., Hsiao, K., Niemeyer, G., Chitta, S. and Kuchenbecker, K. J., "Human-inspired robotic grasp control with tactile sensing," IEEE Trans. Robot. 27(6), 1067–1079 (2011).
Dang, H. and Allen, P. K., "Stable grasping under pose uncertainty using tactile feedback," Auton. Robots 36(4), 309–330 (2014).
Walker, J. M., Blank, A. A., Shewokis, P. A. and O'Malley, M. K., "Tactile feedback of object slip facilitates virtual object manipulation," IEEE Trans. Haptics 8(4), 454–466 (2015).
Xu, H., Zhang, D., Huegel, J. C., Xu, W. and Zhu, X., "Effects of different tactile feedback on myoelectric closed-loop control for grasping based on electrotactile stimulation," IEEE Trans. Neural Syst. Rehabil. Eng. 24(8), 827–836 (2016).
Sommer, N. and Billard, A., "Multi-contact haptic exploration and grasping with tactile sensors," Rob. Auton. Syst. 85, 48–61 (2016).
Montaño, A. and Suárez, R., "Manipulation of unknown objects to improve the grasp quality using tactile information," Sensors 18(5), 1412 (2018).
Bhattacharjee, T., Clever, H. M., Wade, J. and Kemp, C. C., "Multimodal tactile perception of objects in a real home," IEEE Robot. Autom. Lett. 3(3), 2523–2530 (2018).
Lepora, N. F., Church, A., De Kerckhove, C., Hadsell, R. and Lloyd, J., "From pixels to percepts: Highly robust edge perception and contour following using deep learning and an optical biomimetic tactile sensor," IEEE Robot. Autom. Lett. 4(2), 2101–2107 (2018).
Abdullah, S. C., Yussof, H., Wada, J. and Ohka, M., "Object Exploration Algorithm Based on Three-Axis Tactile Data (Invited Paper)," Proceedings of the 4th International Conference on Mathematical Modelling and Computer Simulation (2010) pp. 158–163.
Ratnasingam, S. and McGinnity, T. M., "Object Recognition Based on Tactile Form Perception," Proceedings of the IEEE Workshop on Robotic Intelligence in Informationally Structured Space (2011) pp. 26–31.
Vezzani, G., Jamali, N., Pattacini, U., Battistelli, G., Chisci, L. and Natale, L., "A Novel Bayesian Filtering Approach to Tactile Object Recognition," Proceedings of the IEEE-RAS International Conference on Humanoid Robots (2016) pp. 256–263.
Luo, S., Mou, W., Althoefer, K. and Liu, H., "Iterative Closest Labeled Point for Tactile Object Shape Recognition," Proceedings of the IEEE International Conference on Intelligent Robots and Systems (2016) pp. 3137–3142.
Pedneault, N. and Cretu, A. M., "3D Object Recognition from Tactile Data Acquired at Salient Points," Proceedings of the 5th IEEE International Symposium on Robotics and Intelligent Sensors (2018) pp. 150–155.
Funabashi, S., Morikuni, S., Sugano, S., Schmitz, A., Ogasa, S. and Tomo, T. P., "Object Recognition Through Active Exploration Using a Multi-Fingered Robot Hand with 3D Tactile Sensors," Proceedings of the Annual Conference on Robotics and Mechatronics (2018) pp. 2589–2595.
Payeur, P., Pasca, C., Cretu, A. and Petriu, E. M., "Intelligent haptic sensor system for robotic manipulation," IEEE Trans. Instrum. Meas. 54(4), 1583–1592 (2004).
De Oliveira, T. E. A., Cretu, A. M., Da Fonseca, V. P. and Petriu, E. M., "Touch sensing for humanoid robots," IEEE Inst. Meas. Mag. 18(5), 13–19 (2015).
Alves De Oliveira, T. E., Cretu, A. M. and Petriu, E. M., "Multimodal bio-inspired tactile sensing module," IEEE Sens. J. 17(11), 3231–3243 (2017).
Kucherhan, D. J., Goubran, M., Da Fonseca, V. P., Alves De Oliveira, T. E., Petriu, E. M. and Groza, V., "Object Recognition Through Manipulation Using Tactile Enabled Prosthetic Fingers and Feedback Glove - Experimental Study," Proceedings of the IEEE International Symposium on Medical Measurements and Applications (2018) pp. 1–6.
Dahiya, R. S., Mittendorfer, P., Valle, M., Cheng, G. and Lumelsky, V. J., "Directions towards effective utilisation of tactile skin: A review," IEEE Sens. J. 13(11), 4121–4138 (2013).
Yousef, H., Boukallel, M. and Althoefer, K., "Tactile sensing for dexterous in-hand manipulation in robotics - A review," Sensors Actuators Phys. 167(2), 171–187 (2011).
Klatzky, R. L. and Lederman, S. J., "Haptic object perception: Spatial dimensionality and relation to vision," Philos. Trans. R. Soc. B Biol. Sci. 366(1581), 3097–3105 (2011).
Bullock, I. M. and Dollar, A. M., "Classifying Human Manipulation Behavior," Proceedings of the IEEE International Conference on Rehabilitation Robotics (2011) pp. 1–6.
Brown, J. N. A., "'Once more, with feeling': Using haptics to preserve tactile memories," Int. J. Hum. Comput. Interact. 31(1), 65–71 (2015).
Yang, J., Liu, H., Sun, F. and Gao, M., "Object Recognition Using Tactile and Image Information," Proceedings of the IEEE International Conference on Robotics and Biomimetics (2015) pp. 1746–1751.
Liu, H., Yu, Y., Sun, F. and Gu, J., "Visual-tactile fusion for object recognition," IEEE Trans. Autom. Sci. Eng. 14(2), 996–1008 (2017).
Kolycheva née Nikandrova, E. and Kyrki, V., "Task-Specific Grasping of Similar Objects by Probabilistic Fusion of Vision and Tactile Measurements," Proceedings of the IEEE-RAS International Conference on Humanoid Robots (2015) pp. 704–710.
Yamaguchi, A. and Atkeson, C. G., "Combining Finger Vision and Optical Tactile Sensing: Reducing and Handling Errors While Cutting Vegetables," Proceedings of the IEEE-RAS International Conference on Humanoid Robots (2016) pp. 1045–1051.
Izatt, G., Mirano, G., Adelson, E. and Tedrake, R., "Tracking Objects with Point Clouds from Vision and Touch," Proceedings of the IEEE International Conference on Robotics and Automation (2017) pp. 4000–4007.
Wang, S., Wu, J., Sun, X., Yuan, W., Freeman, W. T., Tenenbaum, J. B. and Adelson, E. H., "3D Shape Perception from Monocular Vision, Touch and Shape Priors," Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (2018) pp. 1606–1613.
Takahashi, K. and Tan, J., "Deep Visuo-Tactile Learning: Estimation of Tactile Properties from Images," arXiv:1803.03435v2, 1–7 (2018).
Li, Q., Uckermann, A., Haschke, R. and Ritter, H., "Estimating an Articulated Tool's Kinematics via Visuo-Tactile Based Robotic Interactive Manipulation," Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (2018) pp. 6938–6944.
Lee, M. A., Zhu, Y., Srinivasan, K., Shah, P., Savarese, S., Fei-Fei, L., Garg, A. and Bohg, J., "Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact-Rich Tasks," Proceedings of the IEEE International Conference on Robotics and Automation (2019) pp. 8943–8950.
Calandra, R., Owens, A., Jayaraman, D., Lin, J., Yuan, W., Malik, J., Adelson, E. H. and Levine, S., "More than a feeling: Learning to grasp and regrasp using vision and touch," IEEE Robot. Autom. Lett. 3(4), 3300–3307 (2018).
Luo, S., Yuan, W., Adelson, E., Cohn, A. G. and Fuentes, R., "ViTac: Feature Sharing between Vision and Tactile Sensing for Cloth Texture Recognition," Proceedings of the IEEE International Conference on Robotics and Automation (2018) pp. 2722–2727.
Fischinger, D., Vincze, M. and Jiang, Y., "Learning Grasps for Unknown Objects in Cluttered Scenes," Proceedings of the IEEE International Conference on Robotics and Automation (2013) pp. 609–616.
Gualtieri, M., Ten Pas, A., Saenko, K. and Platt, R., "High Precision Grasp Pose Detection in Dense Clutter," Proceedings of the IEEE International Conference on Intelligent Robots and Systems (2016) pp. 598–605.
Wada, K., Murooka, M., Okada, K. and Inaba, M., "3D Object Segmentation for Shelf Bin Picking by Humanoid with Deep Learning and Occupancy Voxel Grid Map," Proceedings of the IEEE-RAS International Conference on Humanoid Robots (2016) pp. 1149–1154.
Mahler, J. and Goldberg, K., "Learning Deep Policies for Robot Bin Picking by Simulating Robust Grasping Sequences," Proceedings of the 1st Annual Conference on Robot Learning (2017) pp. 515–524.
Wada, K., Okada, K. and Inaba, M., "Probabilistic 3D Multilabel Real-Time Mapping for Multi-Object Manipulation," Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (2017) pp. 5092–5099.
Pajarinen, J. and Kyrki, V., "Robotic manipulation of multiple objects as a POMDP," Artif. Intell. 247, 213–228 (2017).
Levine, S., Pastor, P., Krizhevsky, A., Ibarz, J. and Quillen, D., "Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection," Int. J. Rob. Res. 37, 421–436 (2017).
Ten Pas, A., Gualtieri, M., Saenko, K. and Platt, R., "Grasp pose detection in point clouds," Int. J. Rob. Res. 36(13–14), 1455–1473 (2017).
Herzog, A., Pastor, P., Kalakrishnan, M., Righetti, L., Asfour, T. and Schaal, S., "Template-Based Learning of Grasp Selection," Proceedings of the IEEE International Conference on Robotics and Automation (2012) pp. 2379–2384.
Herzog, A., Pastor, P., Kalakrishnan, M., Righetti, L., Bohg, J., Asfour, T. and Schaal, S., "Learning of grasp selection based on shape-templates," Auton. Robots 36(1–2), 51–65 (2014).
Kappler, D., Bohg, J. and Schaal, S., "Leveraging Big Data for Grasp Planning," Proceedings of the IEEE International Conference on Robotics and Automation (2015) pp. 4304–4311.
Ni, P., Zhang, W., Bai, W., Lin, M. and Cao, Q., "A new approach based on two-stream CNNs for novel objects grasping in clutter," J. Intell. Robot. Syst. 94(1), 1–17 (2018).
Panda, S., Hafez, A. H. A. and Jawahar, C. V., "Learning Support Order for Manipulation in Clutter," Proceedings of the IEEE International Conference on Intelligent Robots and Systems (2013) pp. 809–815.
Panda, S., Abdul Hafez, A. H. and Jawahar, C. V., "Single and multiple view support order prediction in clutter for manipulation," J. Intell. Robot. Syst. 83(2), 179–203 (2016).
Mojtahedzadeh, R., Bouguerra, A., Schaffernicht, E. and Lilienthal, A. J., "Support relation analysis and decision making for safe robotic manipulation tasks," Rob. Auton. Syst. 71, 99–117 (2015).
Kartmann, R., Paus, F., Grotz, M. and Asfour, T., "Extraction of physically plausible support relations to predict and validate manipulation action effects," IEEE Robot. Autom. Lett. 3(4), 3991–3998 (2018).
Zhang, H., Lan, X., Zhou, X., Tian, Z., Zhang, Y. and Zheng, N., "Visual Manipulation Relationship Network," Proceedings of the IEEE-RAS 18th International Conference on Humanoid Robots (2018) pp. 118–125.
Fichtl, S., Kraft, D., Krüger, N. and Guerin, F., "Bootstrapping relational affordances of object pairs using transfer," IEEE Trans. Cogn. Dev. Syst. 10(1), 56–71 (2018).
Schmidhuber, J., "Deep learning in neural networks: An overview," Neural Netw. 61, 85–117 (2015).
Nielsen, M. A., Neural Networks and Deep Learning (Determination Press, San Francisco, CA, USA, 2015).
Jordan, M. I. and Mitchell, T. M., "Machine learning: Trends, perspectives and prospects," Science 349(6245), 255–260 (2015).
LeCun, Y., Bengio, Y. and Hinton, G., "Deep learning," Nature 521(7553), 436–444 (2015).
Pinto, L. and Gupta, A., "Supersizing Self-Supervision: Learning to Grasp from 50K Tries and 700 Robot Hours," Proceedings of the IEEE International Conference on Robotics and Automation (2016) pp. 3406–3413.
Yang, P.-C., Sasaki, K., Suzuki, K., Kase, K., Sugano, S. and Ogata, T., "Repeatable folding task by humanoid robot worker using deep learning," IEEE Robot. Autom. Lett. 2(2), 397–403 (2017).
Hossain, D., Capi, G., Jindai, M. and Kaneko, S. I., "Pick-place of dynamic objects by robot manipulator based on deep learning and easy user interface teaching systems," Ind. Robot 44(1), 11–20 (2017).
Hossain, D., Capi, G. and Jindai, M., "Optimising deep learning parameters using genetic algorithm for object recognition and robot grasping," Adv. Robot. 32(20), 1090–1101 (2018).
Kumra, S. and Kanan, C., "Robotic Grasp Detection Using Deep Convolutional Neural Networks," Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (2017) pp. 769–776.
De Magistris, G., Munawar, A. and Vinayavekhin, P., "Teaching a Robot Pick and Place Task Using Recurrent Neural Network," Proceedings of ViEW 2016 (2016) pp. 1–4.
Hossain, D. and Capi, G., "Multiobjective Evolution for Deep Learning and its Robotic Applications," Proceedings of the 8th International Conference on Information, Intelligence, Systems and Applications (2017) pp. 1–6.
Hossain, D. and Capi, G., "Application of Deep Belief Neural Network for Robot Object Recognition and Grasping," Proceedings of the IEEJ International Workshop on Sensing, Actuation and Motion Control (2016) pp. 1–4.
Rahmatizadeh, R., Abolghasemi, P., Bölöni, L. and Levine, S., "Vision-Based Multi-Task Manipulation for Inexpensive Robots Using End-To-End Learning from Demonstration," Proceedings of the IEEE International Conference on Robotics and Automation (2017) pp. 3758–3765.
Calandra, R., Owens, A., Upadhyaya, M., Yuan, W., Lin, J., Adelson, E. H. and Levine, S., "The Feeling of Success: Does Touch Sensing Help Predict Grasp Outcomes?," Proceedings of the 1st Conference on Robot Learning (2017) pp. 1–10.
Chen, X. and Guhl, J., "Industrial robot control with object recognition based on deep learning," Procedia CIRP 76, 149–154 (2018).
Chu, F.-J., Xu, R. and Vela, P. A., "Real-world multi-object, multi-grasp detection," IEEE Robot. Autom. Lett. 3(4), 3355–3362 (2018).
Park, D., Seo, Y. and Chun, S. Y., "Real-Time, Highly Accurate Robotic Grasp Detection Using Fully Convolutional Neural Networks with High-Resolution Images," arXiv:1809.05828v1, 1–7 (2018).
Wang, S., Jiang, X., Zhao, J., Wang, X., Zhou, W. and Liu, Y., "Efficient Fully Convolution Neural Network for Generating Pixel Wise Robotic Grasps with High Resolution Images," arXiv:1902.08950v1, 1–7 (2019).
Tian, Z., Zhang, Y., Lan, X., Zhou, X., Zheng, N. and Zhang, H., "Fully Convolutional Grasp Detection Network with Oriented Anchor Box," Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (2019) pp. 7223–7230.
Zhang, H., Lan, X., Zhou, X. and Zheng, N., "RoI-Based Robotic Grasp Detection in Object Overlapping Scenes Using Convolutional Neural Network," arXiv:1808.10313v4, 1–8 (2018).
Chen, Y., Ma, Y., Kim, D. H. and Park, S.-K., "Region-based object recognition by color segmentation using a simplified PCNN," IEEE Trans. Neural Netw. Learn. Syst. 26(8), 1682–1697 (2015).
Zhang, H., Lan, X., Wan, L., Yang, C., Zhou, X. and Zheng, N., "RPRG: Towards Real-Time Robotic Perception, Reasoning and Grasping with One Multi-Task Convolutional Neural Network," arXiv:1809.07081v1, 1–7 (2018).
Asif, U., Bennamoun, M. and Sohel, F. A., "RGB-D object recognition and grasp detection using hierarchical cascaded forests," IEEE Trans. Robot. 33(3), 547–564 (2017).
Viereck, U., ten Pas, A., Saenko, K. and Platt, R., "Learning a Visuomotor Controller for Real World Robotic Grasping Using Simulated Depth Images," Proceedings of the 1st Conference on Robot Learning (2017) pp. 1–10.
Redmon, J. and Angelova, A., "Real-Time Grasp Detection Using Convolutional Neural Networks," Proceedings of the IEEE International Conference on Robotics and Automation (2015) pp. 1316–1322.
Wang, Z., Li, Z., Wang, B. and Liu, H., "Robot grasp detection using multimodal deep convolutional neural networks," Adv. Mech. Eng. 8(9), 1–12 (2016).
Mahler, J., Liang, J., Niyaz, S., Laskey, M., Doan, R., Liu, X., Ojea, J. A. and Goldberg, K., "Dex-Net 2.0: Deep Learning to Plan Robust Grasps with Synthetic Point Clouds and Analytic Grasp Metrics," arXiv:1703.09312v3 (2017).
Lenz, I., Lee, H. and Saxena, A., "Deep learning for detecting robotic grasps," Int. J. Rob. Res. 34(4–5), 705–724 (2015).
Qiu, Z., Zhuang, Y., Yan, F., Hu, H. and Wang, W., "RGB-D images and full convolution neural network-based outdoor scene understanding for mobile robots," IEEE Trans. Instrum. Meas. 68(1), 27–37 (2019).
Choi, C., Schwarting, W., DelPreto, J. and Rus, D., "Learning object grasping for soft robot hands," IEEE Robot. Autom. Lett. 3(3), 2370–2377 (2018).
Mar, T., Tikhanoff, V. and Natale, L., "What can I do with this tool? Self-supervised learning of tool affordances from their 3-D geometry," IEEE Trans. Cogn. Dev. Syst. 10(3), 595–610 (2018).
Mar, T., Tikhanoff, V., Metta, G. and Natale, L., "Self-Supervised Learning of Tool Affordances from 3D Tool Representation through Parallel SOM Mapping," Proceedings of the IEEE International Conference on Robotics and Automation (2017) pp. 894–901.
Gupta, A., Murali, A., Gandhi, D. and Pinto, L., "Robot Learning in Homes: Improving Generalization and Reducing Dataset Bias," arXiv:1807.07049v1, 1–10 (2018).
Van Molle, P., Verbelen, T., De Coninck, E., De Boom, C., Simoens, P. and Dhoedt, B., "Learning to Grasp from a Single Demonstration," arXiv:1806.03486v1, 1–10 (2018).
Zhao, T., Deng, M., Li, Z. and Hu, Y., "Cooperative manipulation for a mobile dual-arm robot using sequences of dynamic movement primitives," IEEE Trans. Cogn. Dev. Syst. 12(1), 18–29 (2018).
Krishnan, S., Garg, A., Liaw, R., Thananjeyan, B., Miller, L., Pokorny, F. T. and Goldberg, K., "SWIRL: A sequential windowed inverse reinforcement learning algorithm for robot tasks with delayed rewards," Int. J. Rob. Res. 38(2–3), 126–145 (2019).
François-Lavet, V., Henderson, P., Islam, R., Bellemare, M. G. and Pineau, J., "An introduction to deep reinforcement learning," Found. Trends Mach. Learn. 11(3–4), 219–354 (2018).
Lillicrap, T. P., Hunt, J. J., Pritzel, A., Heess, N., Erez, T., Tassa, Y., Silver, D. and Wierstra, D., "Continuous Control with Deep Reinforcement Learning," arXiv:1509.02971v5, 1–14 (2015).
Heess, N., Sriram, S., Lemmon, J., Merel, J., Wayne, G., Tassa, Y., Erez, T., Wang, Z., Eslami, S. M. A., Riedmiller, M. and Silver, D., "Emergence of Locomotion Behaviours in Rich Environments," arXiv:1707.02286v2, 1–14 (2017).
Schulman, J., Levine, S., Abbeel, P., Jordan, M. I. and Moritz, P., "Trust Region Policy Optimisation," Proceedings of the 31st International Conference on Machine Learning (2015) pp. 1889–1897.
Mnih, V., Mirza, M., Graves, A., Harley, T., Lillicrap, T. P. and Silver, D., "Asynchronous Methods for Deep Reinforcement Learning," Proceedings of the International Conference on Machine Learning (2016) pp. 1928–1937.
Martinez, R. V., Branch, J. L., Fish, C. R., Jin, L., Shepherd, R. F., Nunes, R. M. D., Suo, Z. and Whitesides, G. M., "Robotic tentacles with three-dimensional mobility based on flexible elastomers," Adv. Mater. 25(2), 205–212 (2012).
Stokes, A. A., Shepherd, R. F., Morin, S. A., Ilievski, F. and Whitesides, G. M., "A hybrid combining hard and soft robots," Soft Robot. 1(1), 70–74 (2014).
Bhagat, S. and Banerjee, H., "Deep reinforcement learning for soft, flexible robots: Brief review with impending challenges," Robotics 8(4), 1–36 (2019).
Zhou, J., Chen, S. and Wang, Z., "A soft-robotic gripper with enhanced object adaptation and grasping reliability," IEEE Robot. Autom. Lett. 2(4), 2287–2293 (2017).
Gualtieri, M. and Platt, R., "Learning 6-DoF Grasping and Pick-Place Using Attention Focus," arXiv:1806.06134v1 (2018).
Kalashnikov, D., Irpan, A., Pastor, P., Ibarz, J., Herzog, A., Jang, E., Quillen, D., Holly, E., Kalakrishnan, M., Vanhoucke, V. and Levine, S., "QT-Opt: Scalable Deep Reinforcement Learning for Vision-Based Robotic Manipulation," arXiv:1806.10293v3, 1–23 (2018).
Mohammed, M. Q., Chung, K. L. and Chyi, C. S., "Pick and place objects in a cluttered scene using deep reinforcement learning," Int. J. Mech. Mechatron. Eng. 20(4), 50–57 (2020).
Amend, J., Cheng, N., Fakhouri, S. and Culley, B., "Soft robotics commercialization: Jamming grippers from research to product," Soft Robot. 3(4), 213–222 (2016).
Okatani, Y., Nishida, T. and Tadakuma, K., "Development of universal robot gripper using MRα fluid," Int. J. Humanoid Robot. 13(4), 231–235 (2014).
Miriyev, A., Stack, K. and Lipson, H., "Soft material for soft actuators," Nat. Commun. 8(1), 1–8 (2017).
Graule, M. A., Chirarattananon, P., Fuller, S. B., Jafferis, N. T., Ma, K. Y., Spenko, M., Kornbluh, R. and Wood, R. J., "Perching and takeoff of a robotic insect on overhangs using switchable electrostatic adhesion," Science 352(6288), 978–982 (2016).
Chen, R., "A gecko-inspired electroadhesive wall-climbing robot," IEEE Potentials 34(2), 15–19 (2015).
Jiang, H., Hawkes, E. W., Fuller, C., Estrada, M. A., Suresh, S. A., Abcouwer, N., Han, A. K., Wang, S., Ploch, C. J., Parness, A. and Cutkosky, M. R., "A robotic device using gecko-inspired adhesives can grasp and manipulate large objects in microgravity," Sci. Robot. 2(7), 1–11 (2017).
Jeong, D. and Lee, K., "Design and analysis of an origami-based three-finger manipulator," Robotica 36(2), 261–274 (2018).
Yang, Y., Chen, Y., Li, Y., Chen, M. Z. Q. and Wei, Y., "Bioinspired robotic fingers based on pneumatic actuator and 3D printing of smart material," Soft Robot. 4(2), 147–162 (2017).
Poulin, A., Rosset, S. and Shea, H. R., "Printing low-voltage dielectric elastomer actuators," Appl. Phys. Lett. 107(24), 1–5 (2015). https://doi.org/10.1063/1.4937735.
Li, J., Liu, H. and Cai, H., "On computing three-finger force-closure grasps of 2-D and 3-D objects," IEEE Trans. Robot. Autom. 19(1), 155–161 (2003).
Roa, M. A. and Suárez, R., "Finding locally optimum force-closure grasps," Robot. Comput. Integr. Manuf. 25(3), 536–544 (2009).
Hawkes, E. W., Jiang, H. and Cutkosky, M. R., "Three-dimensional dynamic surface grasping with dry adhesion," Int. J. Rob. Res. 35(8), 943–958 (2015).
Kosuge, K., Lee, J., Ichinose, J. and Hirata, Y., "A Novel Grasping Mechanism for Flat-Shaped Objects Inspired by Lateral Grasp," Proceedings of the 2nd Biennial IEEE/RAS-EMBS International Conference on Biomedical Robotics and Biomechatronics (2008) pp. 282–288.
Davis, S., Gray, J. O. and Caldwell, D. G., "An end effector based on the Bernoulli principle for handling sliced fruit and vegetables," Robot. Comput. Integr. Manuf. 24(2), 249–257 (2008).
Xu, Z., Deyle, T. and Kemp, C. C., "1000 Trials: An Empirically Validated End Effector that Robustly Grasps Objects from the Floor," Proceedings of the IEEE International Conference on Robotics and Automation (2009) pp. 2160–2167.
Dogar, M. R. and Srinivasa, S. S., "Push-Grasping with Dexterous Hands: Mechanics and a Method," Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (2010) pp. 2123–2130.
Kappler, D., Chang, L. Y., Pollard, N. S., Asfour, T. and Dillmann, R., "Templates for pre-grasp sliding interactions," Rob. Auton. Syst. 60(3), 411–423 (2012).
Odhner, L. U., Ma, R. R. and Dollar, A. M., "Open-loop precision grasping with underactuated hands inspired by a human manipulation strategy," IEEE Trans. Autom. Sci. Eng. 10(3), 625–633 (2013).
Odhner, L., Ma, R. and Dollar, A., "Precision Grasping and Manipulation of Small Objects from Flat Surfaces Using Underactuated Fingers," Proceedings of the IEEE International Conference on Robotics and Automation (2012) pp. 2830–2835.
Amend, J. R., Brown, E., Rodenberg, N. and Jaeger, H. M., "A positive pressure universal gripper based on the jamming of granular material," IEEE Trans. Robot. 28(2), 341–350 (2012).
Kessens, C. C. and Desai, J. P., "A self-sealing suction cup array for grasping," J. Mech. Robot. 3(4), 045001 (2011).
Catalano, M. G., Grioli, G., Farnioli, E., Serio, A., Piazza, C. and Bicchi, A., "Adaptive synergies for the design and control of the Pisa/IIT SoftHand," Int. J. Rob. Res. 33(5), 768–782 (2014).
Odhner, L. U., Jentoft, L. P., Claffee, M. R., Corson, N., Tenzer, Y., Ma, R. R., Buehler, M., Kohout, R., Howe, R. D. and Dollar, A. M., "A compliant, underactuated hand for robust manipulation," Int. J. Rob. Res. 33(5), 736–752 (2014).
Valois, M. K. J., Bagnell, J. A. and Pollard, N., "Human-inspired force compliant grasping primitives," Auton. Robots 37(2), 209–225 (2014).
Ciocarlie, M., Hicks, F. M., Holmberg, R., Hawke, J., Schlicht, M., Gee, J., Stanford, S. and Bahadur, R., "The Velo gripper: A versatile single-actuator design for enveloping," Int. J. Rob. Res. 33(5), 753–767 (2014).
Eppner, C., Deimel, R., Alvarez-Ruiz, J., Maertens, M. and Brock, O., "Exploitation of environmental constraints in human and robotic grasping," Int. J. Rob. Res. 34(7), 1021–1038 (2015).
Shi, Q., Yu, Z., Wang, H., Sun, T., Huang, Q. and Fukuda, T., "Development of a highly compact microgripper capable of online calibration for multisized microobject manipulation," IEEE Trans. Nanotechnol. 17(4), 657–661 (2018).
Liang, C., Wang, F., Shi, B., Huo, Z., Zhou, K. and Tian, Y., "Design and control of a novel asymmetrical piezoelectric actuated microgripper for micromanipulation," Sensors Actuators Phys. 269, 227–237 (2018).
Wang, F., Shi, B., Tian, Y., Huo, Z., Zhao, X. and Zhang, D., "Design of a novel dual-axis micro manipulator with an asymmetric compliant structure," IEEE/ASME Trans. Mechatron. 24(2), 656–665 (2019).
Scheggi, S., Yoon, C., Ghosh, A., Gracias, D. H. and Misra, S., "A GPU-accelerated model-based tracker for untethered submillimeter grippers," Rob. Auton. Syst. 103, 111–121 (2018).
Yang, Y., Lou, J., Wu, G., Wei, Y. and Fu, L., "Design and position/force control of an S-shaped MFC microgripper," Sensors Actuators Phys. 282, 63–78 (2018).
Graña, M., Alonso, M. and Izaguirre, A., "A panoramic survey on grasping research trends and topics," Cybern. Syst. 50(1), 40–57 (2019).
Hasegawa, S., Wada, K., Niitani, Y., Okada, K. and Inaba, M., "A Three-Fingered Hand with a Suction Gripping System for Picking Various Objects in Cluttered Narrow Space," Proceedings of the IEEE International Conference on Intelligent Robots and Systems (2017) pp. 1164–1171.
Yamaguchi, K., Hirata, Y. and Kosuge, K., "Development of Robot Hand with Suction Mechanism for Robust and Dexterous Grasping," Proceedings of the IEEE International Conference on Intelligent Robots and Systems (2013) pp. 5500–5505.
Lévesque, F., Sauvet, B., Cardou, P. and Gosselin, C., "A model-based scooping grasp for the autonomous picking of unknown objects with a two-fingered gripper," Rob. Auton. Syst. 106, 14–25 (2018).
Babin, V., St-Onge, D. and Gosselin, C., "Stable and repeatable grasping of flat objects on hard surfaces using passive and epicyclic mechanisms," Robot. Comput. Integr. Manuf. 55, 1–10 (2019).
Babin, V. and Gosselin, C., "Picking, grasping, or scooping small objects lying on flat surfaces: A design approach," Int. J. Rob. Res. 37(12), 1484–1499 (2018).
Elgeneidy, K., Lohse, N., Rossiter, J., Xiang, C., Justham, L. and Guo, J., "Soft pneumatic grippers embedded with stretchable electroadhesion," Smart Mater. Struct. 27(5), 055006 (2018).
Hirano, D., Tanishima, N., Bylard, A. and Chen, T. G., "Underactuated Gecko Adhesive Gripper for Simple and Versatile Grasp," Proceedings of the IEEE International Conference on Robotics and Automation (ICRA) (2020) pp. 8964–8969. doi: 10.1109/ICRA40945.2020.9196806.
Ngo, T., Tran, H., Nguyen, T., Dao, T. and Wang, D., "Design and kinetostatic modeling of a compliant gripper for grasp and autonomous release of objects," Adv. Robot. 32(14), 717–735 (2018).
Huh, T. M., Liu, C., Hashizume, J., Chen, T. G., Suresh, S. A., Chang, F. K. and Cutkosky, M. R., "Active sensing for measuring contact of thin film gecko-inspired adhesives," IEEE Robot. Autom. Lett. 3(4), 3263–3270 (2018).
Hashizume, J., Huh, T. M., Suresh, S. A. and Cutkosky, M. R., "Capacitive sensing for a gripper with gecko-inspired adhesive film," IEEE Robot. Autom. Lett. 4(2), 677–683 (2019).
Hawkes, E. W., Jiang, H., Christensen, D. L., Han, A. K. and Cutkosky, M. R., "Grasping without squeezing: Design and modeling of shear-activated grippers," IEEE Trans. Robot. 34(2), 303–316 (2018).
Yoshikawa, T., "Multifingered robot hands: Control for grasping and manipulation," Ann. Rev. Control 34(2), 199–208 (2010).
Gong, D., He, R., Yu, J. and Zuo, G., "A pneumatic tactile sensor for co-operative robots," Sensors 17(11), 1–15 (2017).
Mnyusiwalla, H., Vulliez, P., Seguin, P., Gazeau, J. P. and Laguillaumie, P., "Focus on the mechatronics design of a new dexterous robotic hand for inside hand manipulation," Robotica 36(8), 1206–1224 (2018).
Chen, Y., Guo, S., Li, C., Yang, H. and Hao, L., "Size recognition and adaptive grasping using an integration of actuating and sensing soft pneumatic gripper," Rob. Auton. Syst. 104, 14–24 (2018).
Fan, S., Gu, H., Zhang, Y., Jin, M. and Liu, H., "Research on adaptive grasping with object pose uncertainty by multi-fingered robot hand," Int. J. Adv. Robot. Syst. 15(2), 1–16 (2018).
Garate, V. R., Pozzi, M., Tsagarakis, N., Ajoudani, A. and Prattichizzo, D., "Grasp stiffness control in robotic hands through coordinated optimisation of pose and joint stiffness," IEEE Robot. Autom. Lett. 3(4), 3952–3959 (2018).
Sarantopoulos, I. and Doulgeri, Z., "Human-inspired robotic grasping of flat objects," Rob. Auton. Syst. 108, 179–191 (2018).
Deimel, R. and Brock, O., "A novel type of compliant and underactuated robotic hand for dexterous grasping," Int. J. Rob. Res. 35(3), 161–185 (2016).
Crooks, W., Rogers, C., O'Sullivan, M., Vukasin, G. and Messner, W., "Fin Ray® effect inspired soft robotic gripper: From the RoboSoft grand challenge towards optimization," Front. Robot. AI 3, 1–9 (2016).
Crooks, W., Rozen-Levy, S., Trimmer, B., Rogers, C. and Messner, W., "Passive gripper inspired by Manduca sexta and the Fin Ray® effect," Int. J. Adv. Robot. Syst. 14(4), 1–7 (2017).
Gandarias, J. M., Gómez-de-Gabriel, J. M. and García-Cerezo, A. J., "Enhancing perception with tactile object recognition in adaptive grippers for human-robot interaction," Sensors 18(3), 1–20 (2018).
Nassour, J., Ghadiya, V., Hugel, V. and Hamker, F. H., "Design of New Sensory Soft Hand: Combining Air-Pump Actuation with Superimposed Curvature and Pressure Sensors," Proceedings of the IEEE International Conference on Soft Robotics (RoboSoft) (2018) pp. 164–169.
Zhang, J., Jackson, A., Mentzer, N. and Kramer, R., "A modular, reconfigurable mold for a soft robotic gripper design activity," Front. Robot. AI 4, 1–8 (2017).
Suresh, S. A., Glick, P., Ruffatto, D., Tolley, M. T., Cutkosky, M. and Parness, A., "A soft robotic gripper with gecko-inspired adhesive," IEEE Robot. Autom. Lett. 3(2), 903–910 (2018).
Terryn, S., Brancart, J., Lefeber, D., Van Assche, G. and Vanderborght, B., "Self-healing soft pneumatic robots," Sci. Robot. 2(9), 1–13 (2017).
Roberge, J.-P., Ruotolo, W., Duchaine, V. and Cutkosky, M., "Improving industrial grippers with adhesion-controlled friction," IEEE Robot. Autom. Lett. 3(2), 1041–1048 (2018).
Song, S., Drotlef, D.-M., Majidi, C. and Sitti, M., "Controllable load sharing for soft adhesive interfaces on three-dimensional surfaces," Proc. Natl. Acad. Sci. 114(22), E4344–E4353 (2017).
Bemelmans, R., Gelderblom, G. J., Jonker, P. and de Witte, L., "Socially assistive robots in elderly care: A systematic review into effects and effectiveness," J. Am. Med. Dir. Assoc. 13(2), 114–120 (2012).
Maymó, M. R., Shafti, A. and Faisal, A. A., "Fast Orient: Lightweight Computer Vision for Wrist Control in Assistive Robotic Grasping," arXiv:1807.08275v1, 1–6 (2018).
Al-Fahaam, H., Davis, S. and Nefti-Meziani, S., "The design and mathematical modelling of novel extensor bending pneumatic artificial muscles (EBPAMs) for soft exoskeletons," Rob. Auton. Syst. 99, 63–74 (2018).
Yi, J., Chen, X. and Wang, Z., "A three-dimensional-printed soft robotic glove with enhanced ergonomics and force capability," IEEE Robot. Autom. Lett. 3(1), 242–248 (2017).
Grioli, G., Bicchi, A., Felici, F., Catalano, M. G., Ciullo, A. S. and Ajoudani, A., "Analytical and experimental analysis for position optimisation of a grasp assistance supernumerary robotic hand," IEEE Robot. Autom. Lett. 3(4), 4305–4312 (2018).
Corbato, C. H., Bharatheesha, M., van Egmond, J., Ju, J. and Wisse, M., "Integrating different levels of automation: Lessons from winning the Amazon Robotics Challenge 2016," IEEE Trans. Ind. Inform. 14(11), 4916–4926 (2018).
Hodson, R., "A gripping problem," Nature 557(7704), S23–S25 (2018).
Kimmel, A., Shome, R., Littlefield, Z. and Bekris, K., "Fast, Anytime Motion Planning for Prehensile Manipulation in Clutter," Proceedings of the IEEE-RAS 18th International Conference on Humanoid Robots (2018) pp. 874–880.
Shi, J. and Koonjul, G. S., "Real-time grasping planning for robotic bin-picking and kitting applications," IEEE Trans. Autom. Sci. Eng. 14(2), 809–819 (2017).