
Movement prediction for a lower limb exoskeleton using a conditional restricted Boltzmann machine

Published online by Cambridge University Press: 28 November 2016

Eunsuk Chong
Robotics Laboratory, Seoul National University, Seoul 08826, Korea. E-mail: [email protected]

F. C. Park (corresponding author)
Robotics Laboratory, Seoul National University, Seoul 08826, Korea. E-mail: [email protected]

Summary

We propose a novel class of unsupervised learning algorithms, extending the conditional restricted Boltzmann machine, that predicts in real time a lower limb exoskeleton wearer's intended movement type and future trajectory. During training, the algorithm automatically clusters unlabeled exoskeleton measurement data into movement types. The trained predictor then takes as input a short time series of recent measurements and outputs, in real time, both the movement type and the forward trajectory time series. Physical experiments with a prototype exoskeleton demonstrate that our method predicts both the movement type and the forward trajectory more accurately and stably than existing methods.
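To make the model family concrete, the following is a minimal, hypothetical sketch of a conditional restricted Boltzmann machine (after Taylor et al.) with Gaussian visible units: the hidden and visible biases are conditioned on a short history window of past frames, the weights are trained with one-step contrastive divergence (CD-1), and one-step-ahead trajectory prediction is done with a short mean-field pass. All class names, hyperparameters, and the toy sine-wave "joint angle" data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class CRBM:
    """Minimal conditional RBM with Gaussian visible units (unit variance).

    Hypothetical sketch of the CRBM family this paper builds on; the
    autoregressive weights A and B condition the visible and hidden
    biases on a window of `order` past frames.
    """

    def __init__(self, n_vis, n_hid, order):
        self.n_vis, self.n_hid, self.order = n_vis, n_hid, order
        n_hist = n_vis * order
        self.W = 0.01 * rng.standard_normal((n_vis, n_hid))
        self.A = 0.01 * rng.standard_normal((n_hist, n_vis))  # history -> visible
        self.B = 0.01 * rng.standard_normal((n_hist, n_hid))  # history -> hidden
        self.a = np.zeros(n_vis)  # visible bias
        self.b = np.zeros(n_hid)  # hidden bias

    def hid_prob(self, v, hist):
        return sigmoid(v @ self.W + hist @ self.B + self.b)

    def vis_mean(self, h, hist):
        return h @ self.W.T + hist @ self.A + self.a

    def cd1_step(self, v, hist, lr=1e-3):
        # Positive phase: hidden probabilities given the data.
        h0 = self.hid_prob(v, hist)
        # Negative phase: one Gibbs step (sample hiddens, reconstruct visibles).
        h_sample = (rng.random(h0.shape) < h0).astype(float)
        v1 = self.vis_mean(h_sample, hist)
        h1 = self.hid_prob(v1, hist)
        # CD-1 gradient updates, averaged over the batch.
        n = len(v)
        self.W += lr * (v.T @ h0 - v1.T @ h1) / n
        self.A += lr * (hist.T @ (v - v1)) / n
        self.B += lr * (hist.T @ (h0 - h1)) / n
        self.a += lr * (v - v1).mean(0)
        self.b += lr * (h0 - h1).mean(0)

    def predict(self, hist):
        # One-step-ahead prediction via a short mean-field pass,
        # initialized from the most recent frame in the history window.
        v = hist[:, -self.n_vis :].copy()
        for _ in range(10):
            h = self.hid_prob(v, hist)
            v = self.vis_mean(h, hist)
        return v

# Toy data: a 2-D "joint angle" trajectory following a sine/cosine pair.
T, n_vis, order = 500, 2, 3
t = np.arange(T)
seq = np.stack([np.sin(0.1 * t), np.cos(0.1 * t)], axis=1)

# Each training case: history window (order frames, flattened) -> next frame.
hist = np.stack([seq[i : i + order].ravel() for i in range(T - order)])
target = seq[order:]

model = CRBM(n_vis=n_vis, n_hid=16, order=order)
for epoch in range(200):
    model.cd1_step(target, hist)

pred = model.predict(hist[:1])  # predicted frame after the first window
```

In the full method a bank of such models (one per discovered movement cluster) would score an incoming measurement window, yielding both a movement-type label and a forward trajectory; the sketch above shows only the single-model prediction step.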

Copyright © Cambridge University Press 2016

