Laxmisha Rai (Special session 17)

Invited Talk: Laxmisha Rai, Shandong University of Science and Technology

Special session 17: 5G and Artificial Intelligence


Short Bio: 
Laxmisha Rai received his Ph.D. degree in Information and Communication from Kyungpook National University, South Korea, in 2008. From 2008 to 2009, he worked as a Post-Doctoral Researcher at Soongsil University, South Korea. Dr. Rai has over 17 years of teaching, industry, and research experience in information technology (IT), communication, software engineering, and bilingual education in China, India, and South Korea. He is currently a Professor at the College of Electronic and Information Engineering, Shandong University of Science and Technology, Qingdao, China. His research interests include software engineering, real-time systems, embedded systems, autonomous mobile robots, expert systems, wireless sensor networks, MOOCs, and bilingual education. He has published over 60 research papers in peer-reviewed international conferences and journals, and is the author of two patents and three books. For his research, he has won three best paper awards at prestigious international conferences held in South Korea and the USA. He is a Senior Member of IEEE and a Member of ACM, and currently serves as an Associate Editor of IEEE Access.

Title: Knowledge-Based Systems and Machine Vision: Applications in Gesture Generation and Recognition

Abstract:
The first part of the talk describes experiments on generating intelligent gestures with a knowledge-based system (KBS). The Jess expert system is used as the knowledge-based tool, and different gestures are generated autonomously from different rules. The KBS guarantees the predictability of the behaviors and supports the analysis of non-determinism in robot operation. With this approach, any user can generate purposeful behaviors from simple facts and rules to demonstrate different robot action sequences. The second part of the talk describes experiments on a machine-vision-based hand gesture recognition system for parking applications, in which the strengths of image recognition are combined with an embedded control system. The system performs the image preprocessing and recognizes the gesture information; five different gestures are defined to control the vehicle's movements. A BP (backpropagation) neural network then produces the recognition results, and the embedded system converts them into control instructions that drive the vehicle's movement.
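
To make the rule-based idea concrete, the following is a minimal forward-chaining sketch in Python. It is illustrative only: the talk uses the Jess expert system, whose rules are written in its own CLIPS-like language, and the facts, rules, and gesture names below are hypothetical placeholders rather than the rules from the experiments.

```python
# Minimal forward-chaining sketch (illustrative; not the Jess rules from the talk).
# Facts are simple strings; each rule fires when its conditions are present
# and asserts a new fact, which here stands for a robot gesture in the sequence.

FACTS = {"obstacle-ahead", "battery-ok"}          # hypothetical initial facts

RULES = [
    # (conditions, gesture/fact to assert) -- hypothetical rules
    ({"obstacle-ahead"},              "gesture-stop"),
    ({"gesture-stop", "battery-ok"},  "gesture-turn-left"),
    ({"gesture-turn-left"},           "gesture-move-forward"),
]

def run(facts, rules):
    """Fire rules until no new facts can be derived (forward chaining)."""
    sequence = []
    fired = True
    while fired:
        fired = False
        for conditions, new_fact in rules:
            if conditions <= facts and new_fact not in facts:
                facts.add(new_fact)
                sequence.append(new_fact)
                fired = True
    return sequence

if __name__ == "__main__":
    # Prints the derived action sequence: stop -> turn left -> move forward
    print(run(set(FACTS), RULES))
```

For the second part, the sketch below shows the general shape of such a recognition pipeline: preprocess an image, run it through a small BP-style neural network, and map the predicted gesture class to a vehicle control instruction. The gesture names, command identifiers, network size, and random placeholder weights are assumptions made for illustration, not the system described in the talk (which would load weights trained offline by backpropagation).

```python
import numpy as np

# Hypothetical gesture classes and the vehicle commands they map to;
# the talk mentions five gestures but does not name them here.
GESTURES = ["forward", "backward", "turn_left", "turn_right", "stop"]
COMMANDS = {
    "forward":    "CMD_DRIVE_FORWARD",
    "backward":   "CMD_DRIVE_BACKWARD",
    "turn_left":  "CMD_TURN_LEFT",
    "turn_right": "CMD_TURN_RIGHT",
    "stop":       "CMD_STOP",
}

def preprocess(image: np.ndarray) -> np.ndarray:
    """Stand-in preprocessing: flatten the image and scale pixels to [0, 1]."""
    return image.astype(np.float32).ravel() / 255.0

def mlp_forward(x, w1, b1, w2, b2):
    """One hidden sigmoid layer, as in a classic BP network. Trained weights
    would normally be loaded; random placeholders keep the sketch runnable."""
    h = 1.0 / (1.0 + np.exp(-(x @ w1 + b1)))   # hidden layer
    return h @ w2 + b2                         # output scores, one per gesture

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(32, 32))        # fake gesture image
    x = preprocess(image)

    n_in, n_hidden, n_out = x.size, 16, len(GESTURES)
    w1, b1 = rng.normal(size=(n_in, n_hidden)), np.zeros(n_hidden)
    w2, b2 = rng.normal(size=(n_hidden, n_out)), np.zeros(n_out)

    gesture = GESTURES[int(np.argmax(mlp_forward(x, w1, b1, w2, b2)))]
    # The recognized gesture becomes an instruction for the embedded controller.
    print(gesture, "->", COMMANDS[gesture])
```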