1. AI-based robotics
This research implements real-time, hardware-based artificial intelligence (AI) algorithms for robotic control and computer vision. AI is applied to object recognition, motion control, and sensor data analysis. The research platform is built on the Nvidia Xavier and a Xilinx FPGA.
2. Hardware AI platform
The foundation of this research is to implement AI algorithms at a much lower level than today's mainstream AI, which relies on high-level programming languages. We investigate the implementation of an AI system on an FPGA-based system-on-a-chip (SoC). A key effort in realizing such a system is minimizing the hardware requirements, so that our solutions remain portable yet sufficient for the robot to complete its tasks in real time.
The versatility of the FPGA makes the system suitable for multiple applications without a complete overhaul of the base design. The hardware AI does not work alone: software remains necessary, both for users and for developers. The corresponding software runs on the SoC side of the FPGA, handling the user interface and other high-level coordination. It also establishes high-level communication with other devices, enabling Internet of Things (IoT) integration.
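The hardware/software split described above can be illustrated with a minimal sketch. All class and function names here are hypothetical, not part of the actual platform: the SoC-side software coordinates inference, preferring the FPGA fabric and degrading to a CPU path when the accelerator is unavailable.

```python
class FpgaAccelerator:
    """Stand-in for the FPGA inference fabric; a real driver would
    stream data into the programmable logic (e.g. over AXI/DMA)."""

    def __init__(self, available: bool):
        self.available = available

    def infer(self, frame):
        # Placeholder: a real design runs the frame through the FPGA pipeline.
        return {"label": "object", "backend": "fpga"}


def software_infer(frame):
    """CPU fallback executed by the SoC-side software."""
    return {"label": "object", "backend": "cpu"}


def classify(frame, accel: FpgaAccelerator):
    """High-level coordination: prefer the hardware path, degrade gracefully."""
    if accel.available:
        return accel.infer(frame)
    return software_infer(frame)
```

The design point this illustrates is that high-level coordination (and any fallback policy) lives in SoC software, while the latency-critical inference itself is delegated to the hardware.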
3. AI-based sensor fusion
To control the robot, sensor signals are the key inputs for decision-making. We investigate efficient ways to perform sensor fusion, so that the AI system's decisions are made against a reliable reference. The research is two-fold. First, it is important to investigate the optimal placement and combination of the sensors applied to the system.
Second, it is essential to interpret these signals and deliver suitable instructions to the robot. Our goal is to deliver an easy-to-deploy system compatible with various existing sensing technologies, such as ultrasound, depth cameras, and light detection and ranging (LiDAR). The system can perform local data screening and preliminary sensor fusion, then share the valuable information across global networks.
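One simple form the "preliminary sensor fusion" step can take is an inverse-variance weighted average of range estimates from heterogeneous sensors. The sketch below is illustrative only; the sensor noise figures are assumed values, not measured characteristics of the platform.

```python
def fuse_ranges(readings):
    """Fuse (distance_m, variance) pairs from different sensors with
    inverse-variance weighting; returns (fused estimate, fused variance)."""
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    estimate = sum(w * d for w, (d, _) in zip(weights, readings)) / total
    return estimate, 1.0 / total


# Example: three sensors measuring the same target (assumed noise levels).
readings = [
    (2.10, 0.04),    # ultrasound: noisiest
    (2.02, 0.01),    # depth camera
    (2.00, 0.0025),  # LiDAR: most precise, so it dominates the fusion
]
fused, fused_var = fuse_ranges(readings)
```

The fused estimate lands close to the LiDAR reading, and the fused variance is smaller than that of any single sensor, which is the reliability gain that motivates fusing the sensors in the first place.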
4. AI-based signal processing
We are currently working with a collaborative robotic arm and developing a computer vision system with a depth camera for object recognition. The robotic arm is equipped with ultrasonic sensors to perform live non-destructive testing on the object it “sees”. The aim is to develop an automated system that can efficiently conduct non-destructive testing on the production line without supervision.
The computations must be performed in real time, and the system is expected to communicate with a cloud server for knowledge exchange.
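The knowledge-exchange step could, for example, batch local inspection results before uploading them. The sketch below shows only the batching and serialization; the field names are assumptions and the actual network transport is out of scope.

```python
import json
from collections import deque


class ResultUplink:
    """Accumulate local inspection results and emit JSON batches for upload."""

    def __init__(self, batch_size=3):
        self.batch_size = batch_size
        self.pending = deque()

    def record(self, part_id, defect_found):
        # Called after each non-destructive test on the production line.
        self.pending.append({"part": part_id, "defect": defect_found})

    def flush_if_ready(self):
        """Return a JSON payload once a full batch accumulates, else None."""
        if len(self.pending) < self.batch_size:
            return None
        batch = [self.pending.popleft() for _ in range(self.batch_size)]
        return json.dumps({"results": batch})
```

Batching keeps the real-time inspection loop decoupled from network latency: the robot never blocks on the cloud, and uploads happen only when a batch is complete.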