GGCNN Model Improvements for Low Computing Resource Environments

Rebuild GGCNN models to optimize performance in environments with limited computing resources, focusing on efficiency and accuracy improvements.

Advisor

Po-Chiang Lin

Innovative Technology Lab

Yuan Ze University

Team

Sheng-Kai Chen

Project Developer & Paper Reviser

Jie-Yu Chao

Paper Co-Author

Jr-Yu Chang

Paper Co-Author

Po-Lien Wu

Paper Co-Author

Abstract

Despite the effectiveness of deep neural networks in robotic grasp detection, deploying them on resource-constrained platforms remains challenging due to their high computational demands. This study proposes a knowledge distillation-based approach to compress the Generative Grasping Convolutional Neural Network (GGCNN), enabling efficient real-time performance without compromising grasping accuracy. Two lightweight student models, Version 1 and Version 2, are designed using distinct distillation strategies to balance model size, inference speed, and accuracy. Experimental results show that both student models significantly reduce model size and inference time, with one achieving up to a 75% size reduction and nearly halving inference time while maintaining competitive IoU accuracy. Notably, Version 2 matches the teacher model’s accuracy while offering improved efficiency and higher throughput, demonstrating the effectiveness of the method for real-time robotic applications where speed and resource efficiency are essential.
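
For illustration, the sketch below shows one plausible form of output-level knowledge distillation for a GGCNN-style model in PyTorch. The model objects, the four-head output (quality, cos, sin, and width maps), and the weighting factor alpha are assumptions made for this example, not the exact setup used in this work.

    # Minimal sketch of output-level knowledge distillation for a
    # GGCNN-style model (assumed setup, not the paper's exact recipe).
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_out, teacher_out, target, alpha=0.5):
        """Blend ground-truth supervision with the teacher's soft
        predictions. Each argument is a tuple of (quality, cos, sin,
        width) feature maps; alpha weights the hard-label term."""
        hard = sum(F.mse_loss(s, t) for s, t in zip(student_out, target))
        soft = sum(F.mse_loss(s, t) for s, t in zip(student_out, teacher_out))
        return alpha * hard + (1.0 - alpha) * soft

    def train_step(student, teacher, depth_batch, target_maps, optimizer):
        teacher.eval()
        with torch.no_grad():            # the teacher stays frozen
            teacher_out = teacher(depth_batch)
        student_out = student(depth_batch)
        loss = distillation_loss(student_out, teacher_out, target_maps)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

MSE is used for both loss terms because GGCNN's heads are per-pixel regression maps rather than class logits, so the usual softened-softmax distillation loss does not apply directly.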