AI-Glasses Embedded with Agentic AI System

Building AI-Glasses embedded with an agentic AI system to enhance user interaction and experience through advanced perception and decision-making capabilities.

Mentor

Jyh-Hong Wu

Division of Virtual-Real Integration

National Center for High-Performance Computing

Team

Sheng-Kai Chen

Project Developer & Paper Author

Ching-Yao Lin

Assistant for Android Studio Development

Yen-Ting Lin

Assistant for AI-Glasses

Abstract

This research presents an AI glasses system integrating real-time voice processing, artificial intelligence agents, and cross-network streaming capabilities. The system employs a dual-agent architecture in which Agent 01 handles Automatic Speech Recognition (ASR) and Agent 02 manages AI processing through local Large Language Models (LLMs), Model Context Protocol (MCP) tools, and Retrieval-Augmented Generation (RAG). The system supports real-time RTSP streaming for voice and video data transmission, eye-tracking data collection, and remote task execution through RabbitMQ messaging. The implementation demonstrates successful voice command processing with multi-language support and cross-platform task execution.
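The dual-agent flow described above can be sketched as a minimal pipeline: audio goes to an ASR stage (Agent 01), whose transcript is routed by an AI stage (Agent 02) to an LLM, an MCP tool, or RAG retrieval. All class names, message fields, and routing rules here are illustrative assumptions, not the project's actual implementation.

```python
# Illustrative sketch of the dual-agent pipeline; names and routing
# rules are assumptions, not the project's real code.
from dataclasses import dataclass


@dataclass
class Transcript:
    text: str
    language: str


class Agent01ASR:
    """Stands in for Agent 01: turns captured audio into a transcript."""

    def transcribe(self, audio_chunk: bytes) -> Transcript:
        # A real system would run an ASR model here; we fake a result.
        return Transcript(text=audio_chunk.decode("utf-8"), language="en")


class Agent02AI:
    """Stands in for Agent 02: routes a transcript to an LLM, MCP tool, or RAG."""

    def handle(self, transcript: Transcript) -> str:
        text = transcript.text.lower()
        if text.startswith("search "):  # RAG-style retrieval request
            return f"RAG lookup: {text[7:]}"
        if text.startswith("run "):     # MCP tool invocation
            return f"MCP tool call: {text[4:]}"
        return f"LLM response to: {transcript.text}"  # default LLM path


def pipeline(audio_chunk: bytes) -> str:
    """Voice command -> Agent 01 (ASR) -> Agent 02 (AI processing)."""
    asr, ai = Agent01ASR(), Agent02AI()
    return ai.handle(asr.transcribe(audio_chunk))


print(pipeline(b"search battery status"))  # -> RAG lookup: battery status
```

In the deployed system the two stages would communicate over RabbitMQ rather than by direct function call, which is what enables the cross-platform remote task execution the abstract mentions.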