The results of the search and selection process
A total of 863 publications were retrieved, of which 101 studies meeting the inclusion criteria were selected for systematic review. Based on the research content, the included literature was categorized into the following themes: health management (16/101), stress relief and psychological intervention (15/101), assistance in clinical surgery (23/101), tools for supporting clinical diagnosis and treatment (32/101), and telemedicine services, including telemedical education and remote diagnosis and treatment (30/101). Some studies span multiple themes. In addition, a portion of the literature focused on the ethical challenges and limitations faced by smart glasses in medical applications42,43,44,45,46, providing valuable references for future technological advancements and clinical integration.
Figure 2 illustrates our systematic search strategy and result flowchart using the PRISMA framework, detailing the process from the identification of records through database searching to the final inclusion of studies after screening and eligibility assessment.

This diagram illustrates the complete screening process.
Technical basis and research status of smart glasses
Smart glasses represent an advanced form of wearable computing technology designed to be worn on the head or as part of eyewear. These devices typically integrate a transparent display that overlays digital information onto the wearer’s field of vision, augmenting the physical world with real-time data47. The composition of smart glasses can vary widely but generally includes components such as microprocessors, sensors, cameras, connectivity modules, and user interface elements. Depending on their design and intended use, smart glasses can be classified into several categories based on their primary function or underlying technology:
Mixed reality (MR) glasses combine elements of both the real and virtual worlds to create new environments where physical and digital objects coexist and interact.
Virtual reality (VR) glasses fully immerse the user in a simulated environment, blocking out the real world entirely.
Augmented reality (AR) glasses overlay computer-generated information on top of the user’s view of the real world, enhancing the perception of reality without fully replacing it.
AI-powered glasses integrate artificial intelligence to provide context-aware assistance, predictive analytics, and personalized experiences.
Seamless integration is the standout feature of MR glasses, enabling users to interact with both real and digital objects simultaneously. This capability sets MR apart for applications requiring complex interactions, such as remote collaboration, medical training, industrial maintenance, and education. Compared to VR, MR devices offer greater endurance along with moderate fashionability and portability, making them suitable for extended use in various environments. Devices like the Microsoft HoloLens 2 and Meta Quest Pro are equipped with precision tracking systems that ensure stable performance without obstructing daily activities.
Total immersion characterizes VR glasses, which provide an unparalleled experience by replacing the user’s view of the physical world with a simulated one. While they score lower on portability and fashionability due to their heavy weight, VR devices excel in delivering high-resolution displays and powerful graphics processing. This makes them ideal for entertainment, gaming, education, and training scenarios where complete isolation from the external environment is beneficial. In addition, products like Meta Quest and HTC Vive prioritize sensory engagement over mobility, offering lifelike simulations that can be used for everything from architectural walkthroughs to therapeutic treatments.
Contextual enhancement without displacement is the forte of AR glasses, which overlay information onto the real world without fully replacing it. AR glasses strike a balance between functionality and wearability, with medium weight and portability that do not impede daily activities. They are well-suited for practical applications such as navigation, translation, entertainment, and photography. Devices like Microsoft HoloLens and Magic Leap offer valuable context-aware data that enhances situational awareness and decision-making. Unlike VR, AR maintains a connection to the physical world, ensuring continuous interaction while providing supplementary information.
Table 1 provides a detailed comparison of different types of smart glasses, highlighting the unique attributes and functionalities associated with each category.
Intelligent assistance through context-aware computing defines AI-powered glasses, which provide personalized support using machine learning algorithms. These glasses excel in sectors like healthcare, finance, transportation, services, and entertainment, thanks to their high fashionability, portability, and light weight. Products like the Ray-Ban Meta integrate seamlessly into everyday life, offering predictive analytics and real-time recommendations based on environmental cues and historical data. The key advantage of AI-powered glasses is their ability to enhance human performance with smart, anticipatory guidance, setting them apart from other types of smart glasses that may not offer the same level of personalized assistance.
Common smart glasses models on the market
Table 2 offers an exhaustive comparison of various smart glass models currently available on the market, with an in-depth analysis based on key characteristics. These include support for large language models, optical technology, voice recognition, image object detection, AR/VR capabilities, applications, and appearance. The information comes from the official websites of each brand.
Smart glasses currently on the market have made significant breakthroughs in technology and functionality, but many shortcomings still limit their widespread use. For example, many brands offer relatively simple functions, mainly basic voice reminders and broadcasting features, and lack more advanced AR/VR functions, image recognition, or intelligent interaction applications. It is therefore recommended to strengthen the integration of AR/VR capabilities and improve their applicability in various scenarios, especially intelligent interaction.

Battery life remains a concern. Some products have relatively small battery capacities, limiting long-term use. To improve the user experience, battery capacity should be increased, ideally to over 400 mAh, so that users do not need to charge frequently during extended use.

Display quality and resolution also need improvement. A few products offer relatively poor display quality, unsuitable for detailed AR applications or high-definition content. It is therefore recommended to increase display resolution to at least 1920 × 1080 to ensure clarity and meet AR/VR and high-quality video requirements.

In terms of interactive features, many smart glasses lack voice recognition or gesture control and rely mainly on physical buttons or limited voice interaction, reducing convenience. Integrating more advanced voice assistants and gesture control features would enhance the interaction experience.

Finally, although some brands offer AR capabilities, their visual effects, refresh rates, and immersive experiences still have room for improvement. Many products lack a comprehensive AR experience, so optimizing AR display effects and increasing the refresh rate to over 120 Hz would significantly improve users’ sense of immersion and overall experience.
Lastly, some brands focus too much on basic features like voice assistants, music, and calls, without fully expanding on high-end applications like AR, object recognition, and translation. Therefore, it is recommended to innovate in the area of feature expansion, adding functionalities like health monitoring, real-time translation, and object recognition to meet the needs of various user groups. In conclusion, while current smart glasses products have made breakthroughs in certain fields, to truly expand the market and enhance user experience, further improvements and optimizations are needed in areas such as battery life, display quality, interactive features, and diverse applications.
The core technology of smart glasses is intricately divided into multiple modules, encompassing both hardware and software components that facilitate the realization of intelligent functionalities. As delineated in Table 3, the hardware module serves as the fundamental basis for the functionality of smart glasses, incorporating essential components such as display systems, sensing technologies, processing units, and interaction interfaces.
As delineated in Table 3, the hardware module encompasses several sophisticated submodules pivotal to advanced augmented reality systems. The display technology submodule adopts Micro LED displays, optical waveguides, and LCD displays, facilitating superior AR overlay capabilities that enhance the visualization of digital information within the physical world. Complementing this, the projection technology submodule leverages lasers or micromirror arrays to project images directly onto the retina or spectacle lenses, engendering deeply immersive visual experiences. The perception hardware submodule comprises cameras for environmental visual perception, facial recognition, object tracking, and gesture recognition, enabling advanced user interaction and situational awareness. The sensor submodule includes technologies such as Inertial Measurement Units (IMUs), GPS, and accelerometers, facilitating motion capture and positional tracking. The processing unit submodule comprises AI chips, such as the Qualcomm Snapdragon platform, which process visual, auditory, and multi-modal inputs in real time, enabling intelligent decision-making and adaptive functionality. Storage and battery modules provide essential support for data processing and ensure continuous operation, facilitating extended use and seamless performance. The interactive devices submodule includes trackpads for intuitive control and eye-tracking devices that use infrared light or cameras to monitor eye movements, enabling gaze-based control and enhancing user interaction.
As outlined in Table 4, the software and AI technology modules are crucial for the functionality of smart glasses, powering their advanced capabilities. The multi-modal data fusion submodule integrates visual, voice, and environmental data, enhancing perception and interaction through sophisticated multi-modal models (such as GPT-4). The computer vision submodule includes object detection and recognition for real-time scene analysis and SLAM technology for precise spatial positioning and navigation in augmented reality. The NLP submodule supports voice instruction processing, question answering systems, translation, and environmental semantic analysis. The AR interaction submodule utilizes AR SDKs (such as Unity, ARKit, and ARCore) to overlay and interact with virtual information. The edge computing vs. cloud computing submodule distinguishes between edge computing, which handles simple tasks on the device side to reduce latency, and cloud computing, which relies on remote servers to complete complex data calculations and multi-modal model operation.
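The edge-versus-cloud split described above can be sketched as a simple routing policy. The task names, payload threshold, and battery cutoff below are illustrative assumptions, not taken from any particular smart-glasses SDK; the point is only to show how latency-critical work stays on-device while heavy multi-modal model inference is offloaded.

```python
# Hypothetical task categories illustrating the edge/cloud division of labor.
EDGE_TASKS = {"wake_word", "gesture_detect", "slam_update"}      # low-latency, on-device
CLOUD_TASKS = {"llm_query", "scene_captioning", "translation"}   # heavy, remote

def route_task(task: str, payload_kb: float, battery_pct: float) -> str:
    """Decide where to run a task: edge for simple, latency-critical work,
    cloud for large multi-modal model inference."""
    if task in EDGE_TASKS:
        return "edge"
    if task in CLOUD_TASKS:
        return "cloud"
    # Unknown tasks: keep small payloads on-device when battery allows,
    # otherwise offload to the cloud.
    if payload_kb < 64 and battery_pct > 20:
        return "edge"
    return "cloud"
```

A real scheduler would also weigh network conditions and privacy constraints, but the same two-tier decision structure applies.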
The functional modules of smart glasses are meticulously engineered to enhance user experience and expand applicability across a spectrum of use cases, including translation, navigation, health monitoring, scene recognition, and education and entertainment, as detailed in Table 5, thereby catering to a diverse array of user needs.
Application scenarios of smart glasses
The core applications of smart glasses encompass visual assistance, smart home control, navigation and positioning, information prompts and reminders, social interaction support, health monitoring, and education and training. Leveraging advanced AI technology and sensor integration, smart glasses offer personalized life support through augmented reality (AR), voice interaction, and other technical capabilities. For example, for visually impaired users, smart glasses can integrate object recognition, real-time obstacle detection, and path planning features, significantly enhancing travel safety and overall convenience48,49.
In the realm of smart homes, smart glasses serve as central control hubs, managing functions such as lighting, temperature, and security systems through voice or visual commands. Building on this capability, the HUAWEI Smart Glasses 2 further extends its functionality by offering application reminders, including weather updates, schedule summaries, and health monitoring, with features like cervical spine fatigue detection50.
Additionally, there are sports-oriented smart glasses designed for fitness enthusiasts, capable of recording exercise data such as steps, heart rate, and GPS routes, providing valuable scientific insights to support training and fitness goals51. In the field of education, smart glasses offer an AR-enhanced learning experience, featuring functions such as translation and recording, supported by visual assistance and interactive learning modules to enhance both learning efficiency and engagement.
Healthcare represents a key development focus for smart glasses, with core technologies encompassing large multi-modal models, high-precision sensors, and real-time data analysis. In the realm of chronic disease management, a study by Guan et al.52 highlights the pivotal role of AI in the prevention and management of diabetes, a field with significant development potential. Smart glasses can leverage integrated multi-modal large models and AI algorithms to monitor the health trends of patients with chronic diseases. Current trends focus on incorporating smart sensors into wearable devices, enabling continuous health monitoring within the user’s natural environment53. Figure 3 illustrates the various application possibilities of smart glasses.

Smart glasses integrate sensors and AI technology, pair with wearable devices, and utilize big data analytics to enable real-time health monitoring, personalized health recommendations, telemedicine services, and mental health support (By Figdraw).
By incorporating optical sensors, infrared cameras, and edge computing modules, smart glasses can collect and analyze users’ health data, such as heart rate, blood sugar levels, and blood pressure, in real time, offering precise health monitoring and management for patients. For example, Microsoft’s blood pressure monitoring smart glasses utilize optical technology to quickly and accurately record blood pressure metrics54. In the domain of blood glucose monitoring, smart glasses are still under research and development. However, several projects have already demonstrated the feasibility of this technology. Park et al. introduced the development of a wireless smart contact lens glucose biosensor, showcasing its ability to monitor glucose levels as a non-invasive alternative to traditional blood glucose measurements, highlighting the potential of smart contact lenses in non-invasive glucose monitoring55. Emteq Labs’ smart glasses, Sense, can monitor health and capture data at a rate of 6000 times per second56. VR glasses have been applied intraoperatively to monitor patients’ emotional states and deliver interventions—including guided meditation, relaxation techniques, and distraction strategies—to effectively alleviate anxiety, stress, and pain during surgical procedures57,58,59,60,61,62,63. Based on the current developments of smart glasses in the field of mental health, this study proposes future application scenarios for mental health management using smart glasses, as outlined in Table 6.
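The kind of on-device, real-time vitals monitoring described above can be sketched as a rolling-average check over a sensor stream. The heart-rate thresholds here are illustrative assumptions; real clinical limits would come from the individual patient's care plan.

```python
from collections import deque
from statistics import mean
from typing import Optional

# Illustrative alert thresholds (assumed, not clinical guidance).
HR_HIGH, HR_LOW = 100, 50  # beats per minute

class VitalsMonitor:
    """Rolling monitor over a stream of heart-rate samples, as an on-device
    health module in smart glasses might run it."""

    def __init__(self, window: int = 5):
        self.samples = deque(maxlen=window)

    def add(self, bpm: float) -> Optional[str]:
        """Record a sample and return an alert string when the rolling
        average drifts outside the configured range."""
        self.samples.append(bpm)
        avg = mean(self.samples)
        if avg > HR_HIGH:
            return "alert: elevated heart rate"
        if avg < HR_LOW:
            return "alert: low heart rate"
        return None
```

Averaging over a short window suppresses single-sample noise, so one spurious reading does not trigger an alert.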
As data collection improves and technology advances, smart glasses are poised to play a pivotal role in personalized health management and intervention. By leveraging multi-modal data analysis, these glasses can provide patients with tailored health recommendations on diet, exercise, and more64,65,66,67. Additionally, they can remind patients to take their medications on time and assess the effectiveness of these treatments. This personalized health management service is particularly beneficial for individuals with chronic diseases, as it offers real-time feedback, empowering patients to make timely adjustments to their lifestyle or medication regimens, thereby enhancing their daily lives and overall well-being68,69,70,71,72. Table 7 provides a detailed description of personalized health advice and reminders, as well as medication management and reminders, based on the capabilities of smart glasses.
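The medication-reminder behavior described above can be sketched as a small scheduler that tracks the last dose of each drug against its dosing interval. The drug names and intervals in the usage are illustrative assumptions.

```python
from datetime import datetime, timedelta

class MedicationReminder:
    """Minimal sketch of an on-time medication reminder, as the personalized
    health management features above describe."""

    def __init__(self):
        self.schedule = {}  # drug -> (dosing interval, time of last dose)

    def register(self, drug: str, interval_hours: int, last_taken: datetime):
        self.schedule[drug] = (timedelta(hours=interval_hours), last_taken)

    def due(self, now: datetime):
        """Return (sorted) the drugs whose next dose time has passed."""
        return sorted(
            drug for drug, (interval, last) in self.schedule.items()
            if now - last >= interval
        )

    def mark_taken(self, drug: str, when: datetime):
        """Reset the clock for a drug once the user confirms the dose."""
        interval, _ = self.schedule[drug]
        self.schedule[drug] = (interval, when)
```

A deployed version would persist the schedule and push notifications through the interactive layer; the core bookkeeping is the same.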
Smart glasses can assist clinicians in capturing, recording, and storing key findings during consultations, eliminating the need for manual data entry or the use of a scribe73,74,75,76,77. They also facilitate electronic medical record management, enabling direct conversion of records to electronic format, which significantly reduces the time spent transferring data from physical files78,79,80,81. With the integration of AI technology, clinicians can analyze rapid test results and gain insights to optimize patient care. Additionally, smart glasses serve as valuable tools for surgeons during procedures46,82,83,84,85. Smart glasses have been applied to different types of clinical procedures64,86,87,88,89,90,91,92. Following the release of Google Glass, Dr. Phil Haslam and Dr. Sebastian Mafeld demonstrated its potential in interventional radiology. They showcased how Google Glass could assist in liver biopsies and fistuloplasties93, potentially enhancing patient safety, improving operator comfort, and increasing surgical efficiency.
Telemedicine is exemplified by the smart glasses developed by Vuzix specifically for such applications94. These glasses can bridge the distance between doctors and patients through voice connectivity, which enables caregivers to instantly share medical expertise with healthcare professionals worldwide, offering life-saving guidance from any location95,96,97,98,99,100,101. They facilitate real-time exchange of expert medical feedback without compromising patient care, while also providing surgeons with immediate input to reduce errors and enhance surgical precision through AR technology102,103,104. For patients in remote areas, smart glasses offer doctors the ability to monitor patients’ conditions in real time through remote video calls and data sharing, bringing convenient healthcare services directly to those in need105. This model helps address the challenges of unequal distribution of medical resources and significantly enhances the efficiency and accuracy of patient follow-up. Table 8 showcases the role of smart glasses in telemedicine.
With the continuous advancement of sensor technology and AI algorithms, smart glasses will gradually become miniaturized and adapt to more medical scenarios106,107. The combination of these technologies will not only improve the quality of medical care, but also optimize the allocation of medical resources and provide personalized health management services for more people108.
The industrial applications of smart glasses, leveraging AR, Internet of Things (IoT) connectivity, AI analytics, and high-precision vision and motion sensing technologies, have the potential to significantly enhance productivity, safety, and product quality. The following Table 9 outlines specific application scenarios.
The role and challenges of smart glasses in health management
Under the concept of active health, there is growing emphasis on maintaining a healthy diet and ensuring food quality, with the detection and analysis of food nutrients becoming a major research focus. The proper intake of nutrients such as proteins, fats, carbohydrates, vitamins, and minerals is essential for human health109. Consequently, the development of efficient and precise nutrient detection technologies is crucial for formulating evidence-based dietary plans and upholding food safety standards. Traditional chemical analysis methods, however, are often time-consuming and complex, frequently requiring destructive sampling that limits their practical application110.
Recent advancements in computer vision and deep learning have revolutionized non-destructive nutrient assessment. These technologies excel in automatic feature extraction, accurate classification, and end-to-end learning, positioning them as indispensable tools for food image recognition and nutritional evaluation. For example, the NutriNet system utilizes convolutional neural networks to achieve high classification accuracy but has limitations when processing multi-component images111. The MResNet-50 model, enhanced by natural language processing (NLP), enables automatic recipe extraction, addressing intra-class variability in food images, yet demonstrates limited generalization capabilities for unseen categories112. Wang et al.’s model integrates EfficientNet113, Swin Transformer, and Feature Pyramid Network (FPN) to adapt to complex scenarios, though it requires further development for detailed component identification in traditional Chinese dishes113,114,115,116.
The Im2Calories app exemplifies this progress by combining segmentation and classification techniques to evaluate meals with an accuracy rate of 76%, providing a robust solution for fine-grained differentiation117. Liu et al.’s multi-dish recognition model employs EfficientDet to enhance the accuracy of dietary intake reporting, although it necessitates frequent dataset updates to account for seasonal variations118. The ChinaMarketFood109 database has been instrumental in training Inception V3, improving image classification accuracy; however, there remains room for improvement in estimating nutrient content. Emerging models like DPF-Nutrition leverage monocular images along with depth prediction modules to estimate food nutrition, though they encounter limitations when processing stacked images119. The RGB-D feature fusion network integrates color and depth information, enhancing multi-modal learning capabilities and offering solutions for occlusion management and the recognition of complex scenes120. From food image recognition to comprehensive nutritional assessment, deep learning and multi-modal technologies demonstrate significant potential. However, challenges related to adaptability in complex scenarios, model generalizability, and computational costs must be addressed.
Figure 4 depicts the workflow of smart glasses in the realm of food nutrition recognition. This application is primarily designed to offer real-time food identification121, comprehensive nutritional analysis, and tailored health recommendations by integrating state-of-the-art computer vision122, AI, and AR technologies. Initially, real-time food identification is facilitated through the smart glasses’ integrated camera and advanced image recognition capabilities, which swiftly scan and analyze the visual characteristics of food. The AI algorithm subsequently cross-references the captured image data with an extensive nutritional database, thereby providing users with detailed insights into the food’s composition, encompassing calories, proteins, fats, sugars, and other essential nutrients. Nutritional information is presented in an intuitive and engaging format, equipping users with valuable and actionable dietary knowledge. For instance, ChatDiet123 realizes personalized nutrition-oriented food recommendations through an LLM-augmented framework. It integrates individual and population models. The individual model employs causal discovery and reasoning techniques to evaluate the nutritional effects on specific users, while the population model provides generalized nutritional information about food. The coordinator transmits the outputs of both models to the LLM, thereby offering customized food recommendations. In testing, its food recommendations reached 92% effectiveness.

Workflow of smart glasses for food nutrition recognition: the integrated camera scans and identifies food in real time, an AI algorithm cross-references the captured image against an extensive nutritional database, and the resulting composition data (calories, proteins, fats, sugars, and other nutrients) is presented to the user in an intuitive format.
Utilizing AI technology, smart glasses seamlessly superimpose real-time nutritional information onto the user’s field of vision. By simply directing their gaze at food, users are automatically presented with relevant nutrient data and health recommendations. This functionality empowers users to conduct swift nutritional assessments prior to consumption, thereby facilitating healthier decision-making. Moreover, by incorporating user-specific health data—such as weight, age, activity level, and health goals—the smart glasses can deliver personalized dietary suggestions based on their real-time food recognition capabilities. For example, if the system detects the consumption of high-sugar food, the glasses may prompt the user to monitor their sugar intake or suggest healthier alternatives. Furthermore, the glasses can integrate with the user’s broader health management ecosystem, such as a smartwatch or health app, to provide a more holistic health assessment. By continuously monitoring eating habits and physical activity, these integrated devices offer long-term solutions for personalized health management.
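The lookup-and-advise step described above can be sketched as a function that maps a recognized food label to its nutrient profile and a personalized hint. The food entries, nutrient values, and the 25 g daily sugar budget below are illustrative assumptions, not figures from the cited systems.

```python
# Hypothetical nutrition database; names and values are illustrative only.
NUTRITION_DB = {
    "apple":      {"calories": 95,  "sugar_g": 19, "protein_g": 0.5},
    "cheesecake": {"calories": 401, "sugar_g": 32, "protein_g": 7.0},
}

def analyze_food(food: str, daily_sugar_g: float, sugar_limit_g: float = 25.0):
    """Given a recognized food label, return its nutrient profile plus a
    personalized hint based on the user's sugar intake so far today."""
    info = NUTRITION_DB.get(food)
    if info is None:
        return {"food": food, "hint": "no nutritional data available"}
    projected = daily_sugar_g + info["sugar_g"]
    hint = ("consider a lower-sugar alternative"
            if projected > sugar_limit_g else "within your daily sugar budget")
    return {"food": food, **info, "hint": hint}
```

In a full system the recognized label would come from the glasses' vision pipeline and the hint would be rendered as an AR overlay in the user's field of view.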
Integrated active health management platform combining smart glasses and health IoT devices
The proposed platform architecture for AI-powered smart glasses is designed to support proactive health management through a multi-layered approach, integrating advanced technologies to ensure seamless functionality as shown in Fig. 5. The architecture consists of four hierarchical layers: the perceptual layer, data layer, application layer, and interactive layer. Each layer is strategically designed to leverage the latest advancements in technology, ensuring a cohesive system that supports real-time health monitoring and personalized health management.

The basic architecture of the platform is composed of the perceptual layer, data layer, application layer, and interactive layer.
The perceptual layer comprises a suite of hardware sensors, including AR glasses cameras, heart rate sensors, body temperature sensors, glucose monitoring devices, GPS modules, and other wearable sensors. These components are responsible for real-time data acquisition, enabling comprehensive physiological and environmental monitoring. For example, wearable hydrogel-based health monitoring systems can provide real-time monitoring of health indicators such as glucose, uric acid, lactate, heart rate, blood pressure, and temperature. Additionally, flexible self-powered bioelectronics (FSPB) can dynamically monitor physiological signals, revealing real-time health abnormalities and providing timely, precise treatments124.
The data layer facilitates the processing, storage, and management of the raw data collected by the perceptual layer. Its core modules include: data storage, which uses cloud databases and edge storage technologies to hold diverse formats such as images, videos, and health metrics (cloud databases collect, deliver, replicate, and push data to the edge using hybrid cloud concepts, ensuring efficient data management125); data cleaning and processing, where technologies like Apache Kafka, Apache Flink, and TensorFlow are employed for efficient preprocessing and integration; and data analysis and security, where database systems (e.g., MySQL, MongoDB, InfluxDB) combined with key-management tools such as AWS KMS and Azure Key Vault ensure robust analysis and compliance with privacy standards.
The application layer encapsulates the core functionalities of the platform, focusing on health management and user engagement. Data analysis and processing applies algorithms for advanced data interpretation, including health trend prediction and anomaly detection; deep learning algorithms, which have achieved great success in image processing and speech recognition, are expected to open new depths for health monitoring systems. The intelligent recommender system personalizes health interventions through AI-driven insights. Telemedicine services facilitate remote consultation and real-time diagnosis, bridging gaps in healthcare accessibility. Health data management ensures organized and secure storage of user health records for continuous monitoring and evaluation.
The interactive layer is designed to enhance user experience through multi-modal interaction mechanisms. The user interface (UI) features a health dashboard, real-time data monitoring, and health report views for intuitive visualization. Interaction modules provide speech commands, a voice assistant, gesture control, and eye-movement interaction for hands-free operation and accessibility. Multi-modal interaction mechanisms, such as those involving LLMs, can enhance text processing abilities and provide more intuitive user experiences. Personalization and push notifications deliver customized health warnings, insights, and recommendations to the user in real time.
This multi-layered design creates a cohesive system that seamlessly integrates hardware capabilities with advanced software functionalities. It facilitates real-time health monitoring, personalized health management, and enhanced interaction for a wide range of users. The platform’s modular structure ensures scalability, adaptability, and robustness, allowing it to meet the evolving needs of next-generation wearable health technologies.
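The four-layer flow can be sketched end to end with minimal stand-in implementations. The sensor values, plausibility ranges, and alert rules below are assumptions chosen purely to illustrate how data moves from the perceptual layer up to the interactive layer.

```python
class PerceptualLayer:
    """Stands in for the hardware sensors; real devices would report live values."""
    def read_sensors(self):
        return {"heart_rate": 72, "body_temp_c": 36.6}

class DataLayer:
    """Cleaning step: drop readings outside plausible physical ranges."""
    def process(self, raw: dict) -> dict:
        plausible = {"heart_rate": (20, 250), "body_temp_c": (30, 45)}
        return {k: v for k, v in raw.items()
                if plausible[k][0] <= v <= plausible[k][1]}

class ApplicationLayer:
    """Analysis step: derive alerts from cleaned data (assumed thresholds)."""
    def analyze(self, data: dict) -> list:
        alerts = []
        if data.get("heart_rate", 0) > 100:
            alerts.append("elevated heart rate")
        if data.get("body_temp_c", 0) > 37.5:
            alerts.append("possible fever")
        return alerts

class InteractiveLayer:
    """Presentation step: format results for the user-facing dashboard."""
    def render(self, alerts: list) -> str:
        return "; ".join(alerts) if alerts else "all vitals normal"

def run_pipeline() -> str:
    raw = PerceptualLayer().read_sensors()
    clean = DataLayer().process(raw)
    return InteractiveLayer().render(ApplicationLayer().analyze(clean))
```

Keeping each layer behind its own small interface is what gives the platform the modularity and scalability described above: any layer can be swapped (e.g., a new sensor suite, a cloud-side analyzer) without touching the others.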
To enhance the data processing capacity and analysis accuracy of the active health management platform integrated with smart glasses and IoT devices, a multi-dimensional approach is essential. This approach involves strengthening data quality, leveraging advanced data processing techniques, and optimizing the data storage and analysis architecture.
The platform needs to ensure the integration of data from smart glasses, IoT devices (e.g., smart bracelets, smart scales, blood pressure monitors), and other health-related devices. This data should encompass physiological signals, signs, behaviors, and environmental factors to ensure multi-dimensional and diversified inputs. High-precision sensors are critical for real-time data collection, as they reduce data errors and enhance health monitoring reliability.
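Integrating data from heterogeneous devices usually starts with normalizing each device's payload into a common schema. The device names, field names, and unit conventions below are illustrative assumptions, not the formats of any real product.

```python
def normalize_reading(device: str, payload: dict) -> dict:
    """Map heterogeneous IoT-device payloads into one common schema
    (metric, value, unit), as the platform's integration step requires."""
    if device == "smart_bracelet":
        return {"metric": "heart_rate", "value": payload["hr_bpm"], "unit": "bpm"}
    if device == "smart_scale":
        # Convert pounds to kilograms when the scale reports imperial units.
        kg = payload["weight"] * 0.453592 if payload.get("unit") == "lb" else payload["weight"]
        return {"metric": "body_weight", "value": round(kg, 1), "unit": "kg"}
    if device == "bp_monitor":
        return {"metric": "blood_pressure",
                "value": (payload["systolic"], payload["diastolic"]), "unit": "mmHg"}
    raise ValueError(f"unknown device: {device}")
```

Once every source speaks the same schema, the downstream storage and analysis layers can treat all readings uniformly, regardless of which device produced them.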
Regarding data storage and processing architecture, we can leverage edge computing to shift preliminary data analysis and filtering from the cloud to the device side (e.g., smart glasses or smart devices). This will reduce data transmission delay and bandwidth requirements, making real-time health monitoring more efficient. Edge computing will enhance data processing timeliness, while distributed database technology can store large volumes of health data. By scaling out, the platform can efficiently process massive amounts of data while maintaining stability. Additionally, a cloud-based big data analysis framework will process complex health datasets. Using distributed computing and storage ensures the platform can handle various data types at scale and generate real-time analysis reports based on user needs.
To improve the accuracy of data analysis and modeling, machine learning methods such as deep learning (e.g., neural networks)126, reinforcement learning127, and support vector machines (SVMs)128 are employed to analyze and predict users’ health data. Continuous optimization and training of these models enhance the accuracy of the analysis. Furthermore, personalized recommendation algorithms can be developed based on the user’s health history, physical characteristics, and behavioral data. These algorithms provide precise health recommendations tailored to the user’s specific situation, such as chronic medical history and genetic characteristics.
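As a deliberately simple stand-in for the learning-based anomaly detection discussed above, a z-score test against the user's own history already captures the core idea of personalized baselines; the threshold of two standard deviations is an assumed convention, not a clinical rule.

```python
from statistics import mean, stdev

def detect_anomalies(history: list, new_value: float, z_threshold: float = 2.0) -> bool:
    """Flag a new reading as anomalous if it lies more than z_threshold
    standard deviations from the user's historical mean."""
    if len(history) < 2:
        return False  # not enough data to estimate variability
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        # Perfectly constant history: any deviation is anomalous.
        return new_value != mu
    return abs(new_value - mu) / sigma > z_threshold
```

Because the baseline is computed per user, the same absolute reading can be normal for one person and anomalous for another, which is precisely the personalization the platform aims for; deep learning models extend this idea to multi-modal, nonlinear patterns.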
Traditional Chinese and Western medicine theories are integrated by combining traditional Chinese medicine (TCM) constitution identification with modern Western medical data to build a hybrid model. This approach improves the ability to predict users’ health status and the accuracy of dietary recommendations. Natural Language Processing (NLP) technology is used to analyze TCM literature129 and Western medical research, integrating both theoretical frameworks to generate personalized health recommendations. By implementing these multi-dimensional measures, the platform not only enhances the accuracy and reliability of health data analysis but also provides users with more personalized and effective health management services. This approach aligns with the latest technological advancements, ensuring a cohesive system that supports real-time health monitoring and personalized health management.
The synthesis of smart glasses with Internet of Things (IoT) medical devices represents a pivotal advancement in the realm of active health management platforms. Central to this integration is the establishment of robust, seamless connectivity and interoperable data exchange protocols that facilitate real-time physiological monitoring. As a core wearable technology, smart glasses are envisioned to be outfitted with a comprehensive suite of advanced biosensors, including but not limited to heart rate monitors, pulse oximeters, thermometers, gait analyzers, and accelerometers. These sensors continuously capture granular biometric data from the user.
Supplementing the capabilities of smart glasses, IoT-enabled medical devices such as ambulatory blood pressure monitors, continuous glucose monitors, and bioimpedance scales provide additional critical health parameters, thereby enriching the dataset with metrics like arterial pressure, glycemic levels, and anthropometric measures. This synergistic integration ensures the integrity, comprehensiveness, and precision of the collected health information, offering a panoramic overview of the individual’s wellbeing130.
Figure 6 illustrates the design of integrated solutions for smart glasses and IoT devices. The intelligent health system architecture encompasses the hardware layer (including smart glasses, IoT devices, sensors), the data transmission layer (wireless communication and real-time encrypted transmission), the data processing and analysis layer (cloud storage, intelligent analysis, etc.), the user interaction layer (interaction methods such as eye movement and gestures), and the application scenario layer (such as user health management), while also taking into account privacy and real-time feedback.

Layered architecture of the intelligent health system, from the hardware layer through the data transmission, data processing and analysis, and user interaction layers to the application scenario layer.
In terms of data transmission, both smart glasses and IoT devices should employ low-power wireless communication standards—such as Bluetooth Low Energy (BLE)131, IEEE 802.11 Wi-Fi, or ZigBee132—to ensure real-time data synchronization. Smart glasses aggregate biometric data from daily activities and transmit it via secure, encrypted channels over a wireless network to a cloud-based platform for storage and processing133. Data security and privacy are paramount; therefore, all transmissions comply with stringent encryption protocols and adhere to pertinent data protection regulations and industry standards.
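The text above names encrypted channels and protocol compliance without specifying a mechanism. As one hedged building block, the sketch below attaches an HMAC-SHA256 tag so the receiving platform can authenticate each payload; a real deployment would layer this under TLS/DTLS or BLE link-layer encryption, and the key handling shown here is deliberately simplified:

```python
import hashlib, hmac, json, os

DEVICE_KEY = os.urandom(32)   # in practice, provisioned securely per device pair

def sign(payload: dict) -> dict:
    """Attach an HMAC-SHA256 tag so the receiver can authenticate the payload."""
    body = json.dumps(payload, sort_keys=True)
    tag = hmac.new(DEVICE_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}

def verify(message: dict) -> bool:
    """Recompute the tag and compare in constant time; reject tampered payloads."""
    expected = hmac.new(DEVICE_KEY, message["body"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])
```

HMAC gives integrity and authenticity but not confidentiality, which is why it complements rather than replaces the encrypted transport mandated above.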
For analysis and processing, the cloud platform will consolidate multi-modal datasets from smart glasses and IoT devices, harnessing machine learning algorithms and AI for sophisticated analytics. The system will perform continuous health status surveillance, anomaly detection, and trigger alerts or recommendations when deviations from baseline health metrics are observed. For instance, upon detecting tachycardia or bradycardia, the system would promptly notify the user134 and advise appropriate actions, such as resting or seeking medical consultation. Moreover, the platform will generate personalized health management strategies based on each user’s medical history and lifestyle factors, providing tailored services like physical activity guidance and nutritional counseling135.
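A minimal sketch of the tachycardia/bradycardia alerting described above, using the conventional adult resting cut-offs of 60 and 100 bpm (the thresholds and function names are illustrative defaults, not clinical guidance or the platform's actual detector):

```python
def heart_rate_alert(bpm, low=60, high=100):
    """Classify one resting heart-rate sample against conventional adult
    cut-offs (illustrative defaults, not clinical guidance)."""
    if bpm > high:
        return "tachycardia"
    if bpm < low:
        return "bradycardia"
    return None

def scan(stream):
    """Collect (index, finding) pairs for every out-of-range sample."""
    return [(i, heart_rate_alert(b)) for i, b in enumerate(stream)
            if heart_rate_alert(b) is not None]

print(scan([72, 118, 80, 45]))   # flags the 118 bpm and 45 bpm samples
```

In the full platform, each flagged sample would trigger the user notification and follow-up advice (rest, re-measure, or seek consultation) described above, and baselines would be personalized rather than fixed.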
The multi-modal interaction design of smart glasses enhances user engagement with the health management platform. Leveraging NLP for voice commands, eye-tracking49,136 for interface navigation, and capacitive touchscreens for manual input, the glasses dynamically adapt their visual displays according to the user’s health indicators, offering real-time monitoring and relevant health advisories.
At the application level, the convergence of smart glasses and IoT devices significantly benefits both end-users and healthcare practitioners. Users gain tools for proactive health management, while clinicians can remotely monitor patient health and provide timely interventions137. Through the integrated platform, physicians can access remote patient monitoring (RPM)138 data, evaluate conditions, assess therapeutic efficacy, and devise personalized care pathways, thus enhancing telemedicine capabilities. Furthermore, the platform facilitates the compilation of detailed, longitudinal health records, archiving all user health data for future reference. Such a repository plays an indispensable role in ongoing health maintenance and predictive analytics for disease prevention.
Building an active health management platform on the integration of smart glasses and IoT medical devices requires the glasses to collaborate with a variety of IoT devices (such as health monitoring devices and medical sensors) through device-side AI technology. Edge AI ensures that data is processed in real time locally, on the smart glasses or other IoT devices, reducing latency, improving response speed and data security, and lessening dependence on cloud processing139. Figure 7 shows different application scenarios of on-device AI technology.

Diversified application scenarios enabled by on-device AI in smart glasses.
For real-time health data monitoring, smart glasses leverage built-in sensors and connected IoT devices (such as blood glucose meters, heart rate monitors, and blood pressure monitors) to collect health data continuously. The device-side processing capabilities of smart glasses allow physiological data (e.g., heart rate, blood glucose, and body temperature) to be analyzed directly on the device: with embedded AI chips, the glasses can evaluate these streams in real time, provide immediate feedback, and even identify health abnormalities (such as high blood sugar or an abnormal heart rate) and issue automatic warnings. This minimizes reliance on cloud services, ensuring real-time monitoring and improving the timeliness and accuracy of health management.
By further integrating with IoT devices such as exercise trackers and sleep monitors, the smart glasses can offer personalized health recommendations. For instance, based on daily activity and health data, the system may suggest increasing physical activity, adjusting diet, or improving sleep quality. Localized AI models analyze the user’s health data and create tailored intervention strategies; as that data evolves, the models adapt their recommendations, ensuring continuous, personalized health management that is network-independent and responsive to real-time needs.
For action recognition and health interventions, smart glasses combined with motion sensors (such as IoT smart bracelets) can monitor the user’s movements in real time and detect potentially health-damaging behaviors, such as prolonged sitting or improper exercise form. By fusing the visual and motion recognition capabilities of the smart glasses with motion data from IoT devices, the integrated AI system can evaluate the user’s activity posture and behavior (e.g., sitting or walking posture) in real time.
The system provides corrective guidance instantly, reducing data latency with local processing and feedback. This ensures real-time monitoring and prompt, actionable guidance for improving posture and preventing health risks.
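The prolonged-sitting intervention described above can be sketched as a simple streak counter over classified activity samples (the sampling interval, sedentary limit, and activity labels are assumed for illustration):

```python
def sedentary_prompts(activity, limit_min=60, sample_min=5):
    """Emit a reminder index whenever consecutive 'sitting' samples
    reach the sedentary limit, then reset the streak."""
    prompts, streak = [], 0
    for i, state in enumerate(activity):
        streak = streak + 1 if state == "sitting" else 0
        if streak * sample_min >= limit_min:
            prompts.append(i)   # the glasses would display a stand-up reminder here
            streak = 0
    return prompts
```

With a 5-minute sampling interval, twelve consecutive sitting samples (one hour) trigger a prompt; any detected movement resets the counter, matching the local, low-latency feedback loop described above.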
Aiming at the application of medical image analysis and diagnostic support, smart glasses can be seamlessly integrated with IoT devices (such as smart thermometers and blood pressure monitors) and medical imaging equipment (like portable ultrasound and X-ray machines) to assist healthcare professionals in diagnosing conditions. Through computer vision technology, smart glasses analyze real-time image data of the user, combining it with health data from IoT devices to facilitate quicker diagnoses. For example, by visually assessing the user’s face and physical signs, smart glasses can detect potential health issues (such as paleness or eye abnormalities) and provide real-time recommendations to doctors. This integration minimizes data transfer needs and cloud dependencies, ensuring faster diagnostic support.
For automatic detection and alerting of abnormal events, IoT devices integrated with smart glasses can promptly detect health emergencies (such as falls or seizures) and alert medical personnel or family members. By leveraging the sensors and image processing capabilities of smart glasses, along with IoT devices (like smart bracelets and environmental sensors), the system continuously monitors the user’s physiological state. In the event of an emergency, such as a fall or a significant heart rate fluctuation, the AI system responds immediately by sending an alarm signal to ensure timely assistance. On-device AI guarantees rapid local processing, reducing dependence on cloud services and shortening response times.
Seamless integration and collaboration can also be achieved: smart glasses not only pair with individual IoT devices but also collaborate with multiple devices simultaneously, enhancing data sharing and the comprehensiveness of health management. Through on-device AI technology, smart glasses can interact with various IoT devices, such as smart bracelets, smart home systems, and environmental monitoring devices, to exchange data and provide holistic health management. For instance, based on the user’s health data, smart glasses can automatically adjust the smart home environment (temperature, humidity, and air quality) to optimize comfort and well-being.
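One common heuristic for the fall detection mentioned above is an accelerometer impact spike followed by near-stillness; the sketch below assumes samples in units of g and uses illustrative thresholds (not values from any cited system):

```python
import math

def detect_fall(samples, impact_g=2.5, still_g=0.3, still_window=4):
    """Return the index of a suspected fall: an acceleration spike above
    impact_g followed by still_window samples of near-1 g stillness
    (a body at rest reads roughly 1 g from gravity)."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in samples]
    for i, mag in enumerate(mags):
        if mag >= impact_g:
            after = mags[i + 1:i + 1 + still_window]
            if len(after) == still_window and all(abs(m - 1.0) < still_g for m in after):
                return i
    return None
```

On detection, the surrounding system would issue the alarm to caregivers described above; real detectors combine such heuristics with learned models to reduce false alarms from vigorous movement.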
By deploying edge computing on smart glasses and IoT devices, data processing tasks are moved to edge nodes on or near the device. This reduces transmission latency and enables real-time data synchronization and processing between devices, ensuring consistency and accuracy of information. Real-time synchronization protocols such as Message Queuing Telemetry Transport (MQTT)140 and WebSocket141 then provide efficient, stable, low-latency communication between devices, which is especially important in IoT environments. Data consistency and verification mechanisms, such as cyclic redundancy checks (CRC) for error detection and Unix timestamps for ordering, ensure that transferred data is neither lost nor corrupted: even when communication between devices is delayed or interrupted, the system preserves data integrity and consistency. A distributed architecture can be designed to ensure a seamless connection between the local processing on smart glasses and IoT devices and cloud systems142: devices process data locally and upload the results to the cloud for further analysis or storage, keeping data synchronized and consistent. Devices can also continue collecting data while offline and reconcile it once the network is restored; mechanisms such as timestamping and versioning keep offline and online data consistent while avoiding loss or duplication. Together, these techniques improve the efficiency and stability of the active health management platform.
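The offline buffering with checksums and timestamps described above can be sketched as follows (the class and field names are hypothetical; a production system would use MQTT with persistent sessions and QoS rather than this in-memory queue):

```python
import json, time, zlib

class OfflineBuffer:
    """Queue readings while the network is down; every message carries a
    timestamp and a CRC32 checksum so the receiver can order, deduplicate,
    and detect corrupted payloads on reconnect."""

    def __init__(self):
        self.queue = []

    def enqueue(self, payload: dict):
        body = json.dumps({**payload, "ts": time.time()}, sort_keys=True).encode()
        self.queue.append({"body": body, "crc": zlib.crc32(body)})

    def flush(self, publish):
        """On reconnect, publish every message whose checksum still matches."""
        sent = 0
        for msg in self.queue:
            if zlib.crc32(msg["body"]) == msg["crc"]:   # integrity check before upload
                publish(msg["body"])
                sent += 1
        self.queue.clear()
        return sent
```

The embedded timestamp lets the cloud side reconcile offline data with records received while the device was connected, which is exactly the consistency mechanism the paragraph above calls for.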
