
Battery-free head orientation measurement using passive RFID tags

Published online by Cambridge University Press:  17 February 2025

Jeyeon Jo
Affiliation:
Department of Textiles, Merchandising, and Interiors, University of Georgia, Athens, GA, USA
Heeju T. Park*
Affiliation:
Department of Human Centered Design, Cornell University, Ithaca, NY, USA
Corresponding author: Heeju T. Park; Email: [email protected]

Abstract

Real-time measurement of head rotation, a primary human body movement, offers potential advantages in rehabilitating head or neck motor disorders, promoting seamless human–robot interaction, and tracking the lateral glance of children with autism spectrum disorder for effective intervention. However, existing options such as cameras capturing the entire face or skin-attached sensors have limitations concerning privacy, safety, and/or usability. This research introduces a novel method that employs a battery-free RFID tag-based wearable sensor for monitoring head orientation as a substitute for existing options such as cameras. When a pair of passive RFID tags is attached to the front of the head at a set distance from each other, the signal strength of each tag in the pair differs according to the difference in its distance from the RFID reader caused by head rotation. Key parameters, including the distance between the tags, the distance from the reader, and the tag type, are investigated to suggest an optimal sensor design. In tests involving random head rotations by 10 healthy adults, there was a significant correlation between the signal strength differences of the sensor pairs and the orientation of the head and gaze in the yaw direction. The correlation coefficients ($ {r}^2 $) were satisfactory, at 0.88 for head orientation and 0.83 for left eye pupil orientation. However, the sensor failed to estimate pitch rotations of the head and gaze due to the insufficient vertical spacing between the tags. No demographic factors appeared to influence the results.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

1. Introduction

Head rotation is a fundamental human movement, serving as a primary means of social interaction and a source of comfort (Langton, 2000). In the medical field, the analysis of head motion is a crucial aspect of diagnosing and rehabilitating individuals with head and neck motor disorders (Ali et al., 2021; Mihajlovic et al., 2018; Yamanobe et al., 2022). Head motor impairments are also important signals for diagnosing and monitoring developmental disorders (Flanagan et al., 2012; Raya et al., 2011; Saavedra et al., 2010). Head orientation often serves to estimate gaze or the accompanying social attention (Bizzi, 1974; Jiang et al., 2019; Lee et al., 2011; Sidenmark and Gellersen, 2019). Head orientation determines the field of view and can often serve as an easier and more affordable proxy for eye gaze (Blattgerste et al., 2018; Renner and Pfeiffer, 2017; Špakov et al., 2019). Lateral glance, defined as looking at objects out of the corners of the eyes without head and gaze orientation alignment, is one of the most frequently observed atypical visual behaviors among children with autism spectrum disorder (ASD) (Coulter, 2009; Mottron et al., 2007). One of the most significant consequences of lateral glance is difficulty in social interactions, as maintaining appropriate eye contact is an essential skill in building relationships (Hellendoorn et al., 2014; Hustyi et al., 2023).

Systems connecting head pose and attention enable seamless interactions between users and robot agents (Tamaru et al., 2022; B. Zhang et al., 2014) or prepare vehicles for upcoming direction changes by drivers or wheelchair users (Takahashi et al., 2011; Zhao et al., 2017). Mouse cursor control through head movements for individuals with tetraplegia or cerebral palsy can be an alternative interaction technique to gaze-based pointing (Velasco et al., 2017; Williams and Kirsch, 2016). Head orientation is also of great interest in virtual reality for maintaining coordination between head movement and the camera view while minimizing perceptual artifacts (LaValle et al., 2014).

To identify head motion, inertial measurement units (IMUs) and cameras are the most commonly used approaches. Recent advances in computer vision in particular have made it possible to achieve head or gaze tracking using cameras (Chang et al., 2021). Surveillance camera-style systems are a popular type of gaze tracker, but continuously capturing the entire face raises privacy concerns (Bowyer, 2004; Tepencelik et al., 2021; Tobii, 2023). Alternative wearable methods have been introduced, such as LiDAR, electromyography (EMG), microphones, and radiooculography (Kamoshida and Takemura, 2018; Salinas-Bueno et al., 2021; Severin, 2020; Williams and Kirsch, 2008; Zhang and Kan, 2022), but most of them combine electronic components that are rigid, thick, and inflexible. They also require a power source and microcontrollers, increasing the weight and rigidity of the whole system.

Passive RFID (radiofrequency identification) tags are a suitable alternative for creating a soft, comfortable, lightweight, and low-cost wearable head motion sensor with wireless communication capability (Kiourti, 2018; Luo et al., 2020). Though they have not previously been used for head orientation measurement, passive RFID tags have been introduced to track human body movements such as touch, joint bending, and pressure (Jin et al., 2018; Jo and Park, 2021; Li et al., 2016). Furthermore, their sensing capability for temperature and humidity increases their potential for wearable applications (Meng and Li, 2016; Nath et al., 2006). Because a passive tag requires a nearby RFID reader as its power source, designing a wearable RFID-based sensor requires consideration of radio wave absorbance by body location, distance from the reader, accompanying body motions, and obstacles between the reader and the tags (Manzari et al., 2012). As radio waves are susceptible to the environmental factors listed above, most RFID-based applications are limited to conditions where the RFID tag or user is relatively inactive, such as sleeping (Sharma and Kan, 2018a).

This study aims to develop a soft and battery-free wearable sensor based on passive RFID tags to monitor the wearer’s head orientation, replacing existing powered sensors or vision-based systems such as cameras. In this study, we assumed that the user is watching a screen while keeping their gaze at its center, so that any head turn produces a misalignment between the head and the direction of the screen. The sensor uses pairs of passive RFID tags embedded in eyeglasses, headpieces, and/or a face mask to track the wearer’s head movements while sitting in front of a screen with an RFID reader (Figure 1a). To verify the performance of the developed sensor, random head rotations were tracked by the sensor and compared to camera-based recordings in terms of correlation (Figure 1b). Additionally, this paper demonstrates the effectiveness of a pair of RFID tags attached to the front of eyeglasses when used with an RFID reader-embedded laptop.

Figure 1. Passive RFID tag-based head orientation sensor. (a) Sensor design, (b) sensor response to random head orientations. The yaw angle of the head orientation is represented by the x-coordinates of the direction of the head pose projected onto a 2D plane parallel to the RFID reader. The range of values for yaw is 0–720. The horizontal thin dotted line at the center of the plane represents the direction of looking straight ahead on the screen.

2. Sensor design

2.1. Sensing principle

The strength of the backscattered signal depends on the distance between the radio wave source and the tag according to the Friis free-space loss: $ {P}_{\mathrm{R}}={P}_{\mathrm{T}}{\left(\lambda /4\pi r\right)}^2{\psi}_{\mathrm{T}}{\psi}_{\mathrm{R}} $, where $ {P}_{\mathrm{R}} $ and $ {P}_{\mathrm{T}} $ are the signal strengths at the receiver and transmitter, respectively, $ r $ is the distance between the transmitter and the receiver, $ \lambda $ is the signal wavelength, and $ {\psi}_{\mathrm{R}} $ and $ {\psi}_{\mathrm{T}} $ are the gains of the receiver and transmitter antennas, respectively (Bolic et al., 2010). Passive RFID tags harvest energy from the electromagnetic field generated by the reader to operate. Therefore, the received signal strength indicator (RSSI) can be described as $ \mathrm{RSSI}={P}_{\mathrm{T}}-{P}_{\mathrm{L}}-{L}_{\mathrm{f}}-{L}_{\mathrm{h}} $, where $ {P}_{\mathrm{L}} $ is the path loss, $ {L}_{\mathrm{f}} $ covers additional losses such as multipath or antenna misalignment, and $ {L}_{\mathrm{h}} $ is the harvesting loss from converting the received energy into usable power in the tag. While $ {L}_{\mathrm{f}} $ and $ {L}_{\mathrm{h}} $ depend on the environment and the tag specifications, $ {P}_{\mathrm{L}} $ can be approximated as $ {P}_{\mathrm{L}}=40\log (d)+20\log (f)+C $, where $ d $ is the tag–reader distance, $ f $ is the operating frequency, and $ C $ is a constant capturing fixed factors such as the speed of light. The current sensor uses a pair of passive tags, one on each side of the forehead (e.g., at each end of the front of the eyeglasses, as shown in Figure 1a), to track head rotation in the transverse plane (yaw – left and right). When the wearer turns their head to the left, the RFID tag on the left side moves farther from the reader, while the tag on the right side moves closer. This results in a decrease in the RSSI of the left tag and an increase in the RSSI of the right tag (Figure 2). This sensor assumes that the user is continuously looking at the screen of an RFID reader-embedded device such as a laptop. Because horizontally elongated form factors such as eyeglasses have long been common, the sensor primarily targets yaw rotation. However, the same principle can be applied to measure pitch in the sagittal plane (up and down) when a pair of tags is arranged vertically on the head. Therefore, we placed two pairs of RFID tags, one pair along the top and the other along the bottom edge of a pair of glasses. This allowed the study to primarily examine sensor performance in yaw while reducing noise by averaging the two pairs, as well as to explore the potential for pitch measurement.
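
To make the relationship concrete, the following sketch computes the expected RSSI difference between the left and right tags as a function of yaw angle, using only the distance-dependent term of the path-loss model above ($ 40\log (d) $); the frequency and environmental terms cancel when the two tags are subtracted. This is a minimal illustration, not the authors’ implementation, and the geometric parameters (tag separation, reader distance, forward offset of the tags from the head’s rotation axis) are assumed values.

```python
import numpy as np

def rssi_difference(yaw_deg, tag_sep=0.13, reader_dist=0.50, head_radius=0.09):
    """Expected RSSI difference (dB) between left and right forehead tags.

    Uses the two-way path-loss term 40*log10(d) from the model above; the
    frequency and constant terms cancel when the two tags are subtracted.
    All geometric parameters (metres) are illustrative assumptions.
    """
    theta = np.radians(yaw_deg)
    # Tag positions relative to the head centre before rotation:
    # x is left(-)/right(+), y is forward toward the reader.
    left0 = np.array([-tag_sep / 2, head_radius])
    right0 = np.array([tag_sep / 2, head_radius])
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta), np.cos(theta)]])
    left, right = rot @ left0, rot @ right0
    reader = np.array([0.0, reader_dist])        # reader straight ahead
    d_left = np.linalg.norm(reader - left)
    d_right = np.linalg.norm(reader - right)
    # RSSI_left - RSSI_right: only the distance-dependent path loss differs.
    return -40 * (np.log10(d_left) - np.log10(d_right))

for yaw in (-60, -30, 0, 30, 60):
    print(yaw, round(rssi_difference(yaw), 2))
```

At 0° the two distances are equal and the difference vanishes; as the head turns, the sign and magnitude of the difference follow the rotation, which is the behavior exploited by the sensor.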

Figure 2. Sensing principle. The color of the lines in the plot corresponds to the arrow color in the illustrations.

To understand the behavior of the sensor through the related parameters and to optimize performance, a pair of RFID tags was attached to the top-left and top-right corners of a pair of safety glasses. After the glasses were placed on a manikin head, tag readings were collected using an RFID reader (Speedway Revolution R220, Impinj, WA, USA) with an antenna (S9028PCR, Rfmax, NJ, USA). The RFID reader used in this study reported the RSSI with a resolution of 1 dBm across a range of 0 to −80 dBm. We rotated the manikin head so that the angle between the head direction and the reader was −60°, −45°, −30°, −15°, 0°, 15°, 30°, 45°, or 60°, and collected the sensor readings for 5 seconds at each angle. The detailed setup is described in the following subsections.

2.2. RSSI difference by distance from reader

Consumer electronics with a screen can embed an RFID reader; however, the distance between the reader and the user’s head varies with the type of device. For instance, smartphones are held closer to the user’s eyes than televisions. To investigate the impact of the distance between the RFID reader and the tags, we rotated the sensor-worn manikin head from −60° to 60° in yaw at distances of 25, 50, 75, 100, and 150 cm, covering various scenarios including smartphones, tablets, laptops, and televisions. The sensor signal at 200 cm in a normal home environment was too weak to be included in the analysis. The sensor exhibited a distinct RSSI difference between the tags on each side when they were 50 cm away from the reader, a common distance for laptop or tablet usage (Figure 3a). As anticipated, the sensor performance, indicated by the difference in RSSI, decreased as the distance from the RFID reader increased. The sensor was still able to detect angular changes at distances of 100 and 150 cm, but the low mean RSSI suggests that these distances may not be optimal for stable and continuous sensing (Figure 3b).

Figure 3. Sensor performance by distance from the RFID reader. (a) Difference in RSSI based on the distance from the reader and (b) mean RSSI of the left tag. The error bar indicates the standard error, and the red line plot represents the selected setting for the user evaluation study.

2.3. RSSI difference against distance between tags

The distance between the two tags can affect sensor performance, as it determines the change in each tag’s distance from the reader during rotation. To verify this, we tested the center-to-center distances achievable within a typical eyeglasses form factor: 5 cm (right next to each other), 9 cm (in the middle of the lenses), 13 cm (at the left and right corners), and 20 cm (on the legs). When the center-to-center distance of the two tags was 13 cm, the RSSI difference changed linearly within the −60° to 60° angle range relative to the reader (Figure 4). Placing the tags on the legs of the glasses did not improve performance, even at the longest distance of 20 cm (measured along the surface, not in a straight line). Closer spacings resulted in a reduced RSSI difference due to the smaller differences in distance from the reader.

Figure 4. Sensor performance based on the distance between the tags. The error bar indicates the standard error, and the red line plot represents the setting selected for the user evaluation study.

2.4. RSSI of different RFID tag types

This study investigated five off-the-shelf RFID tags: three made of PET (polyethylene terephthalate), one made of textile, and one made of ceramic (Figure 5a). The results showed that the signal strength of passive RFID tags depends on the dimensions, shape, and material of the antenna. Tag C, which consisted of a canvas textile with a conductive thread antenna, exhibited the largest overall RSSI difference between the two tags (Figure 5b). Meanwhile, the PET inlay-style tags (A, B, and D), which are the most common on the market, outperformed the textile tag in tag counts and RSSI (Figure 5c and 5d). The ceramic-encased tag E did not yield any favorable results. Tag A had the highest RSSI and tag count due to its larger dimensions, but its RSSI difference was moderate, and its size may not be suitable for eyeglasses or headpieces. The user evaluation therefore used tag B, which offered a fair RSSI difference, good signal strength, and appropriate dimensions.

Figure 5. Sensor performance by type of passive RFID tag. (a) Details on the tags used in the study, (b) orientation sensing performance by tag, (c) mean tag counts per second by tag, and (d) mean RSSI by tag. The error bar indicates the standard error, and the red line plot represents the selected setting for the user evaluation study.

3. Evaluation

3.1. Experimental protocol and data processing

Ten healthy adults (7 females and 3 males) with an average age of 25.5 $ \pm $ 5.5 years and an average height of 168.3 $ \pm $ 9.5 cm participated in the experiment. Following the experiment protocol approved by the Institutional Review Board (IRB), the participants wore safety glasses with four RFID tags (top-left, top-right, bottom-left, and bottom-right) attached at the corners. Having two pairs of RFID tags made it possible not only to reduce RSSI noise due to multipath but also to examine two rotations (yaw and pitch). While the distance between the left and right tags was 13 cm based on the results shown in Figure 4, the glasses used in this study could only accommodate a maximum vertical distance of 2.5 cm (Figure 6). Participants sat on a chair in front of a desk, and an RFID antenna (S9028PCR, Rfmax, NJ, USA) connected to an RFID reader (Speedway Revolution R220, Impinj, WA, USA) was positioned above the laptop screen on the desk, approximately 50 cm away from their face. The laptop’s built-in webcam was located at the top of the screen, directly below the RFID reader antenna.

Figure 6. Sensor used in the experiments. Two pairs of RFID tags were attached to each corner of a pair of glasses to track two rotations (yaw and pitch) and to reduce noise.

The current sensing principle requires the RSSI from all four tags to calculate the difference within each pair at each moment. For example, for yaw, the difference between the top-left and top-right tag readings and the difference between the bottom-left and bottom-right tag readings were calculated simultaneously. However, the RFID reader used in this study can only read one tag at a time, resulting in missing values for the other tags whenever one tag’s RSSI is recorded. To address this issue, missing values were filled through linear interpolation in Matlab using adjacent data (Matlab, n.d.). The average of the RSSI differences from the two pairs of tags was smoothed using a moving average (window = 5). In the same way, the RSSI difference between the top-left and bottom-left tags and the difference between the top-right and bottom-right tags were used for pitch rotation.
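
A minimal sketch of this processing pipeline is shown below, using pandas in place of the MATLAB interpolation step described above; the column names and sample values are illustrative placeholders, not data from the study.

```python
import pandas as pd

# Hypothetical per-tag RSSI streams on a common time index; NaN marks
# moments when the reader reported a different tag.
df = pd.DataFrame({
    "top_left":     [-46, None, -48, -47, None, -49],
    "top_right":    [None, -44, -45, None, -43, -44],
    "bottom_left":  [-47, -48, None, -49, -48, None],
    "bottom_right": [-45, None, -44, -45, None, -43],
})

# Fill gaps with linear interpolation from adjacent readings
# (mirroring the MATLAB interp1 step described in the text).
df = df.interpolate(method="linear", limit_direction="both")

# Yaw signal: average of the left-right differences of the two pairs,
# smoothed with a 5-sample moving average as in the paper.
yaw = ((df["top_left"] - df["top_right"]) +
       (df["bottom_left"] - df["bottom_right"])) / 2
yaw_smooth = yaw.rolling(window=5, min_periods=1).mean()

# Pitch signal: average of the top-bottom differences of the two sides.
pitch = ((df["top_left"] - df["bottom_left"]) +
         (df["top_right"] - df["bottom_right"])) / 2
pitch_smooth = pitch.rolling(window=5, min_periods=1).mean()

print(yaw_smooth.round(2).tolist())
```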

To verify the performance of the current sensor, the user’s face was captured by the laptop camera at a rate of 30 Hz during the tasks. MediaPipe (Developers, n.d.), a computer vision Python library, was used to identify landmarks on the face. As the camera projects the head orientation onto the two-dimensional image plane, the orientation data were collected as x and y coordinates instead of angles (Aflalo, 2022). This representation was suitable for this study, as the examination considered only one direction at a time (yaw or pitch).
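
As a rough illustration of this step, the sketch below extracts per-frame x-coordinates for the head, the left pupil, and the left inner eye corner with MediaPipe Face Mesh. The landmark indices and the use of the nose tip as a head-orientation proxy are assumptions for illustration; the study itself used a head-pose projection pipeline built on MediaPipe (Aflalo, 2022).

```python
import cv2
import mediapipe as mp

# Assumed landmark indices (nose tip, left iris centre with refine_landmarks,
# left inner eye corner); not taken from the original study.
NOSE_TIP, LEFT_IRIS, LEFT_EYE_CORNER = 1, 468, 133

cap = cv2.VideoCapture(0)
with mp.solutions.face_mesh.FaceMesh(refine_landmarks=True) as mesh:
    ok, frame = cap.read()
    if ok:
        h, w = frame.shape[:2]
        res = mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if res.multi_face_landmarks:
            lm = res.multi_face_landmarks[0].landmark
            head_x = lm[NOSE_TIP].x * w           # head yaw proxy (pixels)
            pupil_x = lm[LEFT_IRIS].x * w         # eye pupil x-coordinate
            corner_x = lm[LEFT_EYE_CORNER].x * w  # inner eye corner
            print(head_x, pupil_x, pupil_x - corner_x)
cap.release()
```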

Participants were asked to perform two tasks. First, participants were instructed to randomly but continuously rotate their head for 1 min while fixating their gaze on the laptop screen. Following three repetitive sessions of this task, participants were asked to use the laptop for 5 min without any specific instructions.

The data from our sensor (i.e., the RSSI differences) were compared to the head orientation data from the camera by correlation in both yaw and pitch. For yaw, the main interest, we also performed correlation analyses with the projected direction of the eye pupils and with the distance between the corner of the eye (near the center of the face) and the pupil. This allowed us to explore the potential for gaze tracking, as well as the possibility of monitoring discrepancies between head orientation and gaze (e.g., lateral gaze in children with ASD). These measures are labeled “Eye Pupil” and “Pupil-Eye Corner” in the following figures.
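
A minimal sketch of the correlation computation follows; resampling the camera stream onto the sensor timestamps and the variable names are assumptions about implementation details not specified in the text.

```python
import numpy as np

def r_squared(sensor_t, sensor_v, cam_t, cam_v):
    """Squared Pearson correlation between the sensor's RSSI-difference signal
    and a camera-derived signal (head x, pupil x, or pupil-eye-corner distance),
    after resampling the camera stream onto the sensor timestamps."""
    cam_on_sensor = np.interp(sensor_t, cam_t, cam_v)   # align the two streams
    r = np.corrcoef(sensor_v, cam_on_sensor)[0, 1]
    return r ** 2

# Illustrative usage with synthetic signals (not study data):
t = np.linspace(0, 60, 600)                      # 60 s of sensor samples
sensor = np.sin(0.3 * t) + 0.1 * np.random.randn(t.size)
cam_t = np.linspace(0, 60, 1800)                 # 30 Hz camera stream
cam = np.sin(0.3 * cam_t)
print(round(r_squared(t, sensor, cam_t, cam), 2))
```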

3.2. Results

In general, the sensor showed a lower RSSI on the human head (−47.25 dBm) than in the laboratory test on the manikin head (−33.77 dBm) when the heads were aligned with the reader. The materials in the human body (water, tissue, etc.) absorb and scatter RF signals more than the synthetic materials of the manikin, which may reduce the strength of the received signals at both the tag and the reader. The sensor based on passive RFID tags demonstrated a strong correlation with both head and eye pupil movements when the user randomly rotated their head (Figure 7a). Specifically, the sensor had a correlation coefficient ($ {r}^2 $) of 0.88 with head pose and 0.83 with left eye pupil movement. However, the distance between the pupil and the eye corner showed a relatively lower coefficient ($ {r}^2 $ = 0.50), indicating a limitation in the current sensor system’s ability to track gaze rather than just head pose.

Figure 7. Evaluation results. (a) Mean correlation coefficient between the sensor and the head, eye pupil, and distance between the eye pupil and the eye corner during random head rotations. The error bars represent the standard error. (b) Head orientation from the vision system and the sensor during 5-min free laptop usage. The yaw angle of the head orientation is represented by the x-coordinates of the direction of the head pose projected onto a 2D plane. The range of values for yaw is 0–720. The center of the screen is indicated by a horizontal gray dotted line, representing the direction of looking straight ahead.

Meanwhile, the sensor’s performance during free laptop usage was not as strong as during the random, slow, and continuous head rotations. The correlation coefficients ($ {r}^2 $) between the sensor and the head, the left eye pupil, and the pupil-eye corner distance were 0.59, 0.49, and 0.30, respectively. During free laptop usage, there was less head movement than during the intentional and continuous head rotations, and the noise became more prominent (Figure 7b). The lower signal-to-noise ratio (SNR) during free laptop usage (−1.32 $ \pm $ 6.32 dB) compared to the intentional movements (9.90 $ \pm $ 6.81 dB) confirms the greater influence of noise.
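
How the SNR was computed is not detailed in the text; one plausible decomposition, sketched below, treats the smoothed RSSI-difference trace as signal and the residual as noise. This is an assumption for illustration only.

```python
import numpy as np

def snr_db(raw, smoothed):
    """SNR in dB, treating the smoothed RSSI-difference trace as signal and
    the residual (raw minus smoothed) as noise. Assumed decomposition; the
    paper does not specify its SNR definition."""
    raw, smoothed = np.asarray(raw, float), np.asarray(smoothed, float)
    noise = raw - smoothed
    return 10 * np.log10(np.var(smoothed) / np.var(noise))
```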

The demographic factors of sex, age, and height did not significantly influence or relate to the performance of the current sensor (Figure 8). The Wilcoxon rank sum test, a nonparametric method, did not find any significant difference (p > 0.05) between the two sexes. Similarly, the correlation tests of height and age against the sensor’s correlation coefficients did not yield any significant results (p > 0.05 for both).
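
These checks can be reproduced along the following lines with SciPy; the per-participant values below are placeholders, not the study’s data.

```python
from scipy import stats

# Placeholder per-participant r^2 values grouped by sex, plus demographics;
# numbers are illustrative only.
r2_female = [0.91, 0.85, 0.88, 0.90, 0.84, 0.87, 0.89]
r2_male = [0.86, 0.90, 0.88]
heights = [160, 158, 172, 165, 181, 170, 159, 175, 168, 177]
ages = [22, 24, 31, 20, 26, 23, 35, 21, 28, 25]
r2_all = r2_female + r2_male

stat, p_sex = stats.ranksums(r2_female, r2_male)  # Wilcoxon rank sum test
_, p_height = stats.pearsonr(heights, r2_all)     # height vs. sensor performance
_, p_age = stats.pearsonr(ages, r2_all)           # age vs. sensor performance
print(p_sex, p_height, p_age)
```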

Figure 8. Correlation coefficient between the sensor and the ground truth by (a) sex, (b) height, and (c) age of the participants.

The glasses-type sensor contained two pairs of RFID tags to reduce the vulnerability of the passive tags’ RSSI to the multipath effect, which also made it possible to explore applying the same rotation-sensing principle to head rotation in the pitch direction. This is particularly relevant for children with ASD, who tend to look downwards (Noris et al., 2012). However, the distance between the vertically arranged RFID tags was only 25 mm (Figure 9a), which was too small to create a noticeable difference in the distance from the reader under head pitch rotation. This is consistent with the poor performance of the closely spaced pairs in Figure 4. Although the experiment detected a clear trend in RSSI (Figure 9b), it could not be confirmed that the difference represented head or gaze movements, due to the low correlation between the RSSI readings and the random and continuous head rotations ($ {r}^2 $ = 0.35). Furthermore, the correlation between the sensor readings and the head movements during free laptop usage was even lower ($ {r}^2 $ = 0.24).

Figure 9. Measurement for pitch rotation. (a) The prototype with vertically arranged tags to measure head rotation in pitch, (b) head movements in pitch and the RSSI difference between the vertically arranged tags. The pitch angle of the head orientation was represented by the y-coordinates of the direction of the head pose projected onto a 2D plane parallel to the RFID reader antenna. The range of values for pitch is 0–480. The center of the screen is indicated by a horizontal gray dotted line, representing the direction of looking straight ahead.

4. Discussion

This study presents a soft and battery-free wearable sensor based on passive RFID tags to detect the head orientation of the wearer. The sensor showed a high correlation with camera-based head orientation tracking, demonstrating the potential to replace vision-based systems with the new battery-free sensor. The system had higher sensing capability at a distance of 50 cm from the reader (Figure 3), making it more suitable for tablet or laptop usage than for smartphones or televisions. Although the current study used PET film-based tags for eyeglasses, textile tags could be a viable alternative when implementing the system in other accessories, such as headpieces or caps, due to their superior wearability and ease of maintenance. Meanwhile, the free laptop usage scenario showed much weaker performance than the random and continuous rotation because of noise during the static condition. This is a critical limitation to overcome in future studies, as the user scenario in this study, stationary media viewing, does not involve much head movement. Using more than two pairs of tags, as well as a faster sampling rate, would help reduce noise when the wearer does not move the head much. Regarding the high correlation between the eye pupil and the sensor, it may indicate that participants moved their gaze along with their head movements, even though we asked them to fixate on the center of the screen. This is likely due to the natural tendency to move the eyes in the direction of the head, demonstrating the potential of the current sensor to capture gaze movements. Nevertheless, it is also possible that the camera-based system did not track gaze appropriately, which needs to be investigated in future studies. Though pitch rotation measurement was not successful with the vertically arranged tags in the current prototype (Figure 9), other wearable devices that cover a larger area of the face, such as masks or VR headsets, may provide enough distance for pitch sensing. In that case, the head circumference of children with ASD should also be carefully considered in a product designed specifically for them, as their head size tends to be smaller than that of adults but larger than that of typically developing children (Sacco et al., 2015).

The current sensor can be used to optimize treatments for populations with above-neck mobility needs, such as those with cervical disc disorders or those undergoing rehabilitation. Additionally, the sensor can be applied to evaluate sleep quality based on head pose and movements, which can be useful for developing sleeping masks (Sharma and Kan, 2018b). The sensor’s head rotation measurement capability can also be utilized in entertainment systems for VR or AR experiences, enhancing user interaction with physical environments. For instance, cardboard 3D glasses can take full advantage of the battery-free motion sensing capability of the current system for improved interactions. As a single RFID reader can detect an unlimited number of tags within its readable range, this experience can be provided to multiple users who are looking in the same direction, such as in a museum, theater, or classroom (Jiang et al., 2019). Similarly, viewing media on a tablet or monitor while seated can align the head, gaze, and screen. In this sense, although the sensor was not successful in tracking the pupil-eye corner distance representing side looking, head orientation monitoring itself may contribute to tracking or diagnosing lateral gaze behavior in children with ASD.

The battery-free rotation sensing capability can be applied not only to wearable devices but also to other human–computer interaction applications. For instance, in retail stores, an interactive screen with an RFID reader can respond to a product carrying a pair of RFID tags when the customer rotates or shakes it. This enables the screen to track the orientation of the product and display it like a magic mirror, without using a camera. In an educational setting, RFID tag-embedded toys or materials can be used to create interactive and immersive learning experiences when paired with an RFID reader-embedded table or display. Furthermore, RFID technology has been studied for indoor localization of robot agents, and the current method can help identify the direction in which an agent is heading, improving overall localization (Zhang et al., 2023).

The current study validated the concept with a small number of healthy adults. The results showed equivalent sensor performance across demographic factors, but further investigation is required to assess feasibility in the target population of younger children with smaller body/head size. Although this study has assumed a scenario where the wearer is watching media on a screen, the requirement that the wearer remain seated stationary to ensure performance is a significant limitation of this sensor. Achieving stable reading of passive RFID tag sensors remains a significant challenge due to the dependence of RF signal strength on the surrounding environment. To eliminate weak signals coming through multi-paths and exclude interference from obstacles, other RFID tags, or metal, the study assumed a short distance between the reader and tag. However, the presence of these factors can significantly impact sensor functionality.

Currently available portable devices on the market, such as smartphones or laptops, only include RFID readers operating at lower frequencies (HF, high frequency; NFC, near-field communication). RFID reader-embedded devices operating at UHF are promising for larger detection distances and more versatile applications in everyday sensing. RF exposure of human tissue is another important factor that must be carefully considered for the safety of the user, especially when the reader is close to the body. The U.S. Federal Communications Commission (FCC) requires the specific absorption rate (SAR) to be less than 1.6 W/kg averaged over 1 g of human tissue, and the transmitting power of the RFID reader used in this study is 32 dBm (1.58 W), which is significant at close distances. Therefore, the transmission power of the RF signals must be carefully adjusted to ensure both the safety of users and the optimal and stable performance of the sensor.
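
For reference, the quoted transmit power converts from dBm to watts as $ {P}_{\mathrm{T}}={10}^{32/10}\;\mathrm{mW}\approx 1585\;\mathrm{mW}\approx 1.58\;\mathrm{W} $.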

5. Conclusion

This study introduces a battery-free wearable head orientation sensor using passive RFID tags. The RSSI of the RFID tags changes with the distance to the reader, which in turn depends on the head rotation in yaw or pitch. Preliminary lab tests confirmed that settings such as 13 cm between tags, 50 cm from the reader, and PET-based tags yield optimal performance. Testing with human participants showed that the RSSI difference between the tags correlated with head orientation data collected by a computer vision-based system, especially when the wearer continuously and randomly rotated their head rather than freely using the laptop. However, the sensor could not track pitch rotation or the wearer’s gaze, which remain possibilities for extending the current study.

Data availability statement

The data that support the findings of this study are available on request from the corresponding author, H.T.P. The data are not publicly available due to their containing information that could compromise the privacy of research participants.

Acknowledgments

The authors express gratitude to all participants who volunteered for the study.

Authorship contribution

J.J. conceptualized, carried out, and analyzed the system design and evaluation under the supervision of H.T.P. All authors contributed to and approved the final manuscript.

Funding statement

This study was partially supported by the Dissertation Research Fund of the College of Human Ecology, Cornell University.

Competing interest

The authors declare that no competing interests exist.

Ethical standard

This study was conducted under the approval of the Institutional Review Board (IRB) of Cornell University (IRB0143675, approved on 4/1/2022).

References

Aflalo, A (2022) GitHub – amitt1236/Gaze_estimation: Gaze tracking [Online; accessed 2023-05-14]. https://github.com/amitt1236/Gaze_estimation
Ali, MR, Sen, T, Li, Q, Langevin, R, Myers, T, Dorsey, ER, Sharma, S and Hoque, E (2021) Analyzing head pose in remotely collected videos of people with Parkinson’s disease. ACM Transactions on Computing for Healthcare 2(4), 1–13. https://doi.org/10.1145/3459669
Bizzi, E (1974) The coordination of eye-head movements. Scientific American 231(4), 100–109.
Blattgerste, J, Renner, P and Pfeiffer, T (2018) Advantages of eye-gaze over head-gaze-based selection in virtual and augmented reality under varying field of views. Proceedings of the Workshop on Communication by Gaze Interaction, 1–9. https://doi.org/10.1145/3206343.3206349
Bolic, M, Simplot-Ryl, D and Stojmenovic, I (2010) RFID Systems: Research Trends and Challenges. John Wiley & Sons.
Bowyer, K (2004) Face recognition technology: Security versus privacy. IEEE Technology and Society Magazine 23(1), 9–19. https://doi.org/10.1109/MTAS.2004.1273467
Chang, Z, Di Martino, JM, Aiello, R, Baker, J, Carpenter, K, Compton, S, Davis, N, Eichner, B, Espinosa, S, Flowers, J, Franz, L, Harris, A, Howard, J, Perochon, S, Perrin, EM, Krishnappa Babu, PR, Spanos, M, Sullivan, C, Walter, BK, et al (2021) Computational methods to measure patterns of gaze in toddlers with autism spectrum disorder. JAMA Pediatrics 175(8), 827–836. https://doi.org/10.1001/jamapediatrics.2021.0530
Coulter, RA (2009) Understanding the visual symptoms of individuals with autism spectrum disorder (ASD). Optometry and Vision Development 40(3), 164–175.
Developers, G (n.d.) MediaPipe [Online; accessed 2023-05-08]. https://developers.google.com/mediapipe
Flanagan, JE, Landa, R, Bhat, A and Bauman, M (2012) Head lag in infants at risk for autism: A preliminary study. The American Journal of Occupational Therapy 66(5), 577–585. https://doi.org/10.5014/ajot.2012.004192
Hellendoorn, A, Langstraat, I, Wijnroks, L, Buitelaar, JK, van Daalen, E and Leseman, PP (2014) The relationship between atypical visual processing and social skills in young children with autism. Research in Developmental Disabilities 35(2), 423–428. https://doi.org/10.1016/j.ridd.2013.11.012
Hustyi, KM, Ryan, AH and Hall, SS (2023) A scoping review of behavioral interventions for promoting social gaze in individuals with autism spectrum disorder and other developmental disabilities. Research in Autism Spectrum Disorders 100, 102074. https://doi.org/10.1016/j.rasd.2022.102074
Jiang, B, Xu, W, Guo, C, Liu, W and Cheng, W (2019) A classroom concentration model based on computer vision. Proceedings of the ACM Turing Celebration Conference – China, 1–6. https://doi.org/10.1145/3321408.3322856
Jin, H, Wang, J, Yang, Z, Kumar, S and Hong, J (2018) RF-Wear: Towards wearable everyday skeleton tracking using passive RFIDs. Proceedings of the 2018 ACM International Joint Conference and 2018 International Symposium on Pervasive and Ubiquitous Computing and Wearable Computers, 369–372.
Jo, J and Park, H (2021) RFInsole: Batteryless gait-monitoring smart insole based on passive RFID tags. 2021 International Symposium on Wearable Computers, 141–143. https://doi.org/10.1145/3460421.3478810
Kamoshida, R and Takemura, K (2018) Head pose classification by using body-conducted sound. Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology, 39–41. https://doi.org/10.1145/3266037.3266094
Kiourti, A (2018) RFID antennas for body-area applications: From wearables to implants. IEEE Antennas and Propagation Magazine 60(5), 14–25. https://doi.org/10.1109/MAP.2018.2859167
Langton, SRH (2000) The mutual influence of gaze and head orientation in the analysis of social attention direction. The Quarterly Journal of Experimental Psychology Section A 53(3), 825–845. https://doi.org/10.1080/713755908
LaValle, SM, Yershova, A, Katsev, M and Antonov, M (2014) Head tracking for the Oculus Rift. 2014 IEEE International Conference on Robotics and Automation (ICRA), 187–194. https://doi.org/10.1109/ICRA.2014.6906608
Lee, SJ, Jo, J, Jung, HG, Park, KR and Kim, J (2011) Real-time gaze estimator based on driver’s head orientation for forward collision warning system. IEEE Transactions on Intelligent Transportation Systems 12(1), 254–267. https://doi.org/10.1109/TITS.2010.2091503
Li, H, Brockmeyer, E, Carter, EJ, Fromm, J, Hudson, SE, Patel, SN and Sample, A (2016) PaperID: A technique for drawing functional battery-free wireless interfaces on paper. Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems, 5885–5896. https://doi.org/10.1145/2858036.2858249
Luo, C, Gil, I and Fernández-García, R (2020) Wearable textile UHF-RFID sensors: A systematic review. Materials 13(15), 3292. https://doi.org/10.3390/ma13153292
Manzari, S, Occhiuzzi, C and Marrocco, G (2012) Feasibility of body-centric systems using passive textile RFID tags. IEEE Antennas and Propagation Magazine 54(4), 49–62. https://doi.org/10.1109/MAP.2012.6309156
Matlab (n.d.) 1-D data interpolation (table lookup) – MATLAB interp1 [Online; accessed 2024-03-24]. https://www.mathworks.com/help/matlab/ref/interp1.html
Meng, Z and Li, Z (2016) RFID tag as a sensor – A review on the innovative designs and applications. Measurement Science Review 16(6), 305–315. https://doi.org/10.1515/msr-2016-0039
Mihajlovic, Z, Popovic, S, Brkic, K and Cosic, K (2018) A system for head-neck rehabilitation exercises based on serious gaming and virtual reality. Multimedia Tools and Applications 77(15), 19113–19137. https://doi.org/10.1007/s11042-017-5328-z
Mottron, L, Mineau, S, Martel, G, Bernier, CS-C, Berthiaume, C, Dawson, M, Lemay, M, Palardy, S, Charman, T and Faubert, J (2007) Lateral glances toward moving stimuli among young children with autism: Early regulation of locally oriented perception? Development and Psychopathology 19(1). https://doi.org/10.1017/S0954579407070022
Nath, B, Reynolds, F and Want, R (2006) RFID technology and applications. IEEE Pervasive Computing 5(1), 22–24. https://doi.org/10.1109/MPRV.2006.13
Noris, B, Nadel, J, Barker, M, Hadjikhani, N and Billard, A (2012) Investigating gaze of children with ASD in naturalistic settings. PLoS ONE 7(9), e44144. https://doi.org/10.1371/journal.pone.0044144
Raya, R, Rocon, E, Ceres, R, Harlaar, J and Geytenbeek, J (2011) Characterizing head motor disorders to create novel interfaces for people with cerebral palsy: Creating an alternative communication channel by head motion. 2011 IEEE International Conference on Rehabilitation Robotics, 1–6. https://doi.org/10.1109/ICORR.2011.5975409
Renner, P and Pfeiffer, T (2017) Augmented reality assistance in the central field-of-view outperforms peripheral displays for order picking: Results from a virtual reality simulation study. 2017 IEEE International Symposium on Mixed and Augmented Reality (ISMAR-Adjunct), 176–181. https://doi.org/10.1109/ISMAR-Adjunct.2017.59
Saavedra, S, Woollacott, M and van Donkelaar, P (2010) Head stability during quiet sitting in children with cerebral palsy: Effect of vision and trunk support. Experimental Brain Research 201(1), 13–23. https://doi.org/10.1007/s00221-009-2001-4
Sacco, R, Gabriele, S and Persico, AM (2015) Head circumference and brain size in autism spectrum disorder: A systematic review and meta-analysis. Psychiatry Research: Neuroimaging 234(2), 239–251. https://doi.org/10.1016/j.pscychresns.2015.08.016
Salinas-Bueno, I, Roig-Maimó, MF, Martínez-Bueso, P, San-Sebastián-Fernández, K, Varona, J and Mas-Sansó, R (2021) Camera-based monitoring of neck movements for cervical rehabilitation mobile applications. Sensors 21(6), 2237. https://doi.org/10.3390/s21062237
Severin, I-C (2020) Head posture monitor based on 3 IMU sensors: Consideration toward healthcare application. 2020 International Conference on e-Health and Bioengineering (EHB), 1–4. https://doi.org/10.1109/EHB50910.2020.9280106
Sharma, P and Kan, EC (2018a) Sleep scoring with a UHF RFID tag by near field coherent sensing. 2018 IEEE/MTT-S International Microwave Symposium – IMS, 1419–1422. https://doi.org/10.1109/MWSYM.2018.8439216
Sharma, P and Kan, EC (2018b) Sleep scoring with a UHF RFID tag by near field coherent sensing. IEEE/MTT-S International Microwave Symposium – IMS 2018, 1419–1422. https://doi.org/10.1109/MWSYM.2018.8439216
Sidenmark, L and Gellersen, H (2019) Eye&Head: Synergetic eye and head movement for gaze pointing and selection. Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology, 1161–1174. https://doi.org/10.1145/3332165.3347921
Špakov, O, Istance, H, Räihä, K-J, Viitanen, T and Siirtola, H (2019) Eye gaze and head gaze in collaborative games. Proceedings of the 11th ACM Symposium on Eye Tracking Research & Applications, 1–9. https://doi.org/10.1145/3317959.3321489
Takahashi, K, Kadone, H and Suzuki, K (2011) Head orientation sensing by a wearable device for assisted locomotion. Proceedings of the 2nd Augmented Human International Conference – AH ’11, 1–4. https://doi.org/10.1145/1959826.1959842
Tamaru, Y, Ozaki, Y, Okafuji, Y, Nakanishi, J, Yoshikawa, Y and Baba, J (2022) 3D head-position prediction in first-person view by considering head pose for human-robot eye contact. 2022 17th ACM/IEEE International Conference on Human-Robot Interaction (HRI), 1064–1068. https://doi.org/10.1109/HRI53351.2022.9889437
Tepencelik, ON, Wei, W, Chukoskie, L, Cosman, PC and Dey, S (2021) Body and head orientation estimation with privacy preserving LiDAR sensors. 2021 29th European Signal Processing Conference (EUSIPCO), 766–770. https://doi.org/10.23919/EUSIPCO54536.2021.9616111
Tobii (2023) Tobii: Global leader in eye tracking for over 20 years [Online; accessed 2023-05-03]. https://www.tobii.com/
Velasco, MA, Clemotte, A, Raya, R, Ceres, R and Rocon, E (2017) Human–computer interaction for users with cerebral palsy based on head orientation. Can cursor’s movement be modeled by Fitts’s law? International Journal of Human-Computer Studies 106, 1–9. https://doi.org/10.1016/j.ijhcs.2017.05.002
Williams, MR and Kirsch, RF (2008) Evaluation of head orientation and neck muscle EMG signals as command inputs to a human–computer interface for individuals with high tetraplegia. IEEE Transactions on Neural Systems and Rehabilitation Engineering 16(5), 485–496. https://doi.org/10.1109/TNSRE.2008.2006216
Williams, MR and Kirsch, RF (2016) Case study: Head orientation and neck electromyography for cursor control in persons with high cervical tetraplegia. Journal of Rehabilitation Research and Development 53(4), 519–530. https://doi.org/10.1682/JRRD.2014.10.0244
Yamanobe, Y, Fujioka, M, Ohashi, M and Ozawa, H (2022) Potential usefulness of tracking head movement via a wearable device for equilibrium function testing at home. Journal of Medical Systems 46(11), 80. https://doi.org/10.1007/s10916-022-01874-4
Zhang, Z and Kan, EC (2022) Radiooculogram (ROG) for eye movement sensing with eyes closed. IEEE Sensors 2022, 1–4. https://doi.org/10.1109/SENSORS52175.2022.9967173
Zhang, B, Chen, Y-H, Tuna, C, Dave, A, Li, Y, Lee, E and Hartmann, B (2014) HOBS: Head orientation-based selection in physical spaces. Proceedings of the 2nd ACM Symposium on Spatial User Interaction, 17–25. https://doi.org/10.1145/2659766.2659773
Zhang, Z, Xu, G and Kan, EC (2023) Outlooks for UHF RFID-based autonomous retails and factories. IEEE Journal of Radio Frequency Identification 7, 12–19. https://doi.org/10.1109/JRFID.2022.3211474
Zhao, Y, Görne, L, Yuen, I-M, Cao, D, Sullman, M, Auger, D, Lv, C, Wang, H, Matthias, R, Skrypchuk, L and Mouzakitis, A (2017) An orientation sensor-based head tracking system for driver behaviour monitoring. Sensors 17(11), 2692. https://doi.org/10.3390/s17112692