Today the US Patent & Trademark Office published an Apple granted patent that relates to user interfaces for interacting with a future mixed reality headset (HMD) using only eye gaze.
Apple’s granted patent describes techniques for interacting with an HMD using eye gaze. According to some embodiments, a user selects a text input field displayed on the HMD display using only their eyes.
The techniques provide a more natural and efficient interface by, in some exemplary embodiments, allowing a user to identify where text is to be entered primarily with eye gaze.
The techniques are advantageous for virtual reality, augmented reality, and mixed reality devices and applications.
The techniques can also be applied to conventional user interfaces on devices such as desktop computers, laptops, tablets, and smartphones.
Apple’s patent FIG. 2 below depicts a top view of a user (#200) whose gaze is focused on an object (#210). The user’s gaze is defined by the visual axes of each of the user’s eyes (as depicted by rays 201A and 201B). The direction of the visual axes defines the user’s gaze direction, and the distance at which the axes converge defines the gaze depth.
The gaze direction can also be referred to as the gaze vector or line-of-sight. In FIG. 2, the gaze direction is in the direction of the object and the gaze depth is the distance D, relative to the user. The gaze direction and/or gaze depth are characteristics used to determine a gaze location.
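Under a simplifying assumption not stated in the patent — that the two visual axes converge symmetrically on a point straight ahead of the user — the gaze depth D follows from basic trigonometry on the interpupillary distance and the vergence angle between the axes. A minimal sketch (the function name and parameters are illustrative, not from the patent):

```python
import math

def gaze_depth(ipd_mm: float, vergence_deg: float) -> float:
    """Estimate gaze depth (mm) from the interpupillary distance and the
    total vergence angle between the two visual axes, assuming the axes
    converge symmetrically on a point straight ahead of the user.
    """
    half_angle = math.radians(vergence_deg) / 2.0
    # Each eye sits ipd/2 from the midline; its axis meets the midline
    # at depth (ipd/2) / tan(half the vergence angle).
    return (ipd_mm / 2.0) / math.tan(half_angle)

# Example: a 64 mm IPD with a 4-degree vergence angle puts the gaze
# point a bit under a meter away.
depth = gaze_depth(64.0, 4.0)
```

Note how sensitive depth is to vergence: as the angle shrinks toward zero (eyes nearly parallel), the estimated depth grows without bound, which is why gaze depth is most reliable at near distances.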
In some embodiments, the center of the user’s cornea, the center of the user’s pupil, and/or the center of rotation of the user’s eyeball are determined to determine the position of the visual axis of the user’s eye. Accordingly, the center of the user’s cornea, the center of the user’s pupil, and/or the center of rotation of the user’s eyeball can be used to determine the user’s gaze direction and/or gaze depth.
In some embodiments, gaze depth is determined based on a point of convergence of the visual axes of the user’s eyes (or a location of minimum distance between the visual axes of the user’s eyes), or some other measurement of the focus of a user’s eye(s). Optionally, the gaze depth is used to estimate the distance at which the user’s eyes are focused.
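In practice the two visual axes rarely intersect exactly, which is why the patent mentions the "location of minimum distance" between them. A common geometric approach (a sketch under that reading, not code from the patent) is to find the shortest segment between the two axes, treated as 3D rays, and take its midpoint as the gaze point:

```python
def closest_point_between_axes(p1, d1, p2, d2):
    """Return the midpoint of the shortest segment between two visual
    axes, each given as an origin (eye position) and a direction vector.
    The midpoint serves as the estimated 3D gaze point even when the
    axes do not intersect exactly; returns None for parallel axes.
    """
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    w0 = [a - b for a, b in zip(p1, p2)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b
    if abs(denom) < 1e-9:          # axes are (nearly) parallel: no convergence
        return None
    t = (b * e - c * d) / denom    # parameter of closest point on axis 1
    s = (a * e - b * d) / denom    # parameter of closest point on axis 2
    q1 = [p + t * di for p, di in zip(p1, d1)]
    q2 = [p + s * di for p, di in zip(p2, d2)]
    return [(u + v) / 2.0 for u, v in zip(q1, q2)]
```

The distance from the midpoint of the two eye origins to the returned point then gives the gaze depth described above.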
Apple’s patent FIG. 4 below illustrates a head-mounted display (HMD) with a built-in gaze sensor. A user could look at a form, for example, in a VR or real-world environment and direct text input to specific areas of that form as the gaze sensor tracks the user’s focus. The technology is precise enough that a small gaze shift from the “First Name” field to the adjacent “Last Name” field can be detected accurately, letting the user fill in the next box without using a mouse.
Apple notes that the gaze sensor (#410) is directed toward a user and, during operation, captures characteristics of the user’s eye gaze, such as image data of the eyes of the user.
In some embodiments, the gaze sensor includes an event camera that detects event data from a user (e.g., the user’s eyes) based on changes in detected light intensity over time and uses the event data to determine gaze direction and/or gaze depth.
Optionally, the HMD uses both image data and event data to determine gaze direction and/or gaze depth. Optionally, the HMD uses ray casting and/or cone casting to determine the gaze direction and/or gaze depth. In some embodiments, multiple gaze sensors are used.
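The ray casting mentioned here is what turns a gaze direction into a selected UI element, as in the form-filling scenario above. A minimal sketch of the idea, assuming the form lies on a flat plane at a known depth (the field names and layout are hypothetical):

```python
def pick_field(origin, direction, fields, plane_z):
    """Cast a gaze ray from `origin` along `direction` onto a UI plane
    at depth `plane_z`, and return the name of the first text field
    whose rectangle contains the hit point. `fields` maps a field name
    to its (x_min, y_min, x_max, y_max) bounds on that plane.
    """
    if abs(direction[2]) < 1e-9:       # ray parallel to the UI plane
        return None
    t = (plane_z - origin[2]) / direction[2]
    if t < 0:                          # plane is behind the user
        return None
    x = origin[0] + t * direction[0]
    y = origin[1] + t * direction[1]
    for name, (x0, y0, x1, y1) in fields.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

# Hypothetical two-field form on a plane 500 units in front of the user.
form = {"first_name": (-200, 50, -10, 90),
        "last_name": (10, 50, 200, 90)}
```

A slight rightward shift in the gaze ray moves the hit point from one rectangle to the other, which is how a gaze change from “First Name” to “Last Name” selects the next field without a mouse. Cone casting generalizes this by testing a cone of rays around the gaze direction to tolerate sensor noise.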
For more details, review Apple’s granted patent 11,314,396.