In the past few years, we have covered various gaze- and gesture-based input methods for computers. As researchers at Carnegie Mellon University and Harvard have shown, the two methods can be quite complementary: combining gaze and gesture makes interacting with digital interfaces both easier and more accurate.
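The division of labor is intuitive: gaze is fast and precise at pointing, while hand gestures are expressive at issuing commands. Here is a minimal sketch of that fusion in Python; the window list, gesture names, and the idea of feeding in a gaze point and a recognized gesture are all hypothetical stand-ins for whatever eye tracker and gesture recognizer a real system would use.

```python
# Minimal sketch of gaze + gesture fusion: gaze resolves WHICH on-screen
# object the user means, the gesture resolves WHAT to do with it.
# Inputs (gaze point, gesture label, window layout) are hypothetical.

def fuse(gaze_point, gesture, windows):
    """Return (target_window, action) by combining gaze and gesture."""
    gx, gy = gaze_point
    # Gaze picks the target: the window under the gaze point, if any.
    target = next((w for w in windows
                   if w["x"] <= gx < w["x"] + w["w"]
                   and w["y"] <= gy < w["y"] + w["h"]), None)
    # Gesture picks the action to apply to that target.
    actions = {"pinch": "grab", "swipe_up": "maximize", "swipe_down": "dismiss"}
    return target, actions.get(gesture)

windows = [
    {"id": "editor", "x": 0, "y": 0, "w": 800, "h": 600},
    {"id": "notification", "x": 820, "y": 0, "w": 300, "h": 100},
]

# Looking at the notification while swiping down dismisses it.
target, action = fuse((900, 50), "swipe_down", windows)
print(target["id"], action)  # → notification dismiss
```

Neither channel alone suffices here: the same swipe over the editor would act on the editor instead, and gaze alone says nothing about what the user wants done.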
As the above video suggests, Gaze+Gesture outperforms systems that use gaze or gesture alone. With such a system, you can grab windows, interact with files, and control notifications. What do you think? How would you improve this input method?