Human vision works very differently from the lens of a camera: only the very center of our field of view is in sharp focus, while the periphery is less detailed and slightly blurry.
Cameras, on the other hand, are designed to capture images that are entirely in focus. This is because viewers will look at many different parts of an image or video, and keeping only a small area in focus is usually not feasible. There are exceptions to this rule, but they are rarely used in general media applications.
The same principle has applied to video games for decades. While no real camera is involved, the image displayed on a computer or television screen is in focus everywhere, not just where the player happens to be looking.
Graphics card giant Nvidia has incorporated eye tracking into virtual reality headsets. By monitoring exactly where the user's eyes are aimed, the hardware knows which area of the screen needs to be in focus and can render the rest of the field of view at reduced quality.
If done correctly, the eye won’t be able to perceive any reduction in quality in its peripheral view. So why go through the trouble of developing technology that, when working properly, no one is supposed to notice?
The advantage of this method of digital rendering is that less computational power is spent on areas of the screen that aren't being looked at. This allows higher quality images to be displayed on less expensive, lower-end hardware.
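The idea, often called foveated rendering, can be sketched as choosing a quality level for each region of the screen based on its distance from the gaze point. The function and threshold values below are illustrative assumptions, not taken from any shipping implementation:

```python
import math

def shading_rate(tile_center, gaze, full_res_radius=0.1, blend_radius=0.3):
    """Return a quality multiplier (1.0 = full resolution) for a screen
    tile, based on its distance from the user's gaze point.

    Coordinates are normalized to [0, 1]; the radii are hypothetical
    tuning values chosen for illustration.
    """
    dx = tile_center[0] - gaze[0]
    dy = tile_center[1] - gaze[1]
    d = math.hypot(dx, dy)
    if d <= full_res_radius:
        return 1.0            # foveal region: render at full quality
    if d >= blend_radius:
        return 0.25           # far periphery: quarter resolution
    # Smooth linear falloff between the two radii, so the boundary
    # between sharp and blurry regions is not visible.
    t = (d - full_res_radius) / (blend_radius - full_res_radius)
    return 1.0 - 0.75 * t
```

A renderer would evaluate this per tile each frame, re-centering the full-quality zone as the eye tracker reports new gaze positions; the gradual falloff is what keeps the quality reduction imperceptible in the periphery.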
The question now is whether the eye tracking technology can be added for less than the cost it saves by allowing less powerful rendering hardware.