In 2002, the hit movie “Minority Report” offered us a glimpse of what the world might look like in 2054. Watching the film now, we realize that some of its technology has arrived sooner than the writers predicted. And while the Tom Cruise flick offered a dystopian view of future technology, especially facial recognition, that same tech is now being used for good. The story remains a helpful reminder to keep an eye on the ethics of tech adoption, but we should also pay attention to the ways these advances are changing our world for the better.
Computer vision draws on multiple scientific disciplines to explore how computers can gain a higher level of understanding from digital imagery. Many of us are used to unlocking our phones and tablets with facial recognition, but this is only the beginning. With the help of artificial intelligence (AI), facial recognition can do far more than unlock a device.
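To make that idea a little more concrete, here is a minimal sketch of how a program might find faces in a photo using OpenCV’s bundled Haar cascade detector. The image file name is just a placeholder, and real phone-unlock systems use far more sophisticated models than this:

```python
import cv2

# Load OpenCV's bundled Haar cascade for frontal faces
cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

# Read an image and convert it to grayscale for detection
# ("photo.jpg" is a placeholder file name)
image = cv2.imread("photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Detect face bounding boxes
faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Draw a rectangle around each detected face and save the result
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("faces_detected.jpg", image)
```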
While many worry about the negative impact on our privacy, facial recognition, used correctly, could be one of the best ways to protect our privacy and identity. The better machines get at recognition, the less room there is for confusion about who we are. Identity theft could become a thing of the past. Future systems may be able to make a positive ID for travel, purchasing, and banking that is nearly 100% accurate.
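As a rough illustration of how such a positive ID might work under the hood, the open-source face_recognition library can turn each face into a numeric embedding and measure how far apart two embeddings are. The file names below are placeholders, and the sketch assumes each photo contains exactly one face:

```python
import face_recognition

# Load a reference photo (e.g., from enrollment) and a new capture
enrolled = face_recognition.load_image_file("enrolled_id.jpg")
attempt = face_recognition.load_image_file("login_attempt.jpg")

# Compute a 128-dimensional embedding for the first face found in each image
enrolled_encoding = face_recognition.face_encodings(enrolled)[0]
attempt_encoding = face_recognition.face_encodings(attempt)[0]

# Smaller distance means more similar faces;
# compare_faces applies the library's default threshold of 0.6
distance = face_recognition.face_distance([enrolled_encoding], attempt_encoding)[0]
is_match = face_recognition.compare_faces([enrolled_encoding], attempt_encoding)[0]

print(f"Distance: {distance:.3f}  Match: {is_match}")
```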
Newer facial recognition systems can also detect human emotion, which may enable your smart devices to learn what you need, when you need it. The subtle nuances of the human face contain a vast number of data points, many of which other humans register only subconsciously. The more of this information computers can capture and translate, the better our machines can tailor their work to our needs.
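For a sense of where those data points come from, here is a small sketch using the same face_recognition library to extract named facial landmarks such as the eyes, eyebrows, nose, and lips. An emotion model would take points like these as input; the file name is again a placeholder:

```python
import face_recognition

# Load a photo ("expression.jpg" is a placeholder) and extract landmark points
image = face_recognition.load_image_file("expression.jpg")
landmarks_list = face_recognition.face_landmarks(image)

for landmarks in landmarks_list:
    # Each face yields named groups of (x, y) points: chin, eyebrows, eyes, nose, lips
    for feature, points in landmarks.items():
        print(f"{feature}: {len(points)} points")
```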
Of course, as “Minority Report” taught us, ethicists will always need to keep their own eyes on how we use this technology.