I've been thinking a lot recently about how our bodies, and especially our faces — the most intimate aspect of our physical selves, synonymous with our very identities — have become machine readable, little more than QR codes to the eyes of increasingly ubiquitous webcams and CCTV. What are the psychosocial and cultural implications of this? How will we adapt? I took a stab at addressing these questions in an article for Wall Street Journal MarketWatch, which (given the publication's orientation) also has a business-analysis element to it.
Are we ready to live in a world that not only has ubiquitous cameras, but also the capacity to find us wherever we may be, and to guess how we feel, what we’re thinking, and what we’re going to do next, with ever greater accuracy and speed? At the very least, this will profoundly change the way we express ourselves physically. (Exhibit A: the selfie duck face.) Furthermore, the “smarter” our cameras get, the dumber we can afford to be. Once we begin relying on machines to tell us what we’re looking at, we are essentially outsourcing our decision-making process — our wills — to the cloud. [read the rest on MarketWatch]