I found myself unable to laugh when I saw the video of the lawyer who could not turn off the cat filter during a court case on Zoom.
The filters are made possible by augmented reality (AR) technology, which poses serious ethical challenges at both the individual and collective levels. This technology can be used to violate privacy and to fake identities and experiences to suit manipulative agendas, and it may have negative psycho-social effects, among other impacts.
As someone who has studied cross-cultural and depth models of the psyche along with the arts of ritual and magic, I know that consciousness is not meant to be linear/insular. Across societies and temporalities, there have been inner or life-based technologies to help people shift into varied realms of perception. These usually arise from the organic connection a community has with land/spirit, and there is a level of numinosity associated with participating in such life-based technologies.
What makes the AR technology different is the hijacking of experience without the requirement of collective/spiritual connection. It was built within an individualistic, capitalistic paradigm that centers individual "fun" or "exploration" over the deep roots of connection to living strata within consciousness, be it nature, the ancestors, or the body.
Let's look at some specific concerns around AR. "A user's privacy can be threatened because AR technologies can 'see' what the user sees. Thus, AR can collect a lot of information about who the user is and what he/she is doing." Privacy may seem like an innocuous issue if you don't think you are doing anything wrong and hence have nothing to hide. Still, we live in a world marked by increasing authoritarianism and the militarization of law enforcement. Facial recognition data and other information gathered from the user's environment are liable to be used for military and law-enforcement purposes. This puts marginalized groups at risk in particular.
AR technology also "sees" the other people around the user, and therefore threatens their privacy as well.
Google recently fired its ethics researcher Dr. Timnit Gebru for writing a paper raising critical questions about the dangers of artificial intelligence, demonstrating that there is no in-house check on these technologies, as the company claims, and that it is in fact actively quelling dissent around the development of controversial technologies.
“It’s being framed as the AI optimists and the people really building the stuff [versus] the rest of us negative Nellies, raining on their parade,” [Rumman] Chowdhury said. “You can’t help but notice, it’s like the boys will make the toys and then the girls will have to clean up.”
By the logic of capitalist profit-making, building and launching new technologies with dubious effects is acceptable even when they compromise human or planetary interests. And, let's be clear, biometrics is exactly the kind of data Big Tech wants to extract for its profitability.
This nexus between Big Tech and governments is building regimes of control based on frameworks that are not interested in the well-being of the Earth or the rights of its human and nonhuman inhabitants.
Technology is not what will solve the problems we currently find ourselves amid. Most of the technology being hailed as the panacea that will liberate us has been built within the same paradigm of disconnection that created the problems in the first place.
"How are you feeling? Wait, don't tell me. I'll let the machine figure it out."
The reason companies put effort into building technologies based on suspect frameworks (see "emotion recognition technology" in the article linked above), despite their assault on norms of privacy, dignity, freedom of opinion, and the right against self-incrimination, is that they are not invested in our empowerment. Corporations and militaristic regimes would have us buy into a loss or ceding of self-knowing and self-sovereignty that furthers their agendas. By investing in technology that severs us from our consent (in effect, controls us), they can continue to extract what falls on the "useless" side of the Cartesian mind/matter split.
The payoffs make it easy to ignore the real environmental costs of some of these technologies. (e.g., "Just one hour of videoconferencing or streaming emits 150-1,000 grams of carbon dioxide (a gallon of gasoline burned from a car emits about 8,887 grams), requires 2-12 liters of water and demands a land area adding up to about the size of an iPad Mini.")
Many will probably say here that we are already too much of a technoculture to be able to change how we navigate our digital and analog experiences of life.
But just as we must whenever we deal with toxic situations, we have to find our no's with regard to digital culture. We have to do a cost-benefit analysis that reflects our actual values, not just those of soulless capitalism. We have to look beyond the grip of convenience so that we can create options based not only on sustainable frameworks but on regenerative ones, with care for the Earth front and center.
Every technology has costs. A connectionist worldview would soberly weigh the costs and then determine if the path of growth is serving collective well-being. Its medicine and technologies would tie into the imperative of healing and connection.
From everything we are reading, many technologies hailed by tech corporations as revolutionary (another instance is 5G) are not serving collective well-being. Yet we are rushing in that direction, consumed by the desire to reach some bodiless utopia. But we are right here. Our bodies are right here. Nature is right here. Life is right here. What we need is a deepening of attunement, connection, slowing down, and care. The miracle of that which is life/nature/body continues to be revealed to us, through us.*
*Here, to add some awe into your day, are some findings by Dr. Katie Hinde, anthropologist and sustainability scientist, on breast milk.