:: Privacy, State of tech

The Perils of Participation: how we pay for online existence with our privacy

This is no longer just about not having read terms of service agreements, which we can all collectively agree we do not read, because we are not lawyers. This is more about a culture of obligatory participation with unknown consequences and accountabilities. The fruits of our participation don’t really benefit us directly (unless you are really into targeted advertising) and are shared widely, usually without our knowledge.

Google even records your unfinished search queries, and if you use the Gboard keyboard on your phone or tablet, it has the ability to record information such as passwords and other sensitive data. Tech security expert Lenny Zeltser notes that major mobile device platforms allow users to replace built-in keyboard apps with third-party alternatives, which have the potential to capture, leak, and misuse the keystroke data they process. Before enabling such apps, users should understand the security repercussions of third-party keyboards, along with the safeguards implemented by their developers.

What happens to this retained information is not really clear, beyond its use in improving predictive text.

And once you’re signed in to Google, it actively tracks the following:

  • What you search for
  • How you search
  • Your location
  • Your search patterns
  • The ads you’re interested in
  • The links you click
  • Images you view
  • Which videos you watch
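
To make concrete what this kind of tracking enables, here is a minimal, hypothetical sketch of how a handful of logged signals could be reduced to an ad-interest profile. The activity log, topic categories, and keywords below are all invented for illustration; real targeting systems are vastly more sophisticated, and this shows only the principle.

```python
from collections import Counter

# Hypothetical topic categories and keywords (invented for this example).
TOPIC_KEYWORDS = {
    "travel":  {"flight", "hotel", "barcelona"},
    "fitness": {"running", "protein", "marathon"},
    "finance": {"mortgage", "etf", "savings"},
}

# A toy activity log of the kind a signed-in user might generate.
activity = [
    ("search", "cheap flight to barcelona"),
    ("click",  "10 best hotels in barcelona"),
    ("video",  "marathon training for beginners"),
    ("search", "running shoes"),
]

def interest_profile(events):
    """Count keyword hits per topic across all logged events."""
    scores = Counter()
    for _, text in events:
        words = set(text.lower().split())
        for topic, keywords in TOPIC_KEYWORDS.items():
            scores[topic] += len(words & keywords)
    return scores

print(interest_profile(activity).most_common())
# e.g. [('travel', 3), ('fitness', 2), ('finance', 0)]
```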

For a more in-depth description of what Google sees and how you can turn it off, see Lifewire’s website.

This is all detailed in Google’s terms of service, as well as their privacy policies. While these are dense legal documents, it’s wise to at least give them a quick look if you are at all concerned about how Google tracks and stores your information.

Data value and surveillance capitalism

The core of this issue is that these services are free for us, and we’ve come to expect that. But the problem with this model is that when we use free services given to us by major companies like Google and Facebook, the product and the paying customer are shifted elsewhere, and we play a part in a different equation. In an attention economy we become the product, and in the days of big data, the advertisers are the paying customers.

And being the product means we are also performing work for these companies: every time we carry out essential, basic, and practical communication, those transactions generate more user data for them.

In addition to this sense of almost mandatory participation with unknown results, reCAPTCHAs (from CAPTCHA: Completely Automated Public Turing test to tell Computers and Humans Apart) do double duty: they verify that you are indeed not a robot (though Google often already knows this from your history), and they make you complete a task that computers still struggle with. Google’s crowdsourced data-verification work would be more expensive and labor-intensive if done by computers and image-recognition algorithms alone, so it is instead outsourced to you, the free user. In effect, you are providing free labor for the company, and the little service tasks you and everyone else perform can be used en masse for anything from training self-driving cars to weaponized drone technology.
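
As a minimal sketch of that principle (the image names, the question, and the vote data here are all invented for illustration), crowd answers to a reCAPTCHA-style prompt can be aggregated by majority vote into labels that a vision model could then be trained on:

```python
from collections import Counter

# Each image tile has been shown to several users, who each answered
# "does this tile contain a traffic light?" (True/False).
# Invented data; real pipelines are proprietary and far more complex.
responses = {
    "tile_001.png": [True, True, True, False],
    "tile_002.png": [False, False, False],
    "tile_003.png": [True, False, True, True, True],
}

def majority_label(votes):
    """Aggregate individual answers into a single label by majority vote."""
    label, _ = Counter(votes).most_common(1)[0]
    return label

# The aggregated labels form a human-produced training set, assembled
# for free by users proving they are not robots.
training_labels = {image: majority_label(votes) for image, votes in responses.items()}

for image, label in training_labels.items():
    print(image, "-> traffic light" if label else "-> no traffic light")
```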

Manuel Beltrán, an artist, activist, and founder of The Institute of Human Obsolescence, talks about this idea that he, and many others, refer to as surveillance capitalism. He has even gone so far as to create a “Data Workers Union”. Wikipedia defines surveillance capitalism as a novel market form and a specific logic of capitalist accumulation first described in a 2014 essay by business theorist and social scientist Shoshana Zuboff, who characterized it as a “radically disembedded and extractive variant of information capitalism” based on the commodification of “reality” and its transformation into behavioral data for analysis and sales.

Basically, the way people act online equals big bucks for companies. Revealing that behavior requires monitoring it, and it is those monitoring methods that are, in many ways, beginning to be considered unethical.

Beltrán states in an interview,

“The traditional understanding of labour requires the intentional performance of an action, either manual or intellectual. In comparison, the production of data emerges from the un-intentionality of our daily life activities, with a passive intention from the side of the worker. Processes and actions we follow in our daily lives are collected in a way that does not produce an awareness or understanding of the implications of these actions as being the domain of work. The end objective and future business models derived from the current massive collection of data are to be used to train artificial intelligence algorithms and machine learning systems.”

According to Zuboff, surveillance capitalism was pioneered at Google and later Facebook, in much the same way that mass-production and managerial capitalism were pioneered at Ford and General Motors a century earlier, and has now become the dominant form of information capitalism.

So it seems the business model of the internet is increasingly being constructed on mass surveillance.

Katrin Fritsch, a data and society researcher, argues in her essay Towards an Emancipatory Understanding of Widespread Datafication that,

“The underlying logic of these capitalist systems is obscured by several metaphors, such as the individual as user. By stating that everyone can utilise their free service, capitalist companies hide the datafication and exploitation of the technological connected society. Like the term platform, user is an important part of the discursive work of social media companies to claim openness, egalitarianism and empowerment of the individual (Gillespie, 2010). Social media monopolies build upon these misleading academic frameworks by promising voice, freedom and equality for users. As van Dijck (2014) has examined, both governmental and economic institutions that extract and capitalise data are built upon a deeply rooted system of trust that legitimises capitalist datafication.”

In other words, companies use vocabulary to mask exploitative intentions (which, in the corporate world, is nothing new).

A new economy is blossoming. Unlike the economies based on the material production of goods that have dominated since the industrial revolution, this one is based on the extraction of data. That is a problem because what is harvested tends to be deeply personal and can encroach on core values like freedom and privacy. It jeopardizes many aspects of basic democracy and diverges from the centuries-long progression of market capitalism.

Nick Couldry writes in The price of connection: ‘surveillance capitalism’,

“Deep economic pressures are driving the intensification of connection and monitoring online. The spaces of social life have become open to saturation by corporate actors, directed at the making of profit and/or the regulation of action. As Joseph Turow writes: ‘… the centrality of corporate power is a direct reality at the very heart of the digital age.’ Online platforms, in spite of their innocent-sounding name, are a way of optimising the overlap between the domains of social interaction and profit. Capitalism has become focused on expanding the proportion of social life that is open to data collection and data processing: it is as if the social itself has become the new target of capitalism’s expansion.”

Why doesn’t the prospect of being constantly surveilled make us more upset or nervous? And what are the costs of this new economy in dimensions that economists cannot estimate or predict?

Beltrán anticipates that,

“Citizens will start to be deprived from the access to these preceding so-called “free services” as the (corporate) giants no longer require the performance of mass collection of data to sustain their business models or train artificial intelligence systems. At some point in the near future, these mechanisms will have harvested enough data for artificial intelligence to function autonomously, without requiring more collection of human-produced data to be fed into those systems, so the economic model of mass collection of data will no longer be required.”

By then, new baselines and norms will already have been set, and power structures will have shifted. Imagine a scenario in which wearable technology becomes a tool employed by insurance companies to make sure they have the upper hand (they are, by the way, already running trials of this).

Part of what being human means is the right to privacy, to shield yourself from the state if you wish. Beltrán insists, “We need new vocabularies and new imaginaries of what technology does to us that are built from a human-centric perspective and based on human values.”

As a shorthand solution, since adopting a Luddite perspective is not really going to work here, it may be worth paying for some of the online services you want instead of using the free versions. It puts you back in control of what you see (and we already know we shouldn’t still be using Facebook!).

Jerri Collins from Lifewire puts it well:

“Whether or not you’re concerned about the information in your Google searches, profile, and personal dashboards being used to enhance the relevancy of your queries online, it’s always a good idea to make sure that all information shared on any service is within the bounds of personal privacy that you are most comfortable with. While you should certainly keep the platforms and services you use accountable to a common standard of user privacy, the safety and security of your information online is ultimately up to you to determine.”

Cybil

Perception and reality at the intersections of art, science and technology.