Philippine Canadian Inquirer (National)

What happens to our data when we no longer use a social media network or publishing platform?

- BY KATIE MACKINNON, University of Toronto

The internet plays a central role in our lives. I — and many others my age — grew up alongside the development of social media and content platforms.

My peers and I built personal websites on GeoCities, blogged on LiveJournal, made friends on MySpace and hung out on Nexopia. Many of these early platforms and social spaces occupy a large part of our youth memories. For that reason, the web has become a complex entanglement of attachment and connection.

My doctoral research looks at how we have become “databound” — attached to the data we have produced throughout our lives in ways we both can and cannot control.

What happens to our data when we abandon a platform? What should become of it? Would you want a say?

Massive amounts of personal data

We produce data every day as part of our work, communication, banking, housing, transportation and social life. We are often unaware of how much data we produce — and therefore unable to refuse its collection — and we seldom have a say in how it is used, stored or deployed.

This lack of control negatively impacts us, and the effects are disproportionate across the different intersections of race, gender and class. Information about our identities can be used in algorithms and by others to oppress, discriminate, harass, dox and otherwise harm us.

Personal data privacy is often thought of along the lines of corporate breaches, medical record hacks and credit card theft.

My research into youth participation and data production on the popular platforms that characterized the late 1990s to 2000s — like GeoCities, Nexopia, LiveJournal and MySpace — shows that this time period is an era of data privacy that is not often considered in our contemporary context.

The data is often personal and created within specific contexts of social and digital participation. Examples include diary-style blogs, creative writing, selfies and participating in fandom. This user-generated content, unless actions are taken to carefully delete it, can have a long life: the internet is forever.

Decisions about what should happen to our digital traces should be influenced by the people who made them. Their use impacts our privacy, autonomy and anonymity, and is ultimately a question of power.

Typically, when a website or platform “dies,” or “sunsets,” decisions about data are made by employees of the company on an ad-hoc basis.

Controlling data

Proprietary data — data produced on a platform and held by the company — is handled at the discretion of the company, not the people who produced it. Moreover, the privacy or deletion options a platform offers its users often do not remove all digital traces from the internal database. While some data is deleted on a regular basis (like Yahoo email), other data can remain online for a very long time.

Sometimes, this data is collected by the Internet Archive, an online digital library. Once archived, it becomes part of our collective cultural heritage. But there is no consensus on, or standard for, how this data should be treated.

Users should be invited to consider how they would want their platform data to be collected, stored, preserved, deployed or destroyed, and in which contexts. What should become of our data?

In my research, I interviewed users about their opinions on archiving and deletion. Responses varied drastically: while some were disappointed when they discovered their blogs from the 2000s had vanished, others were horrified at their continued existence.

These varying opinions often fell along differences in the context of production, such as the original size of their perceived audience, the sensitivity of the material, and whether the content comprised photographs or text, used vague or explicit language, or contained links to identifiable information like a current Facebook profile.

Privacy protections

Researchers often debate whether user-generated content should be used for research, and under what conditions.

In Canada, the Tri-Council Policy Statement guidelines for ethical research assert that publicly accessible information carries no reasonable expectation of privacy. However, some interpretations include social media-specific requirements for ethical use. Still, public and private distinctions are not easily made within digital contexts.

The European Union’s General Data Protection Regulation (GDPR) has helped shift the standards by which personal data is treated by corporations and beyond, expanding individuals’ rights to access, amend, delete and move their personal data.

Articles 17 and 19 of the GDPR, concerning the right to erasure (the right to be forgotten), are a significant move toward individual digital privacy rights. Those in the EU have legal standing to remove their digital traces, should those traces contribute to personal injury or harm, or convey inaccurate information.

The right to online safety

However, many have argued that a focus on individual privacy through informed consent is not well placed in digital contexts where privacy is often collectively experienced. Informed consent models also perpetuate expectations that individuals can maintain boundaries around their data and should be able to anticipate future uses of it.

Suggesting that platform users can “take charge” of their digital lives places the onus on them to constantly self-surveil and limit their digital traces. Most data production is out of a user’s control, simply because of the metadata generated by moving through online space.

If the web is to be a space of learning, play, exploration and connection, then constantly mitigating future risk by anticipating how and when personal information may be used actively works against those goals. ■

