Mis- and Disinformation, Privacy and Technology

The scholars file into a building in the "Brickyard" section of ASU's Tempe Campus

Managing information, at the personal, community and national levels, is an intricate element of human organization. Today, however, it's more unpredictable than ever.

That the management of information, and the narratives that carry it, underpins the capacity to influence human organization – for better or for worse – is a reality we all must face today. This is so for many aspects of human social life: perceptions of politics, health communication and behavior, religion, education systems, the sciences and even the arts. Digital technologies have, however, changed the rules of the game for all involved. Information, and the narratives that accompany it, can now be unpredictable, perhaps created by AI and promoted by internet bots. The vast overload of information available to us today means the internet is far from orderly and controllable.

It is not altogether surprising that, around the world, centers of disinformation and narrative studies are forming in universities as researchers try to grapple with the new information 'norms'. Nor is it surprising that concerns about information management are often deeply intertwined with questions of national security.

A visit to ASU’s Tempe campus

SUSI scholars today visited ASU's Global Security Initiative (GSI), housed in the Tempe campus's aptly named "Decision Theater" building. The GSI has received US$46 million in funding support from the Department of Defense, which goes to ASU's national security research in areas of cybersecurity; human, artificial intelligence and robot teaming; STEM education for national security; accelerating operational efficiency; and developing partnerships with industry on security research and lab collaboration. SUSI scholars were the beneficiaries of presentations from two ASU experts: Associate Research Professor Joshua Garland (interim director of the Center for Narrative, Disinformation & Strategic Influence) and Dr. Rakibul Hasan (assistant professor in the ASU School of Computing & Augmented Intelligence).

First, Professor Garland led SUSI scholars through a discussion of the seven types of mis- and disinformation (satire or parody; false connection; misleading content; false context; imposter content; manipulated content; and fabricated content), and how these may be understood in various national contexts. He pointed out that information actively circulated in a given context may serve one nation's interest while working against another's, blurring the lines of how we understand the credibility of messaging in international settings. As my colleague Dr. Nicoleta Munteanu also noted, even the concepts we discuss may not be immediately translatable: the Romanian language, for example, does not distinguish between "false" and "fake" as we do in English.

We also explored the emotional reasons any of us may be driven to trust mis- and disinformation, and how believing it can often serve to reduce the cognitive dissonance we feel about the world around us. Today's visit neatly complemented our group's earlier trip to the offices of Arizona's State Election Director, Lisa Marra, and Assistant Secretary of State, Keely Varvel, where we learnt about Arizona's challenge of managing election disinformation and upholding electoral integrity. My colleague Lekhanath Pandey's blog post covers that particular trip for the SUSI group, if you haven't read it already!

This group is never at a loss for words
SUSI scholars in discussion mode with GSI academics (Photo by Asel Sooronbaeva)

Watch what you post!

Our second speaker this afternoon was Dr. Rakibul Hasan, who presented his research on visual mis- and disinformation, as well as work developing software that is able to "automatically detect privacy sensitive contents in images so that they can be obfuscated." Dr. Hasan gave examples of common forms of misinformation in the news media, the most frequent being the reuse of old or existing images and video clips to represent a different time or place. His examples included clips and images that circulated online and in the media at the start of Russia's invasion of Ukraine: footage of two industrial blasts, in Tianjin (China) and Beirut (Lebanon), recirculated with claims that they showed Russian attacks on Ukraine.
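For readers curious what such obfuscation might look like in practice, here is a minimal sketch of the general idea (my own illustration, not Dr. Hasan's actual system): it uses OpenCV's stock face detector to find faces in a photo and blur them before the image is shared. The file names and tuning parameters are placeholders.

```python
# A minimal, illustrative sketch of privacy-preserving obfuscation
# (not Dr. Hasan's system): detect faces with OpenCV's bundled Haar
# cascade, then blur each detected region before sharing the image.
import cv2

def blur_faces(path_in: str, path_out: str) -> None:
    image = cv2.imread(path_in)
    if image is None:
        raise FileNotFoundError(path_in)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    # scaleFactor and minNeighbors here are typical defaults, not tuned values.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        region = image[y:y + h, x:x + w]
        # A heavy Gaussian blur renders the face unrecognizable.
        image[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 0)
    cv2.imwrite(path_out, image)

blur_faces("family_photo.jpg", "family_photo_blurred.jpg")  # placeholder names
```

Real systems, of course, go well beyond faces – license plates, documents and location cues can all be privacy sensitive – but the detect-then-obfuscate pattern is the same.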

To wrap up the afternoon, though, it was Dr. Hasan's discussion with us about digital "privacy" in a world of online discussion, data sharing (or not, for that matter!) and deepfakes that I found most unsettling. With 3.2 billion images uploaded to social media every day, visual data and records are being publicized and stored while many users remain unaware of what in fact happens to their images once posted. Indeed, Dr. Hasan pointed out that many of the deepfake images circulating that depict the abuse of children are generated from photos parents originally posted of their children on social media platforms such as Facebook.

Yet, as the term "privacy" has no clear definition that all can agree on, it is arguable whether taking such data even invades someone's privacy at all. After all, where are the limits of one's digital privacy? Does it relate to surveillance? The flow and processing of data? Freedom from undue influence? Control over personal data? Is it even the capacity to delete every digital trace from the internet, as some services promise to be able to do? (Side note: on this, Professor Garland pointed out that even if you do attempt to remove every digital trace, you still exist within AI models such as GPT, trained on data gathered from the internet over the years – argh, there's no way out!!)

While I (admittedly) left ASU's Decision Theater building feeling slightly defeated – by a technological world with so much sway over the nature of the information around us, and by the vagueness surrounding what privacy in fact is, both suggesting we don't have as much control as we'd like to think we do – I did confirm one thing:

The experts in this field do cover their laptop cameras when they use their computers. And I think I will start, too.