When Tim Berners-Lee invented the World Wide Web in 1990, he could not have foreseen the monumental impact it would have on how we consume media just a few decades later. We have shifted from newspaper and radio gatekeepers to 2.27 billion monthly active Facebook users (January 2019). YouTube, WhatsApp, Messenger, WeChat and Instagram each have over one billion monthly active users of their own (January 2019), and all of these platforms have become channels for the rapid spread of media, misinformation and disinformation.

Misinformation is false information that is shared unintentionally; disinformation is false information knowingly shared with the intent to cause harm. From bots to algorithms, and trolls to high-quality journalism, we are inundated by an overflow of information and the sea of question marks that swirls around one simple word: truth. What does truth even mean in 2019? Misinformation and disinformation are shared on a larger scale than ever before, making it increasingly difficult for consumers to trust their news sources. The Truth Matters: Strategies for Combating Manipulated Realities seminar was a wonderful opportunity to engage in an open conversation with media literacy researchers and professionals about how to navigate the manipulated realities in which we currently live.

Farida Vis, Director of the Visual Social Media Lab, and her team aim to uncover just how threatening and influential images on social media are across the globe. Building on previous research, they have developed a framework that guides an in-depth interrogation of images through questions such as: Was it shared on social media by the person who made it? Was it shared by a human, a cyborg or a bot? Is there an intent to harm and/or mislead?

We are constantly bombarded with pictures and illustrations on social media platforms that carry both direct and subliminal messages. These messages influence our lives in ways minor and significant, from our morning coffee conversations to international debates on climate change. Unfortunately, even the most advanced algorithms cannot reliably detect this sort of content as it spreads; images still require interpretation by human eyes. Vis expressed profound concern about the development of deepfakes, hyper-realistic face swaps created with AI technology. Soon it will be increasingly difficult for consumers to tell whether videos of interviews with our political leaders are authentic or entirely fabricated.

What can we do?

We are living in a unique time in which the rules of the World Wide Web are still being defined. It is essential that we as consumers remain critical of the information we consume and think twice before sharing. We must be sensitive to the fact that we are being manipulated.

The pursuit of truth in the Information Age will undoubtedly be a frustrating challenge for all of us. We do not need to be afraid, but we do need to be aware. The internet has brought with it many possibilities for growth, education, and technological advancement. Our role as users is to take advantage of them while doing our best to counter the negative impacts of mis- and disinformation.

See the seminar program and photos.

Michelle Paterick
2018-2019 Fulbright-University of Turku Graduate Awardee

Fulbright-University of Turku Graduate grantee Michelle Paterick is completing a master’s degree in Education and Learning at the University of Turku. Her master’s thesis research focuses on how entrepreneurship education at the high school level strengthens workforce and life skills.