Reflections on the digital age: 7 improvements that brought about a decade of positive change

The new digital age enabled billions of people to collaborate and mobilize to fight climate change.

Image: Photo by kazuend on Unsplash

Don Tapscott C.M.


September 2030. The early 2020s were full of dramatic turning points in global history.

Powerful new technologies like artificial intelligence, blockchain, the internet of things and the metaverse upended traditional systems, institutions and ways of life. Meanwhile, the COVID-19 pandemic of 2020-22 accelerated these trends as people everywhere moved much of their lives online. The pandemic also exposed deep problems in our governments and systems for everything from supply chains to public health data.

Moreover, the early 2020s were jolted by political upheaval. Notably, in January 2021, the American election was challenged, exacerbating deep fissures in the United States and emboldening populists and extremists around the world. The Russian invasion of Ukraine, global sanctions and significant disruptions to food supplies further convulsed the global economy and exacerbated tensions. These challenges, among others, created a perfect storm and resulted in extraordinary social anxiety and unrest.

Fortunately, a miracle of sorts occurred. Driven by a deep hope for a brighter future, people everywhere began to reimagine the relationship between government and civil society, ushering in a new societal framework for the digital age. This was not some kind of academic process but rather the result of mass mobilizations around broad change.

Reflecting on the digital age

Today, looking back a decade, let’s examine seven key improvements that stemmed from this period of positive change:

1. New models of prosperity and work

Given the bifurcation of wealth and structural unemployment engendered by the new digital age in many economies, expectations of employment shifted, with people understanding that the private sector could not provide jobs and a prosperous life for all. New rules and regulations were instituted that created a strong social safety net for workers. These reforms helped mitigate the gross inequality that plagued the early years of the 21st century. New technologies also brought more underserved people into the global economy and readied workers for lifelong learning.

2. New models of digital identity

New regulations allowed individuals to own and benefit from the digital data they create. This ended the era of “digital feudalism,” which was characterised by a centralized group of “digital landlords” who collected, aggregated and profited from the data that collectively constituted our digital identities. Furthermore, Web3 gave people the ability to harvest their data trail and use it to plan their lives, enhancing their prosperity and protecting their privacy.

3. More informed digital age society

Through public and private partnerships, media systems were rebuilt in ways that safeguarded independence and free speech. New tools were implemented that enabled citizens to track the veracity and provenance of information. This helped reduce the ability of bad-faith actors to spread false information about everything from climate change to public health. Clear rules were also set that ensured large media companies were prohibited from supporting hate on their platforms in the digital age. These reforms helped us rebuild public education systems to ensure that every young person can function fully, not just as a worker or entrepreneur, but as a citizen. Media literacy programs were also introduced into schools to help young people develop their capabilities to handle the onslaught of information and discern the truth.

4. Renewed trust in government and democracy

Innovative technologies and other modern reforms enabled us to create a new era of democracy based on public deliberation, transparency, active citizenship and accountability. Technology also helped to embed electoral promises into smart contracts that allowed citizens to track and engage in their democracies through the mobile platforms they use every day. These reforms helped boost trust in politicians and the legitimacy of our governments as leaders are now more beholden to the people and not the powerful interests that funded their campaigns in the years prior. Moreover, these improvements helped stifle radical populists and extreme politicians on both the right and left.

5. A new commitment to justice

It was clear that new technologies exacerbated racial divides, so governments and organisations throughout civil society committed to ending racial inequities. In the United States, action was taken to end the era of mass incarceration and the financial hamstringing of minority groups. The criminal subjection of indigenous peoples, as evidenced by Canada’s “Residential School System,” was also redressed. These steps helped move racism, class oppression and subjugation of all peoples into the dustbin of history, along with those who perpetrate these vile relics of the past. The reforms also went past the tropes about bad apples and forgiveness. They recognized that racism and oppression are systemic and must be addressed society-wide.

6. A deep commitment to sustainability

Through major reforms, the world is now on track to reduce carbon emissions by 90% by the year 2050. The new digital age enabled billions of people to collaborate and mobilize to fight climate change. This included not just governments but businesses large and small, commuters, vacationers, employees, students, consumers – everyone – from every walk of life. Public pressure and new regulations have also forced business executives to participate responsibly in the reindustrialization of our planet and embrace carbon pricing.

7. Global interdependence

The crises of the past decade—the COVID-19 pandemic, the political legitimacy crisis, the war in Ukraine and the climate catastrophes—demonstrated that no country could succeed fully in a world that is in trouble. And while significant national differences remain, countries have embraced common interests and an understanding of a common fate. The new way of thinking also allowed governments, companies and NGOs to better organise around solving major problems like public health, education, social justice, environmental stability and peace.

These positive changes did not bring about a utopia. But they were improvements—and ones that were achieved through bottom-up struggle.

Victor Hugo said there is nothing so powerful as an idea whose time has come. In our case, there was nothing so powerful as ideas that had become necessities.

The World Economic Forum’s Platform for Shaping the Future of Digital Economy and New Value Creation helps companies and governments leverage technology to develop digitally-driven business models that ensure growth and equity for an inclusive and sustainable economy.

  • The Digital Transformation for Long-Term Growth programme is bringing together industry leaders, innovators, experts and policymakers to accelerate new digital business models that create the sustainable and resilient industries of tomorrow.
  • The Forum’s EDISON Alliance is mobilizing leaders from across sectors to accelerate digital inclusion. Its 1 Billion Lives Challenge harnesses cross-sector commitments and action to improve people’s lives through affordable access to digital solutions in education, healthcare, and financial services by 2025.


This article is abridged from a major essay written by Don Tapscott called “A Declaration of Interdependence: Towards a New Social Contract for the Digital Age” and a recent short essay entitled “Why We Built a Social Contract for the New Digital Age.”

Don Tapscott is the author of 16 widely read books about technology in business and society, including the best-seller Blockchain Revolution, which he co-authored with his son Alex. His most recent book is Platform Revolution: Blockchain Technology as the Operating System of the Digital Age. He is Co-Founder of the Blockchain Research Institute, an Adjunct Professor at INSEAD, and Chancellor Emeritus of Trent University in Canada. He is a Member of the Order of Canada and drafted a framework for “A New Social Contract for the Digital Economy.”

Have you read?

  • Why businesses must embrace change in the digital age
  • Companies' ESG strategies must stand up to scrutiny in the digital age
  • What is the role of government in the digital age


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.


June 9, 2023 / Reading Time: ~10 minutes

Connecting in the Digital Age: Navigating Technology and Social Media

Shawndeep Virk

In the contemporary era, technology and social media have revolutionized how we connect with others, significantly impacting various aspects of our lives. This article explores the pervasive influence of technology and social media on individuals and society, shedding light on the benefits and drawbacks of this digital transformation. As technology advances, social media platforms have become integral to our daily routines, shaping our interactions, communication patterns, and self-perceptions. The convenience of instant messaging and virtual communities has facilitated global connectivity, transcending geographical barriers and fostering relationships.

However, the constant exposure to virtual environments has also led to many challenges. One significant concern is the potential erosion of face-to-face interactions, as the allure of digital communication often replaces genuine human connection. In addition, the addictive nature of social media can lead to diminished social skills and a sense of loneliness, exacerbating mental health issues. Moreover, the proliferation of online misinformation and the echo chamber effect have introduced new challenges to critical thinking and public discourse.

This article delves into the impact of technology on various domains, including education, relationships, and self-identity. The prevalence of online learning platforms has transformed the traditional classroom, offering new opportunities while raising concerns about unequal access and diminished interpersonal engagement. Relationships have been reshaped as virtual connections become more prevalent, impacting intimacy, trust, and the overall quality of social interactions. Furthermore, social media platforms have fueled the rise of personal branding and the cultivation of idealized digital personas, contributing to the “digital self” and its effects on self-esteem and mental well-being.
Technology and social media have undeniably become ingrained in our lives, revolutionizing how we connect and interact with others. Yet, while they offer unprecedented connectivity and vast opportunities, we must also navigate their potential pitfalls. Recognizing the importance of balancing digital and real-world experiences can help us harness the benefits of technology while preserving the essential elements of human connection and well-being in the digital age.

————

Introduction:

In today’s fast-paced society, the pervasive influence of technology and social media on our lives cannot be denied. These platforms have transformed the way we communicate, work, and interact with others. From the convenience of instant messaging to the accessibility of online communities, technology has undoubtedly made our lives more convenient and connected. However, amidst the undeniable benefits, it is essential to acknowledge the challenges that arise from our increasing reliance on these digital platforms.

One significant concern that has emerged in recent years is the impact of excessive screen time on mental health. Twenge and Campbell’s (2019) longitudinal study shed light on the potential adverse effects of constant exposure to social media. They found a strong correlation between heavy social media use and increased feelings of anxiety, depression, and loneliness. In addition, the endless stream of carefully curated posts, the constant comparison to others’ highlight reels, and the fear of missing out (FOMO) contribute to a sense of inadequacy and disconnection from real-life experiences. Hence, individuals must be mindful of their digital consumption and take steps to strike a healthy balance.

Sherry Turkle (2011), a renowned expert in the field of social psychology, highlights another aspect of technology’s impact on interpersonal relationships. While social media allows us to stay connected with a vast network of friends and acquaintances, it often fails to fulfill our innate need for genuine connections. Turkle argues that we have come to expect more from technology and less from each other. Virtual interactions lack the depth and authenticity of face-to-face communication, leaving us craving deeper emotional connections. In an era where emojis and likes have become substitutes for heartfelt conversations, it is essential to recognize the limitations of digital interactions and actively nurture meaningful relationships in real life.

Research conducted by Hampton et al. (2014) further underscores the importance of face-to-face communication in establishing genuine connections. Their studies reveal that virtual interactions, devoid of nonverbal cues like body language and facial expressions, hinder our ability to truly understand and empathize with others. These nonverbal cues provide crucial context and emotional depth that are often lost in digital conversations. Meeting in person lets us fully engage with others, pick up on subtle cues, and forge stronger bonds. While technology enables us to bridge geographical gaps and connect with individuals across the globe, it is vital to recognize the value of physical presence and direct human interaction.

Achieving a healthy balance between technology and real-life connections may seem daunting in a world increasingly reliant on digital platforms. However, recognizing the potential consequences of excessive technology use is the first step toward cultivating healthier relationships. McEwan and Zanolla (2020) assert that the impact of smartphone use on human interaction must not be overlooked. As individuals, we must proactively set boundaries, manage our screen time, and consciously engage in face-to-face interactions. This may involve establishing designated “tech-free” zones or allocating specific periods for uninterrupted personal interactions.

Moreover, fostering a culture that values genuine connections and offline experiences is crucial. Educational institutions, workplaces, and communities can be pivotal in promoting face-to-face interactions and organizing activities that encourage meaningful human connections. By creating spaces where individuals can engage in open dialogue, practice active listening, and collaborate on shared goals, we can build stronger communities and nurture relationships that transcend the digital realm.

In conclusion, while technology and social media have undeniably revolutionized how we connect and communicate, we must approach them with caution and mindfulness. Excessive screen time and overreliance on digital platforms can harm our mental health and interpersonal relationships. Striking a balance between technology and real-life connections is paramount to fostering meaningful relationships, empathy, and emotional well-being. By recognizing the limitations of virtual interactions and actively engaging in face-to-face communication, we can enjoy the benefits of the digital age without sacrificing the human connections that sustain us.

Dangers Of Excessive Social Media Use:

In today’s digital age, social media has become an indispensable part of our lives. While it may seem harmless to stay connected with friends and family, excessive use of social media can harm our physical and mental health. According to Twenge and Campbell (2019), spending too much time scrolling through social media feeds can lead to feelings of anxiety, depression, and loneliness. In addition, they found that individuals who reported higher levels of screen time experienced lower levels of mental well-being over time.

However, the dangers of excessive social media use go beyond mental health. One of the concerning aspects highlighted by Turkle (2011) is the decrease in face-to-face interactions and interpersonal connections. She argues that people often expect more from technology and less from each other, leading to a sense of detachment and a decline in genuine human interaction. In addition, the constant scrolling and engagement with virtual connections can detract from meaningful real-life experiences, leaving individuals feeling isolated and disconnected.

Furthermore, excessive social media use can lead to the “spiral of silence” phenomenon described by Hampton, Rainie, Lu, Shin, and Purcell (2014). They explain that social media platforms can create an environment where individuals feel pressured to conform to popular opinions and are less likely to express dissenting views. This can limit the diversity of perspectives and hinder open dialogue and meaningful discussions. As a result, social media can inadvertently contribute to echo chambers, where people are exposed to only one side of an issue, reinforcing their existing beliefs without being challenged.

In addition to its impact on mental health and social dynamics, excessive social media use can adversely affect our physical well-being. McEwan and Zanolla (2020) emphasize that prolonged sitting while using electronic devices has been linked to various health problems, such as obesity, back pain, and eye strain. They further state that the blue light emitted from electronic devices disrupts sleep patterns and can lead to insomnia. The passive nature of social media consumption, combined with its addictive qualities, can contribute to a more sedentary lifestyle, posing risks to our overall physical health and well-being.

Moreover, the spread of misinformation on social media is a growing concern. Hall, Baym, and Miltner (2019) emphasize that false information about politics, health issues, or current events spreads quickly across these platforms due to their viral nature. They note that people often share information without verifying its accuracy or source, leading to the propagation of false information. This can be particularly dangerous when it comes to making important decisions related to public health or political issues. Incorrect information can misguide individuals, shape public opinion, and have real-world consequences.

While social media has revolutionized how we connect with people worldwide and stay informed about current events, it is crucial to be mindful of our social media use and set healthy boundaries to prevent its detrimental effects. By considering these studies, we can better understand the potential risks associated with excessive social media use and make informed decisions about our online behavior. Furthermore, awareness of the dangers can empower individuals to strike a balance between the benefits and drawbacks of social media, fostering healthier habits and more meaningful connections in both the digital and real world.

Importance Of Face-to-face Communication:

In the digital age, technology and social media have become the primary means of communication for many individuals. However, it is crucial to recognize that face-to-face communication remains an essential aspect of interpersonal interactions. While digital communication can be convenient and efficient, it lacks the depth and nuance that comes with in-person conversations. In a world where we are bombarded with endless notifications and distractions, taking the time to engage in face-to-face conversations can help us connect on a deeper level.

According to Twenge and Campbell (2019), “screen time has been associated with various negative mental health outcomes, including increased levels of anxiety and depression.” This highlights the potential detrimental effects of excessive reliance on digital communication, emphasizing the need for balanced interaction that includes face-to-face communication.

One of the most significant benefits of face-to-face communication is its ability to convey nonverbal cues effectively. Turkle (2011) asserts that “up to 90% of communication is nonverbal,” indicating the importance of facial expressions, body language, and tone of voice in conveying meaning accurately. These nonverbal cues can often be lost or misinterpreted through digital communication channels such as email or instant messaging. In-person conversations allow us to read these cues accurately, providing valuable context that helps us understand each other better.

Additionally, Hampton et al. (2014) discuss the “spiral of silence” phenomenon, where people are more likely to express their opinions in face-to-face conversations compared to online platforms. This suggests that face-to-face communication encourages a more open and honest exchange of ideas, fostering empathy and building trust between individuals.

Face-to-face communication promotes a sense of connection often lacking in digital interactions. Being physically present with someone shows a level of commitment and engagement that cannot be replicated online. It allows for genuine human connection and deeper, more meaningful relationships. In an age of loneliness and isolation, face-to-face communication can help alleviate these issues by fostering a sense of belonging and community. Face-to-face conversations also offer an opportunity for spontaneity and improvisation: in digital communication, messages can be carefully crafted and edited, but that polish comes at the cost of the candor that makes in-person exchanges feel authentic.

Furthermore, face-to-face communication allows for more meaningful collaboration and problem-solving opportunities than digital channels do. McEwan and Zanolla (2020) highlight that “physical presence facilitates spontaneous exchanges and enhances creativity.” When working on complex projects or brainstorming ideas, being physically present with colleagues allows for immediate feedback and interactive discussions that lead to creative breakthroughs. Additionally, when conflicts arise within teams or organizations, having difficult conversations in person helps ensure all parties are heard and understood.

While technology has revolutionized how we communicate with one another in many positive ways, it should not replace face-to-face communication altogether. In-person conversations allow us to convey nonverbal cues, foster empathy and trust, and facilitate more meaningful collaboration opportunities. As we navigate the digital age, it is essential not to lose sight of the value that face-to-face communication brings to our personal and professional relationships (Hall, Baym, & Miltner, 2019).

Balancing Technology And Real-life Connections:

In today’s digital age, it is easy to get lost in the world of technology and social media. It seems like everyone is constantly glued to their screens, whether it be their phone, computer, or tablet. According to Twenge and Campbell (2019), extensive screen time has been associated with negative effects on mental health. They found that excessive use of digital devices, especially social media, was linked to higher levels of depression and loneliness. Therefore, it is essential that we learn how to balance technology with real-life connections.

One way to achieve this balance is by setting boundaries for ourselves when it comes to technology use. Turkle (2011) highlights the importance of consciously limiting the amount of time we spend on our devices each day and making an effort to engage in face-to-face interactions with people around us. She emphasizes the need to have meals together with family or friends without any distractions from our phones or other gadgets.

Furthermore, using social media platforms wisely can contribute to the balance between technology and real-life connections. Hampton et al. (2014) discuss the concept of the “spiral of silence,” which suggests that people may be hesitant to express their opinions online due to the fear of social isolation. However, when used effectively, social media can serve as a tool for connecting with others in a meaningful way. Hall, Baym, and Miltner (2019) suggest joining groups or communities that share our interests or values, participating in online discussions, and even arranging meetups with people we have met through these platforms.

It’s also important to recognize the value of unplugging from technology altogether from time to time. McEwan and Zanolla (2020) argue that excessive smartphone use can have a detrimental impact on human interaction. Taking breaks from our screens can help us feel more present and connected in the moment, fostering deeper relationships with those around us.

Ultimately, finding a balance between technology and real-life connections requires intentionality and discipline. We must be willing to prioritize human interaction over virtual communication at times, even if it means stepping outside of our comfort zones. As Turkle (2011) reminds us, “We expect more from technology and less from each other.” Therefore, let us strive to navigate the digital age with mindfulness and intentionality so that we can cultivate meaningful relationships both online and offline. Survey data reflects this as well: when American teens are asked why they think social media has positive effects on people, the number-one reason they give is that it lets them connect with friends and family.

Conclusion:

In conclusion, as we reflect on the impact of technology and social media on our lives, it becomes evident that while they have undoubtedly revolutionized the way we connect with others, they also carry potential dangers when used excessively. The prevalence of social media addiction has raised concerns about its detrimental effects on mental health, relationships, and productivity. It is therefore crucial for individuals to recognize the importance of striking a balance between online and offline interactions.

While social media platforms offer a myriad of benefits, such as connecting people across distances and providing access to information, it is essential to approach their use mindfully and with intentionality. All too often, individuals find themselves consumed by the virtual world, neglecting the tangible relationships and experiences that await in the offline realm. Face-to-face communication remains a cornerstone of human connection, offering depth, empathy, and emotional resonance that cannot be replicated through a screen.

By prioritizing real-life connections and setting healthy boundaries on our social media usage, we can cultivate a healthier and more fulfilling existence. Engaging in meaningful conversations, actively listening to others, and nurturing personal relationships allow us to experience genuine human connections that contribute to our overall well-being. It is in these face-to-face interactions that we can truly understand the nuances of non-verbal communication, interpret emotions, and forge deeper bonds.

In navigating the digital age, it is crucial to strike a delicate balance. Rather than completely rejecting technology or mindlessly indulging in its vast offerings, we should strive for a harmonious coexistence. This entails embracing the positive aspects of technology while being aware of its limitations and potential pitfalls. By intentionally carving out time for offline activities, such as hobbies, physical exercise, and spending quality time with loved ones, we can create a more well-rounded and fulfilling lifestyle.

Additionally, developing a healthy relationship with technology involves being mindful of the impact it has on our mental health. It is essential to recognize when social media usage becomes excessive or starts to negatively affect our well-being. Setting boundaries, such as designating specific times for technology use or limiting the number of social media platforms we engage with, can help prevent addiction and promote a healthier lifestyle.

Ultimately, while technology and social media have undoubtedly transformed the way we connect with others, it is vital to approach their use with caution and mindfulness. By striking a balance between online and offline interactions, prioritizing face-to-face communication, and setting healthy boundaries, we can harness the benefits of technology while maintaining genuine human connections. By doing so, we can cultivate a more fulfilling and balanced life, fostering our mental well-being, nurturing relationships, and maximizing our potential in both the virtual and physical realms.

References:

Twenge, J. M., & Campbell, W. K. (2019). The association between screen time and mental health: A longitudinal study. Psychological Science.

Turkle, S. (2011). Alone together: Why we expect more from technology and less from each other. New York: Basic Books.

Hampton, K. N., Rainie, L., Lu, W., Shin, I., & Purcell, K. (2014). Social media and the “spiral of silence.” Pew Research Center.

McEwan, B., & Zanolla, E. (2020). The impact of smartphone use on human interaction. Philosophical Transactions B.

Hall, J. A., Baym, N. K., & Miltner, K. M. (2019). Momentary pleasures and lingering costs of using social media in daily life. Journal of Social Psychology.


Author: Shawndeep Virk

Published: June 9, 2023


Creative Commons CC-BY Attribution License


Student Writing in the Digital Age

Essays filled with “LOL” and emojis? College student writing today actually is longer and contains no more errors than it did in 1917.


“Kids these days” laments are nothing new, but the substance of the lament changes. Lately, it has become fashionable to worry that “kids these days” will be unable to write complex, lengthy essays. After all, the logic goes, social media and text messaging reward short, abbreviated expression. Student writing will be similarly staccato, rushed, or even—horror of horrors—filled with LOL abbreviations and emojis.


In fact, the opposite seems to be the case. Students in first-year composition classes are, on average, writing longer essays (from an average of 162 words in 1917, to 422 words in 1986, to 1,038 words in 2006), using more complex rhetorical techniques, and making no more errors than those committed by freshmen in 1917. That’s according to a longitudinal study of student writing by Andrea A. Lunsford and Karen J. Lunsford, “Mistakes Are a Fact of Life: A National Comparative Study.”

In 2006, two rhetoric and composition professors, Lunsford and Lunsford, decided, in reaction to government studies worrying that students’ literacy levels were declining, to crunch the numbers and determine if students were making more errors in the digital age.

They began by replicating previous studies of American college student errors. There were four similar studies over the past century. In 1917, a professor analyzed the errors in 198 college student papers; in 1930, researchers completed similar studies of 170 and 20,000 papers. In 1986, Robert Connors and Andrea Lunsford (of the 2006 study) decided to see if contemporary students were making more or fewer errors than those earlier studies showed, and analyzed 3,000 student papers from 1984. The 2006 study (published in 2008) follows the process of these earlier studies and was based on 877 papers (one of the most interesting sections of “Mistakes Are a Fact of Life” discusses how new IRB regulations forced the researchers to work with far fewer papers than their predecessors had).

Remarkably, the number of errors students made in their papers stayed consistent over the past 100 years. Students in 2006 committed roughly the same number of errors as students did in 1917. The average has stayed at about 2 errors per 100 words.

What has changed are the kinds of errors students make. The four 20th-century studies show that, when it came to making mistakes, spelling tripped up students the most. Spelling was by far the most common error in both 1917 and 1986, “the most frequent student mistake by some 300 percent.” Going down the list of “top 10 errors,” the patterns shifted: Capitalization was the second most frequent error in 1917; in 1986, that spot went to “no comma after introductory element.”

In 2006, spelling lost its prominence, dropping down the list of errors to number five. Spell-check and similar word-processing tools are the undeniable cause. But spell-check creates new errors, too: The number-one error in student writing is now “wrong word.” Spell-check, as most of us know, sometimes corrects spelling to a different word than intended; if the writing is not later proofread, this computer-created error goes unnoticed. The second most common error in 2006 was “incomplete or missing documentation,” a result, the authors theorize, of a shift in college assignments toward research papers and away from personal essays.

Additionally, capitalization errors have increased, perhaps, as Lunsford and Lunsford note, because of neologisms like eBay and iPod. But students have also become much better at punctuation and apostrophes, which were the third and fifth most common errors in 1917. These had dropped off the top 10 list by 2006.

The study found no evidence for claims that kids are increasingly using “text speak” or emojis in their papers. Lunsford and Lunsford did not find a single instance of this digital-era error. Ironically, they did find such text speak and emoticons in teachers’ comments to students. (Teachers these days?)

The most startling discovery Lunsford and Lunsford made had nothing to do with errors or emojis. They found that college students are writing much more and submitting much longer papers than ever. The average college essay in 2006 was more than double the length of the average 1986 paper, which was itself much longer than the average length of papers written earlier in the century. In 1917, student papers averaged 162 words; in 1930, the average was 231 words. By 1986, the average grew to 422 words. And just 20 years later, in 2006, it jumped to 1,038 words.
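The study’s arithmetic is easy to check. A quick illustrative sketch (using only the averages quoted above, not the underlying data) confirms the growth factors:

```python
# Average first-year essay lengths (in words) from the studies cited above.
avg_words = {1917: 162, 1930: 231, 1986: 422, 2006: 1038}

# Growth factor between each pair of successive studies.
years = sorted(avg_words)
for prev, curr in zip(years, years[1:]):
    print(f"{prev} -> {curr}: {avg_words[curr] / avg_words[prev]:.2f}x")

# The 2006 average is indeed more than double the 1986 average, and at
# the reported rate of ~2 errors per 100 words, a 1,038-word paper would
# contain roughly 21 errors on average.
assert avg_words[2006] / avg_words[1986] > 2
print(round(0.02 * avg_words[2006]))
```

The 2006-to-1986 ratio works out to roughly 2.46, consistent with the “more than double” claim in the text.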

Why are 21st-century college students writing so much more? Computers allow students to write faster. (Other advances in writing technology may explain the upticks between 1917, 1930, and 1986. Ballpoint pens and manual and electric typewriters allowed students to write faster than inkwells or fountain pens.) The internet helps, too: Research shows that computers connected to the internet lead K-12 students to “conduct more background research for their writing; they write, revise, and publish more; they get more feedback on their writing; they write in a wider variety of genres and formats; and they produce higher quality writing.”

The digital revolution has been largely text-based. Over the course of an average day, Americans in 2006 wrote more than they did in 1986 (and in 2015 they wrote more than in 2006). New forms of written communication—texting, social media, and email—are often used instead of spoken ones—phone calls, meetings, and face-to-face discussions. With each text and Facebook update, students become more familiar with and adept at written expression. Today’s students have more experience with writing, and they practice it more than any group of college students in history.


In shifting from texting to writing their English papers, college students must become adept at code-switching, using one form of writing for certain purposes (gossiping with friends) and another for others (summarizing plots). As Kristen Hawley Turner writes in “Flipping the Switch: Code-Switching from Text Speak to Standard English,” students do know how to shift from informal to formal discourse, changing their writing as occasions demand. Just as we might speak differently to a supervisor than to a child, so too do students know that they should probably not use “conversely” in a text to a friend or “LOL” in their Shakespeare paper. “As digital natives who have had access to computer technology all of their lives, they often demonstrate in these arenas proficiencies that the adults in their lives lack,” Turner writes. Instructors should “teach them to negotiate the technology-driven discourse within the confines of school language.”

Responses to Lunsford and Lunsford’s study focused on what the results revealed about mistakes in writing: Error is often in the eye of the beholder. Teachers mark some errors and neglect to mention (or find) others. And, as a pioneering scholar of this field wrote in the 1970s, context is key when analyzing error: Students who make mistakes are not “indifferent…or incapable” but “beginners and must, like all beginners, learn by making mistakes.”

College students are making mistakes, of course, and they have much to learn about writing. But they are not making more mistakes than did their parents, grandparents, and great-grandparents. Since they now use writing to communicate with friends and family, they are more comfortable expressing themselves in words. Plus, most have access to technology that allows them to write faster than ever. If Lunsford and Lunsford’s findings about the average length of student papers hold true, today’s college students will graduate with more pages of completed prose to their name than any other generation.

If we want to worry about college student writing, then perhaps what we should attend to is not clipped, abbreviated writing, but overly verbose, rambling writing. It might be that editing skills—deciding what not to say, and what to delete—may be what most ails the kids these days.


Privacy in the digital age: comparing and contrasting individual versus social approaches towards privacy

  • Original Paper
  • Open access
  • Published: 17 July 2019
  • Volume 21, pages 307–317 (2019)


  • Marcel Becker   ORCID: orcid.org/0000-0003-2848-5305 1  


This paper takes as a starting point a recent development in privacy debates: the emphasis on social and institutional environments in the definition and the defence of privacy. Recognizing the merits of this approach, I supplement it in two respects. First, an analysis of the relation between privacy and autonomy shows that in the digital age individual autonomy is threatened more than ever. The striking contrast between offline vocabulary, where autonomy and individual decision making prevail, and online practices is a challenge that cannot be met by a social approach alone. Secondly, I elucidate the background of the social approach. Its importance is not exclusively related to the digital age. In public life we regularly face privacy moments, when in a small, clearly distinguished social domain a few people are commonly involved in common experiences. In the digital age the contextual integrity model of Helen Nissenbaum has become very influential. However, this model has some problems. Nissenbaum refers to a variety of sources and uses several terms to explain the normativity in her model. The notion ‘context’ is not specific and faces the reproach of conservatism. I elaborate on the most promising suggestion: the notion of ‘goods’ as it can be found in the works of Michael Walzer and Alasdair MacIntyre. Developing criteria for a normative framework requires making explicit the substantive goods that are at stake in a context and taking them as the starting point for decisions about the flow of information. Doing so delivers stronger and more specific orientations that are indispensable in discussions about digital privacy.


Introduction

Rethinking the concept of privacy in the digital age inevitably entangles the descriptive and the normative dimensions of this concept. Theoretically these two dimensions of privacy can be distinguished. One dimension can describe the degree of privacy people enjoy, without taking a normative stance about the desirable degree of privacy. In normative discussions, the focus is on the reasons why privacy is important for leading a fulfilling life. This distinction should not distract us from the fact that privacy is not a completely neutral concept; instead, it has a positive connotation. For example, an invasion of privacy is a violation of or intrusion into something valuable that should be protected. Discussion of the concept, however, brings into question why privacy should be cherished and protected. In the digital age, the normative dimension is the object of intense discussion. Existing dangers to privacy—because of big data applications, cloud computing, and profiling—are widely recognized, but feelings of resignation and ‘why should we bother?’ lie dormant. Defenders of privacy are regularly faced with scepticism, which is fueled by Schmidt’s ‘Innocent people have nothing to hide’ (Esguerra 2009) and Zuckerberg’s ‘Having two identities for yourself is a lack of integrity’ (Boyd 2014).

Traditionally in defences of privacy the focus has been on the individual (Rule 2015). Privacy was defined in terms of an individual’s space, which was seen as necessary for meeting the individual’s vital interests. In the last decade, however, we have seen a shift in the emphasis. A view of privacy as the norm that regulates and structures social life (the social dimension of privacy) has gained importance in both law and philosophical literature. For instance, the European Court of Human Rights previously stressed that data protection was an individual’s right not to be interfered with. However, more and more the Court is focusing on individuals’ privacy as protection of their relationships with other human beings (van der Sloot 2014). In philosophical literature on privacy, many scholars have explicitly distanced themselves from the individual approach and instead study the social dimensions of privacy (Roessler and Mokrosinska 2015). Helen Nissenbaum is by far the most important spokesperson for the social approach. She has introduced the notion of contextual integrity as an alternative to what she describes as too much focus on individuals’ rights-based notions of privacy (Nissenbaum 2009). Nissenbaum criticizes the so-called interest-based approach, which defines conflicts in terms of (violated) interests of the parties involved. For instance, ‘Uncontroversial acceptance of healthcare monitoring systems can be explained by pointing to the roughly even service to the interest of patients, hospitals, healthcare professionals and so on’. The problem with this approach, according to Nissenbaum, is that it sooner or later leads to ‘hard fought interest brawls’, which more often than not are settled to the advantage of the more powerful parties (Nissenbaum 2009, p. 8). It is necessary to create a justificatory platform to reason in moral terms. As a rights-based approach is not satisfactory, she proposes a normative approach that does more justice to the social dimension.

The distinction between a focus on the individual and privacy as a social value is not only of academic importance. For policies on privacy, this makes quite a difference. On the one hand, the emphasis can be on an individual’s right to decide about personal interests and on transparency for empowering the individual, as for instance the European Data Protection Supervisor asserts (EDPS Opinion 2015). On the other hand, the emphasis can also be on institutional arrangements that protect social relationships. The fact that good privacy policies require both kinds of measures should not be a reason to overlook their fundamental differences.

In this paper, we compare individual-based justifications of privacy with the social approach. We open with a discussion of the strengths of the individual-focused approach by relating privacy to a concept that has a strong normative sense and is most closely associated with individual-based privacy conceptions: autonomy. As we will see, a defence of privacy along these lines is both possible and necessary. In our discussion of the social approach, we focus on Helen Nissenbaum’s model. A critical discussion of the normative dimension will lead to suggestions for strengthening this model.

The individual approach

The importance of privacy: autonomy

The history of justifications of privacy starts with Warren and Brandeis’s (1890) legal definition of privacy as the right to be left alone. This classic definition is completely in line with the literal meaning of privacy. The word is a negativum (related to deprive) of public. The right to privacy is essentially the right of individuals to have their own domain, separated from the public (Solove 2015). The basic way to describe this right to be left alone is in terms of access to a person. In classic articles, Gavison and Reiman characterize privacy as the degree of access that others have to you through information, attendance, and proximity (Gavison 1984; Reiman 1984).

Discussion about the importance of privacy for the individual intensified in the second half of the twentieth century, as patterns of living in societies became more and more individualistic. Privacy became linked to the valued notion of autonomy and the underlying idea of individual freedom. In both literature on privacy and judicial statements, this connection between privacy and autonomy has been a topic of intense discussion. Sometimes the two concepts were even blended together, even though they should remain distinct. A sharp distinction between privacy and autonomy is necessary to get to grips with the normative dimension of privacy.

The concept of autonomy is derived from the ancient Greek words autos (self) and nomos (law). Especially within the Kantian framework, the concept is explicated in terms of a rational individual who, reflecting independently, makes his own decisions. Being autonomous was thus understood mainly as having control over one’s own life. In many domains of professional ethics (healthcare, consumer protection, and scientific research), autonomy is a key concept in defining how human beings should be treated. The right of individuals to control their own lives should always be respected. The patient, the consumer, and the research participant each must be able to make his or her own choices (Strandburg 2014). Physicians are supposed to fully inform patients; advertisers who are caught lying are censured; and informed consent is a standard requirement of research ethics. In each of these cases, persons should not be forced, tempted, or seduced into performing actions they do not want to do.

When privacy and autonomy are connected, privacy is described as a way of controlling one’s own personal environment. An invasion of privacy disturbs control over (or access to) one’s personal sphere. This notion of privacy is closely related to secrecy. A person who deliberately gains access to information that the other person wants to keep secret is violating the other person’s autonomy through information control. We see the emphasis on privacy as control over information in, for instance, Marmor’s description of privacy as ‘grounded in people’s interest in having a reasonable measure of control over the ways in which they can present themselves to others’ (Marmor 2015). Autonomy, however, does not entail an exhaustive description of privacy. It is possible for someone to have the ability to control, yet lack privacy. For instance, a woman who frequently absentmindedly forgets to close the curtains before she undresses enables her neighbour to watch her. If the neighbour does so, we can speak about a loss of the woman’s privacy. Nevertheless, the woman still has the ability to control. At any moment, she could choose to close the curtains. Thus, privacy requires more than just autonomy.

The distinction between privacy and autonomy becomes clearer in Judith Jarvis Thomson’s classic thought experiment (Taylor 2002). Imagine that my neighbour invented some elaborate X-ray device that enabled him to look through the walls. I would thereby lose control over who can look at me, but my privacy would not be violated until my neighbour actually started to look through the walls. It is the actual looking that violates privacy, not the acquisition of the power to look. If my neighbour starts observing through the walls but I am not aware of it and believe that I am carrying out my duties in the privacy of my own home, my autonomy would not be directly undermined. Not only in thought experiments, but also in literature and everyday life, we witness the difference between autonomy and privacy. Taylor refers to Scrooge in Dickens’ A Christmas Carol, who is present as a ghost at family parties. His covert observation of the intimate Christmas dinner party implies a breach of privacy, although he does not influence the behaviour of the other people. In everyday life, we do not experience an inadvertent breach of privacy (for instance, a passer-by randomly picking up some information) as a loss of autonomy.

These examples make it clear that there is a difference between autonomy, which is about control, and privacy, which is about knowledge and access to information. The most natural way to connect the two concepts is to consider privacy as a tool that fosters and encourages autonomy. Privacy thus understood contributes to demarcation of a personal sphere , which makes it easier for a person to make decisions independently of other people. But a loss of privacy does not automatically imply loss of autonomy. A violation of privacy will result in autonomy being undermined only when at least one additional condition is met: the observing (privacy-violating) person is in one way or another influencing the other person (Taylor 2002 ). Such a violation of privacy can take various forms. For instance, the person involved might feel pressure to alter her behaviour just because she knows she is being observed. Or a person who is not aware of being observed is being manipulated. This, in fact, occurs more than ever before in the digital age.

Loss of autonomy in the digital age

In the more than 100 years following Warren and Brandeis’ publication of their definition, privacy was mainly considered to be a spatial notion. For example, the right to be left alone was the right to have one’s own space in a territorial sense, e.g., at home behind closed curtains, where other people were not allowed. An important topic in discussions of privacy was the embarrassment experienced when someone else entered the private spatial domain. Consider, for example, public figures whose privacy is invaded by obtrusive photographers or people who feel invaded when someone unexpectedly enters their home (Roessler 2009 ; Gavison 1984 ).

The digital age is characterized by the omnipresence of hidden cameras and other surveillance devices. This kind of observation and the corresponding embarrassment that it can cause have changed our ideas about privacy. The main concern is not the intrusive eye of another person, but the constant observation, which can lead to the panopticon experience of the interiorized gaze of the other. It is self-evident that the additional condition is now being met, viz., the person’s autonomy is threatened. In situations in which the observed person feels inhibited from following his impulses (Van Otterloo 2014), the loss of privacy leads to diminished autonomy.

The loss of autonomy resulting from persistent surveillance becomes even more striking when we take into consideration the unprecedented collection and storage of non-visual information. Collecting data on individuals, such as through the activity of profiling, offers commercial parties and other institutions endless possibilities for approaching people in ways that meet the institution’s own interests. Driven by invisible algorithms, these institutions tempt, nudge, seduce, and convince individuals to participate for reasons that are advantageous to the institution. The widespread application of algorithms in decision-making processes intensifies the problem of loss of autonomy in at least two respects. First, when algorithms are used to track people’s behaviour, there is no ‘observer’ in the strict sense of the word; no human (or other ‘cognitive entity’) actually ever checks the individual’s search profile. Nevertheless, the invisibility of the watchful entity does not diminish the precision with which the behaviour is being tracked; in fact, it is quite the opposite. Second, in the digital age mere awareness of the possibility that surveillance techniques exist has an impact on human behaviour, independently of whether there is actually an observing entity. More than ever before, Foucault’s (1975) addition to Bentham’s panopticon model is relevant. The gaze of the other person is internalized.

This brings us to the conclusion that, despite the fact that a loss of privacy does not necessarily involve a loss of autonomy, in the digital age when privacy is under threat, the independence of individual decisions is typically also compromised.

These observations are striking when we consider that Western societies in particular focus on the individual person, whose autonomy is esteemed very highly. We can contrast the self-image and ego vocabulary that prevail in everyday life with online situations where an individual’s autonomy is lost. There are two examples of this from domains where autonomy has traditionally been considered to be very important and where it has come under threat.

Advertising

In consumer and advertising ethics, the consumer’s free choice is the moral cornerstone. In the online world, this ethical value is scarcely met. Digitalisation facilitates customised advertising, which originally was presented as a service for the individual. Tailored information was supposed to strengthen a person’s capacity to make choices to his own advantage. But now the procedure has degenerated; people are placed into a filter bubble based on algorithms and corporate policies that are unknown to the target persons. Individuals’ control and knowledge of the flow of information are lost. As we are all keenly aware, requiring people to agree with terms and conditions does nothing to solve the problem. In the first place, very few people even read them. This kind of autonomy is apparently too demanding for most people to exercise. Secondly, the terms and conditions do not themselves say anything about the algorithms. Today’s consumer finds himself in a grey area, where he struggles between exercising autonomy and being influenced by others.

Of course, it is an empirical question as to what degree the algorithms influence customers’ behaviour. The least we can say is that the wide application of algorithms suggests that they must have a substantial effect. Following the critical study of Sunstein (2009), in which he warns that the political landscape might become fragmented (‘cyberbalkanization’), much research has been undertaken on the influence of algorithms on political opinions. This has resulted in a nuanced view of the widespread existence of ‘confirmation bias’. For instance, it has been shown that the need for information that confirms one’s opinion differs from the need for other kinds of information, and that it is stronger in people who have more extreme political opinions. Furthermore, there turns out to be a major difference between how often individuals actively search for opinions similar to their own (what people usually do) and how often they consciously avoid opinions that differ from their own (which is far less frequent). People surfing the Internet often encounter news they were not consciously looking for, but which they nevertheless take seriously. This is called ‘inadvertent’ attention to news (Garret 2009; Tewksbury and Rittenberg 2009; Becker 2015, Chap. 4).

The question of how online networks influence exposure to perspectives that cut across ideological lines received a lot of attention after the Brexit referendum and the Trump election. Using data from 10.1 million Facebook users, Bakshy et al. confirm that digital technologies have the potential to limit exposure to attitude-challenging information. The authors observed substantial polarization among hard content shared by users, with the most frequently shared links clearly aligned with largely liberal or conservative populations. But one-sided algorithms are not always of decisive importance. The flow of information on Facebook is structured by how individuals are connected in the network. How much cross-cutting content an individual encounters depends on who his friends are and what information those friends share. According to Bakshy et al., on average more than 20% of an individual’s Facebook friends who report an ideological affiliation are from the opposing party, leaving substantial room for exposure to opposing viewpoints (Bakshy et al. 2015). Dubois and Blank, using a nationally representative survey of adult internet users in the UK, found that individuals do tend to expose themselves to information and ideas they agree with. But they do not tend to avoid information and ideas that conflict with their own. Particularly those who are interested in politics and those with diverse media diets tend to avoid echo chambers. Dubois and Blank observe that many studies are single-platform studies, whereas most individuals use a variety of media in their news and political information seeking practices. Measuring exposure to conflicting ideas on one platform does not account for the ways in which individuals collect information across the entire media environment. Even individuals who have a strong partisan affiliation report using general news sites, which are largely non-partisan and include a variety of issues (Dubois and Blank 2018; see also Alcott et al.). These findings are consistent with other studies that indicate that only a subset of Americans have heavily skewed media consumption patterns (Guess et al. 2016).

Research ethics

Corporations such as Google and Facebook, as well as data brokers, use people’s personal information in their research activities. One disturbing example is the research that Facebook conducted in 2014. The corporation experimented on hundreds of thousands of unwitting users, attempting to induce an emotional state in them by selectively showing either positive or negative stories in their news feeds (Kramer et al. 2014; Fiske and Hauser 2014). Acquiring information by manipulating people without their informed consent and without debriefing them is a gross violation of the ethical standards that established research institutions must follow.

Such violations of people's autonomy indicate a striking contrast between the offline ideals of most users and their online practices. Whereas in the offline world we typically take autonomy as a moral cornerstone, on the Internet this ideal is not upheld. How to deal with this discrepancy between the values upheld in the real world and those upheld on the Internet is one of the central challenges in discussions about privacy. When we do not strive for more clarity and transparency in the flow of information, we relinquish autonomy, a value that is deeply embedded in Western cultures.

The social approach

We might be tempted to associate the emergence of the social approach in discussions about privacy with the digital age, as if reflection on the social dimension of privacy were justified only in these times of rapid information flow. This, however, would be mistaken. During the twentieth century, an important undercurrent in discussions of privacy was an emphasis on the importance of privacy for social relationships. Privacy was seen as a component of a well-functioning society (Regan 2015), in that it plays an important role in what is described as a differentiated society. Privacy guarantees social boundaries that help to maintain the variety of social environments. Because privacy provides contexts for people to develop different kinds of relationships, respect for privacy enriches social life. Privacy also facilitates interactions among people along generally agreed patterns (Schoemann 1984). As the poet Robert Frost remarked in Mending Wall (1914), 'Good fences make good neighbors.'

This characteristic of privacy is important not only at an institutional level. In people's private lives, the creation and maintenance of different kinds of relationships is possible only when subtle differences in patterns of social behaviour and social expectations are recognized (Rachels 1984; Marmor 2015). Remarkably, this subtlety becomes clearest in examples of intrusions of privacy in public places. Consider (a) someone who deliberately attempts to sit beside lovers who are sitting together on a park bench, or (b) intrusive bystanders at the scene of a car accident. In both cases, the privacy of the persons involved is intruded upon in a significant way. The most trivial words and gestures can reflect a deep dedication and an intense relationship between two people. In one of the first descriptions of the core of privacy, the English jurist and philosopher Stephens depicted it as observation that is sympathetic (Schoemann 1984). 'Sympathetic' is derived from the Greek word sympathein, which means being involved with the same. Indeed, in private situations, different people experience the same things as important. A small, clearly distinguished domain is created, and the events should be shared only by those who directly participate in them. The persons involved are tied together by having undergone common experiences. They have an immediate relationship to what is at stake, and in this relationship they are deeply engrossed. An outside observer who has not participated in the common experience is viewed as invading their privacy. He cannot share the meaning of what is going on because he has not been directly involved.

When understood this way, the concept of privacy is helpful in explaining the difference between occasionally being noticed and being eavesdropped upon. In cases involving eavesdropping, someone participates in an indirect and corrupt way in what is going on. The participation is indirect because the person acquires knowledge without participating directly; the things that are at stake should not concern him. The participation is corrupt because the indirect participant is not genuinely interested in what is going on. He sees the others involved not primarily as people with their own sensibilities, goals, and aspirations, but as objects of his own curiosity. When the other people become aware that they are being observed, they begin to see themselves through the eyes of the observer, and they thereby lose their spontaneity. Their direct involvement in the meaning of what is at stake is lost.

In cases like these, neither the content of the action nor the secrecy surrounding it qualifies the actions as belonging to the private sphere. The content might be very trivial, but it would be offensive to the lovers sitting on the park bench to suggest that what they are expressing to each other could be made public. The most commonplace of actions—for instance, walking with one's children down the street—can be private. Note the indignation of people in the public eye about obtrusive photographers who take photographs of public figures while they are doing ordinary things, as we all do. The essence of secrecy is intentional concealment, but the private situations that we discuss here concern behaviour, inward emotions, and convictions that can be shown and experienced in various places that are accessible to everyone, as for instance in the case of the young couple we saw sitting in the park (Belsey 1992).

This characteristic of privacy in social relationships cannot be captured by the concept of autonomy in the sense of an individual independently and deliberately making his or her own choices. What is at stake in situations like these is not a lack of transparency, nor is there any question about the autonomy of an independent individual. The person is deeply engrossed in precarious and delicate situations involving social relationships. An intrusion on this person's privacy would mean that he feels inhibited in being immersed in the social interaction and in sharing the meaning at stake.

In order to do justice to this notion of privacy, other strategies for protecting privacy are required. It is not primarily an individual’s mastery that must be protected; rather, it is the possibility for the individual to be properly embedded in social relationships. To answer the question of how this concept of privacy manifests itself in the digital age, we turn to Helen Nissenbaum’s contextual integrity model, which is an elaboration of socially embedded privacy in the digital age.

Helen Nissenbaum’s contextual integrity model

After conducting several preliminary studies, Helen Nissenbaum published Privacy in Context (2009), a book that became very influential in philosophical and political debates on privacy. It inspired the Obama administration in the United States to focus on the principle of respect for context as an important notion in a document on the privacy of consumer data (Nissenbaum 2015). The core idea of Nissenbaum's model is presented in the opening pages of her book: 'What people care most about is not simply restricting the flow of information but ensuring that it flows appropriately.' In Nissenbaum's view, the notion 'appropriate' means that normative standards are not determined by an abstract, theoretically developed default. The criteria for people's actions and the expectations about the actions of other people are developed in the context of social structures that have evolved over time and are experienced in daily life. As examples of contexts, Nissenbaum mentions health care, education, religion, and family. The storage, monitoring, and tracking of data are allowed insofar as they serve the goals of the context. Privacy rules are characterized by an emphasis on data security and confidentiality, in order to ensure that the flow of information is limited to the people directly involved. The key players in the context have the responsibility to prevent the data from falling into the wrong hands.

Nissenbaum's model is well suited to the information age. It describes privacy in terms of the flow of information, and the model is easy to apply to institutional gatekeepers who deal with data streams. At the same time, the contextual approach deviates from the classical view of autonomy. Personal control of information loses ground, and shared responsibility, expressed through broader principles, becomes more important. Nissenbaum considers it a serious disadvantage of the autonomy approach that it is usually associated with notions of privacy based on individuals' rights: in the articulation of justificatory frameworks in policymaking and the legal arena, we often see major conflicts among parties who insist that their rights and interests be protected. She also distances herself from the connection between privacy and secrecy (for a recent description of this connection, see Solove 2015). Privacy is not forfeited by the fact that someone knows something about another person; within contexts, information about persons might flow relatively freely. In line with this, Nissenbaum puts into perspective the classic distinction between the private and the public realm. Contexts might transgress borders between the public and the private. For instance, professionals in social healthcare work with information that comes from intimate spheres. As professionals, however, they are part of the public domain. It is their professional responsibility to deal properly with the flow of information within the realm of their own activities.

Normative weakness and the threat of conservatism

Nissenbaum's rejection of autonomy as the basis for privacy raises questions about the normative strength of her model. Does she indeed deliver a justificatory platform or framework for reasoning in moral terms? She asserts that her model does so when she claims that the context provides a clear orientation, which can guide policies on privacy. This claim suggests that it is completely clear what a context is, as is the way in which it delivers a normative framework. In this respect, Nissenbaum's work has some flaws.

In her description of context as a structured social setting that guides behaviour, Nissenbaum refers to a wide array of scholars from social theory and philosophy. Nissenbaum (2009), for instance, reviews Bourdieu's field theory, Schatzki's notion of practice in which activities are structured teleologically, and Walzer's Spheres of Justice. There are, however, major differences among these authors. Schatzki focuses on action theory and the way in which people develop meaningful activities; Walzer describes the plural distribution of social goods in different spheres of human activity; and Bourdieu focuses on power relationships. When searching for a normative framework, it matters which of these approaches is taken as the starting point. The theories also differ in their emphasis on a descriptive (Bourdieu) versus a normative (Walzer) analysis.

This vagueness about the normative framework is a serious problem, because protection of privacy in the digital age requires systematic criteria to measure new developments against established customs. Nissenbaum assumes at the start that online technologies change the way in which information flows, but not the principles that guide the flow of information. The principles by which digital information flows must be derived from the institutions as they function in the offline world, i.e., the background social institutions (Nissenbaum 2009). Consider online banking as an example. In the digital age, contacts between customers and banks have completely changed. Impressive buildings in which people previously made financial transactions have been partly replaced by the digital flow of information. But the core principles regarding the actions of the actors (the so-called information and transmission principles) have not changed. This implies that people working within the context are familiar with the sensitive issues, and they have the final say. The only thing that must be done is to translate the principles to the new situation. If a novel practice results in a departure from entrenched norms, as Nissenbaum says, the novel practice is flagged as a breach, and we have prima facie evidence that contextual integrity has been violated (Nissenbaum 2009). Indeed, Nissenbaum admits that this starting point is inherently conservative, and she flags departures from entrenched practice as problematic (2009). She leaves open the possibility that completely new developments can lead to a revision of existing standards, and she gives ample guidelines about how to implement such a revision (Nissenbaum 2015).
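Nissenbaum's breach test lends itself to a small illustration. The sketch below is ours, not her formal apparatus; the context, roles, information types, and transmission principles are hypothetical examples. The idea it captures is the one just described: a flow that matches an entrenched norm of its context is unproblematic even over a new medium, while a flow that departs from every entrenched norm is flagged as prima facie evidence of a violation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    """An informational flow, described by the parameters of a contextual
    norm: in a context, information of a given type passes from a sender
    role to a receiver role under a transmission principle."""
    context: str
    sender_role: str
    receiver_role: str
    info_type: str
    transmission_principle: str

def flags_breach(flow: Flow, entrenched: set[Flow]) -> bool:
    """A flow that matches no entrenched norm of its context is flagged
    as a prima facie breach of contextual integrity."""
    return flow not in entrenched

# Hypothetical entrenched norm of the banking context.
norms = {
    Flow("banking", "customer", "bank", "account data", "confidentiality"),
}

# Online banking: a new medium, but the same information and
# transmission principles, so no breach is flagged.
online_transfer = Flow("banking", "customer", "bank",
                       "account data", "confidentiality")

# Selling account data to advertisers departs from every entrenched
# norm of the context and is flagged.
ad_targeting = Flow("banking", "bank", "advertiser",
                    "account data", "sale")

print(flags_breach(online_transfer, norms))  # False
print(flags_breach(ad_targeting, norms))     # True
```

The sketch also makes the conservatism visible: the test can only compare a novel flow against the norms already entrenched, so a genuinely new practice is always flagged, whether or not it is desirable.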

Nissenbaum's emphasis on existing practices must be understood in the context of a non-philosophical and non-sociological source, namely the notion of 'reasonable expectation', which plays an important role in United States jurisprudence on privacy. In the conclusion of her book, Nissenbaum (2009) describes privacy as 'a right to live in a world in which our expectations about the flow of personal information are, for the most part, met'. Reasonable expectation was the core notion in the famous case of Katz v. United States, which laid the foundation for privacy discussions in the United States. Before Katz, it had already been recognized that within one's own home, there was a justified expectation of privacy. Katz dealt with the kind of privacy situations in the public sphere described in the preceding paragraph. In this case, a phone call had been made from a public phone booth while enforcement agents used an external listening device to listen to the conversation. The Court considered this to be unjustified. The Fourth Amendment to the United States Constitution protects people, not places; therefore, the actions of the enforcement agents constituted an intrusion. Regardless of location, oral statements are protected if there is a reasonable expectation of privacy. This extension of privacy was a revolutionary development, and the notion of reasonable expectation turned out to work well, for instance in cases where the distinction between hard-to-obtain information and information that is in plain view plays an important role. In many cases, information that is in plain view carries no reasonable expectation of privacy. Consider the situation where the police accidentally uncover illegal drugs concealed in an automobile. In cases like this, an appeal to privacy to protect criminals cannot be justified.

However, the normative strength of the notion of reasonable expectation is weak. The notion refers to existing practices; 'reasonable' is whatever in a society counts as reasonable. In many cases, this might work out well. We usually do not need polls to make clear what 'reasonable' means. Eavesdropping is despised, yet video surveillance in a taxi is generally accepted. Police arbitrarily invading a house is not justified; police actively working to find concealed drugs are justified. In times of rapid development, however, referring to existing practices to find ultimate normative justification is not a good strategy, for at least two reasons. First, the danger of rigid conservatism might be just around the corner. This danger was already present in Nissenbaum's idea that standards for online intrusions of privacy must be derived from the offline world. In times of technological development, new problems make their appearance, and new technologies change the effects of existing rules. Particularly in the digital age, practices and normative conceptions are under pressure; existing frameworks cannot be used unequivocally. In the days of Katz, the distinction between hard-to-obtain information and information in plain view was based on how easy it was to access the information, irrespective of the type of information. This distinction is out of date in the digital age. The revolution in techniques of surveillance turns almost all information into information that is in plain view. Any development in surveillance or monitoring, if communicated well, might be placed under the umbrella of reasonable expectation. Suppose a government takes highly questionable measures (for instance, it collects all metadata on phone calls) and is completely honest about doing so. The government does not want to surprise its citizens, so it duly informs the public that this is how things are being done. Anyone who makes a phone call then has the expectation that her data will be stored.
We all know this is not simply a hypothetical example. The same pattern can be discerned in the way Google and Facebook justify their practices. Given Mark Zuckerberg's and Eric Schmidt's public statements, Facebook and Google users can hardly claim to have expectations about privacy. Ironically, the insistence on transparency, which is so often heard in debates on privacy, takes the sting out of the idea of reasonable expectation. Transparency implies that data streams can flow in all directions, as long as the responsible persons are open and honest about it (Schoonmaker 2016).

Some would suggest that the word 'reasonable' (as opposed to 'unreasonable') has a certain normative strength. The word refers to standards that have a certain degree of plausibility and are widely shared. Again, however, in order to guarantee protection of privacy, we need more guidance about what these standards mean, for the concept itself does not provide this guidance. The matter is turned upside down when we seek normative strength simply by referring to current practices.

The threat of conservatism in the digital age and the failure of the notion of reasonable expectation lead us to the conclusion that strong anchors, which meet certain criteria, are needed. This is apparent, first of all, in the conservative-progressive dimension. The standards must be related to existing frameworks, since alienation from these hampers acceptance. On the other hand, they should not be so rigid that promising new developments are impeded. Second, it is apparent in the general-specific dimension. To motivate people, standards must be general enough to allow a wide range of applications. Nevertheless, they should not be too vague; they must be specific enough to contain guidelines for action.

A variety of notions describing normative standards accompanies Nissenbaum's reference to various philosophical and sociological sources. As far as the conservative-progressive dimension is concerned, she switches, on the one hand, between the 'internal logic' of and 'settled rationale' for social systems, and she pleads, on the other hand, for the moral superiority of new practices (Lever 2015). Nissenbaum also speaks about ultimate criteria as delivered by the purposes and ends of the context. This description is too concrete in times of rapid technological development: today's targets become outmoded tomorrow. Some more general notion is required. In a recent refinement of her model, Nissenbaum (2015) provides more clarity. For example, she mentions a few domains of cooperative activity that need not count as contexts per se. The business model, for instance, does not count as a context, because in business the core value is earning money; when everything is for sale, it is impossible to develop independent, substantive landmarks. She also makes it clear that describing a context as a technological system is highly problematic. It leads to technological determinism and is therefore a petitio principii: normative standards for dealing with technological problems would then be derived from the technological developments themselves. A proper context can count as what she describes as a social domain. Remarkably, she hardly considers this notion.

The search for independent substantive landmarks might be guided by the expression 'norms and values', which Nissenbaum uses in her book. Norms are fixed standards, usually concrete descriptions of particular things that must be realised. Norms are necessary for guiding actions, but in times of fast change they are too rigid. Values, on the other hand, are very general, even though they are not vague. Values such as justice, responsibility, and efficiency are used in a wide variety of contexts. This is especially true for the group of values (e.g., justice, respect, integrity, decency) that concerns the way in which we treat other people. These values surpass the context; they are important in society as a whole. They are, therefore, too general to deliver a normative orientation for actions within a context. One way to solve this problem is to rewrite the values in a context-specific sense. This requires orientation points that refer to characteristics of the contexts.

At the end of her book, Nissenbaum admits that her description of context is deficient; she acknowledges that further research on the concept is necessary. We propose to follow a suggestion that Nissenbaum herself made. In a short paragraph in Privacy in Context, she refers to Michael Walzer's conception of goods as constitutive for contexts. It is the only notion to which she devotes a full paragraph; intriguingly, however, she does not elaborate on this concept in her later work. The notion could be very useful for making the underlying normativity in contexts more explicit.

The concept of substantial goods

In his famous Spheres of Justice, Walzer (1983) stresses that he does not include material objects of transaction in his definition of goods. Instead, he uses a broader and more abstract notion of goods: they are immaterial qualities that people conceive and create in the course of their actions. In his book, he comments on goods such as security, education, health, kinship, and life. While performing an action, people are oriented towards goods such as these. The goods come into people's minds before they come into their hands. Goods are, moreover, crucial for social relationships (Walzer 1983). The development of goods takes place in social contexts. For people to be able to live together, they must have more or less shared conceptions of the meaning of vital goods. The main goal of Walzer's book is to show that different spheres of action are characterized by different conceptions of goods and, consequently, by different principles of distribution. The book turned out to be a very important expression of an idea that became very influential in determining standards for professional conduct: when human beings closely share with others an orientation towards good actions, this leads to a proper professional life. Only when the goods are determined is it possible to calibrate the standards. Without going into detail, we can point to two lines of thought that have contributed to the elucidation and specification of the notion of the good.

Both Charles Taylor and Bernard Williams have distinguished goods from the objects of impulsive desires and wishes by explaining that goods have an impact at a deep level of motivation. Goods 'are judged as belonging to qualitatively different modes of living' (Taylor 1999). They are the fulfilment of deeper commitments and engagements. It is not the intensity of the desire but the sense of worth that makes life meaningful that is characteristic of human attitudes towards goods. Attachment to and engagement with goods extend over a longer period and lead to a deeper fulfilment when satisfied. Goods give meaning to professional life (Williams 1981).

For professional ethics, Alasdair MacIntyre's contributions have been of great importance. He elaborated a distinctive characteristic of the concept of the good, which professionals have very often used in dealing with moral dilemmas. In socially established cooperative activities—MacIntyre mentions various examples, such as chess, portrait-painting, and education—people are guided by internal goods, defined as abstract qualities that are realised in the course of an active life. MacIntyre distinguishes between internal goods and external goods such as money, power, and prestige. External goods are necessary only for maintaining organizations and institutions; the kernel of a practice consists in realising internal goods. The distinction between internal and external goods can be drawn along two lines. First, external goods are called external because they can also be acquired through activities that are not restricted to the practice. This is true in the sense that money, power, and prestige play a role in activities outside the practices, but it is also true in the sense that within practices it is possible to acquire money, power, and prestige through dishonest means. By contrast, internal goods can be acquired only by excelling in activities that belong to the practice. Second, external goods are always some individual's possession. The more someone has of them, the less there is for other people; they are always objects of competition. Internal goods, on the other hand, are not in short supply. Their achievement is good for the whole community whose members participate in the practice; they can be shared in the full sense of the word. Many people can be oriented towards acquiring them without being in conflict with one another. In fact, a common orientation strengthens the motivation of each member of the community.

The differences among these authors do not invalidate their common focus. The distinctions they make are insightful for understanding how certain kinds of activities contribute to a meaningful life. We describe them under the heading 'substantial goods', which furnish us with a normative framework that can be used to evaluate activities. During recent decades, this line of thought has played an important role in public administration (Becker and Talsma 2015), journalism (Borden 2007), business (Solomon 1992), healthcare (Day 2007), and science. It has been particularly helpful in distinguishing between qualities that are related to the content of the work on the one hand and institutional and external pressures on the other. For instance, the emphasis on substantial goods can elucidate the proper task of professions that are under pressure from quantitative performance measures, including journalists working in a democracy, scientists working in academic institutions, and public administrators who must answer to higher-level management. The ultimate goal of their actions does not lie in complying with external standards, but in realising goods that are themselves recognized as being of substantive importance.

Substantial goods in Helen Nissenbaum’s model

In the application of this line of thought to Helen Nissenbaum’s model, we make explicit the goods that are at stake in a context, and we take them as the starting point for decisions about the flow of information. This strategy can contribute to the solution of several problems that currently stand in the way of further applying Nissenbaum’s model.

A more explicit articulation of the goods at stake will be helpful in solving the problem of conservatism. We have discussed how the notion of 'reasonable expectation' and Nissenbaum's model might evoke the reproach of conservatism: an orientation towards fixed standards that does not do justice to new developments. The notion of substantial goods enables us to describe activities from a normative perspective without being restricted to certain activities. The meaning of goods can be translated into various activities. New developments lead to new interpretations of the goods involved, which, in turn, facilitate innovation. Take, for instance, education. Under the umbrella of a good education, a wide variety of patterns of education can be developed, and new trends can be incorporated.

Another advantage of elaborating the notion of goods is that it contributes to a sharper, context-specific meaning of broad, general values such as justice, respect, and integrity. These values are very important throughout society as a whole, but the price they pay for this general appreciation is that they are vague and abstract. A more precise meaning requires them to be applied in concrete contexts. This is exactly what Michael Walzer does with the value of justice in Spheres of Justice: he shows that the criteria for distribution depend on standards that differ from one context to another. Likewise, a precise description of the meaning of 'respect' in education (i.e., respect for the student or the teacher) differs from respect as understood in healthcare (i.e., respect for the patient). Knowledge of the substantial goods at stake is helpful when it comes to concretizing these broad notions. And this is no superfluous luxury in the digital age. For instance, in healthcare, explicit awareness of the meaning of respect for the patient helps to determine the appropriate flow of information that benefits the patient's health. It is, therefore, helpful in protecting the interests of the patient against institutional pressures or pressures from special interest groups.

In addition to these merits, an articulation of the substantial goods delivers a welcome intervention in an otherwise awkward debate about the different roles that privacy can play. Privacy is not exclusively positive. It can, for instance, be used to conceal poor practices; hiding information is a central feature of deception. Feminists have stated that privacy is 'the enemy of equality… placing ordinary people at the mercy of powerful people' (Marx 2015). For criminals, privacy is a cover for their activities. Relating privacy to the substantial goods it serves is helpful in these debates, in which privacy seems to be a double-edged sword. When it is clear which kinds of goods privacy serves (e.g. the goods of particular interest groups, emancipation, the common good), a context-specific discussion of the value of privacy is possible.

Finally, the notion of goods contains an important normative orientation, one distinguished from, for instance, economic imperatives. After all, commercial interests increasingly hamper privacy. A stronger awareness of the substantial goods at stake strengthens arguments against commodification. This is all the more important as privacy is increasingly encroached upon in terms of trade-offs. People are seduced into choosing between, for instance, more privacy versus more customized offers from corporations, or more privacy versus a lower insurance premium. In such trade-offs, privacy is described as a luxury that only wealthy people can afford (Criado and Such 2015). How far can we go without spoiling what is vital for leading a good life? A strong articulation of substantial goods will be helpful in placing a barrier between commercial pressures and leading a good life.

Conclusions

During the past decade, we have witnessed the emergence of the so-called social approach to privacy. This approach must be clearly distinguished from the autonomy approach. The two approaches rely on different normative frameworks and different justification strategies, and both have their merits in the digital age. Changing technologies threaten autonomy, and autonomy is indispensable for making clear what is at stake in discussions of privacy; neglecting autonomy and the processes that threaten to undermine it is harmful to individuals. The social approach, which has been an undercurrent for decades, gains importance in the digital age. When privacy is defined in terms of control over flows of information, an approach is required that surpasses the perspective of the individual. The right to privacy provides protection in relationships with other human beings and with institutions, in which the fulfilment and development of one's personal identity can be realised. The normative strength of this approach can be improved by a more explicit elaboration of the goods at stake.

Bakshy, E., Messing, S., & Adamic, L. (2015). Exposure to ideologically diverse news and opinion on Facebook. Science, 348 (6239), 1130–1132.

Becker, M. (2015). Ethiek van de digitale media. Amsterdam: Boom.


Becker, M., & Talsma, J. (2015). Adding colours to the shades of grey: Enriching the integrity discourse with virtue ethics concepts. In A. Lawton, Z. Van der Wal, & L. Huberts (Eds.), Ethics in public policy and management: A global research companion (pp. 33–49). London & New York: Routledge.


Belsey, A. (1992). Privacy, publicity and politics. In A. Belsey & R. Chadwick (Eds.), Ethical issues in journalism and the media (pp. 77–92). London: Routledge.

Borden, S. (2007). Journalism as a practice: MacIntyre, virtue ethics and the press. Farnham: Ashgate.

Boyd, D. (2014). It's complicated: The social lives of networked teens. New Haven, London: Yale University Press.

Criado, N., Such, J. M. (2015). Towards implicit contextual integrity. The Second International Workshop on Agents and CyberSecurity (ACySE), pp. 23–26.

Day, L. (2007). Courage as a virtue necessary to good nursing practice. American Journal of Critical Care, 16 (6), 613–616.

Dubois, E., & Blank, G. (2018). The echo chamber is overstated: The moderating effect of political interest and diverse media. Information, Communication & Society, 21 (5), 729–745.


Esguerra, R. (2009). Google CEO Eric Schmidt dismisses the importance of privacy. Blog Posting. Electronic Frontier Foundation. Retrieved January 10, 2014 from https://www.eff.org/deeplinks/2009/12/google-ceo-eric-schmidt-dismisses-privacy .

European Data Protection Supervisor (EDPS) Opinion 4/2015 ‘Towards a new digital Ethics’.

Fiske, S., & Hauser, R. M. (2014). Protecting human research participants in the age of big data. PNAS, 111 (38), 1375–1376.

Foucault, M. (1975). Surveiller et Punir. Naissance de la Prison . Paris: Gallimard.

Frost, R. (1914). Mending Wall. Poem in online collection. Retrieved March 13, 2014 from http://writing.upenn.edu/~afilreis/88/frost-mending.html .

Garrett, G. (2009). Echo chambers online? Politically motivated selective exposure among internet news users. Journal of Computer-mediated Communication, 14, 265–285.

Gavison, R. (1984). Privacy and the limits of law. In F. A. Schoeman (Ed.), Philosophical dimensions of privacy: An anthology (pp. 347–402). Cambridge: Cambridge University Press. (Repr. from Gavison, R. (1980). Privacy and the limits of law. The Yale Law Journal, 89(3), 421–471).

Guess, A., Nyhan, B., & Reifler, J. (2016). Selective exposure to misinformation: Evidence from the consumption of fake news during the 2016 U.S. presidential campaign. Paper, European Research Council. https://www.dartmouth.edu/~nyhan/fake-news-2016.pdf .

Kramer, A. D. I., Guillory, J. E., & Hancock, J. T. (2014). Experimental evidence of massive-scale emotional contagion through social networks. PNAS, 111 (24), 8788–8790.

Lever, A. (2015). Privacy, democracy and freedom of expression. In B. Roessler & D. Mokrosinska (Eds.), Social dimensions of privacy: Interdisciplinary perspectives (pp. 162–183). Cambridge: Cambridge University Press.

Marmor, A. (2015). What is the right to privacy? Philosophy & Public Affairs, 43 (1), 4–26.

Marx, G. T. (2015). Coming to terms: The kaleidoscope of privacy and surveillance. In B. Roessler & D. Mokrosinska (Eds.), Social dimensions of privacy: Interdisciplinary perspectives (pp. 32–49). Cambridge: Cambridge University Press.

Nissenbaum, H. (2009). Privacy in context: Technology, policy and the integrity of social life . Stanford: Stanford University Press.

Nissenbaum, H. (2015). Respect for context as a benchmark for privacy online: What it is and isn’t. In B. Roessler & D. Mokrosinska (Eds.), Social dimensions of privacy: Interdisciplinary perspectives (pp. 278–302). Cambridge: Cambridge University Press.

Rachels, J. (1984). Why privacy is important. In F. A. Schoeman (Ed.), Philosophical dimensions of privacy: An anthology (pp. 290–294). Cambridge: Cambridge University Press.

Regan, P. (2015). Privacy and the common good: Revisited. In B. Roessler & D. Mokrosinska (Eds.), Social dimensions of privacy: Interdisciplinary perspectives (pp. 50–70). Cambridge: Cambridge University Press.

Reiman, J. (1984). Privacy, intimacy and personhood. In F. A. Schoeman (Ed.), Philosophical dimensions of privacy: An anthology (pp. 300–316). Cambridge: Cambridge University Press.

Roessler, B. (2009). De glazen samenleving en de waarde van privacy. Filosofie & Praktijk, 30 (5), 20–29.

Roessler, B., & Mokrosinska, D. (2015). Introduction. In B. Roessler & D. Mokrosinska (Eds.), Social dimensions of privacy: Interdisciplinary perspectives (pp. 1–8). Cambridge: Cambridge University Press.

Rule, J. B. (2015). Privacy: The longue durée. In B. Roessler & D. Mokrosinska (Eds.), Social dimensions of privacy: Interdisciplinary perspectives (pp. 11–31). Cambridge: Cambridge University Press.

Schoeman, F. A. (1984). Privacy: Philosophical dimensions of the literature. In F. A. Schoeman (Ed.), Philosophical dimensions of privacy: An anthology (pp. 1–33). Cambridge: Cambridge University Press.

Schoonmaker, J. (2016). Proactive privacy for a driverless age. Information & Communications Technology Law, 25 (2), 96–128.

Solomon, R. (1992). Ethics and excellence: Cooperation and integrity in business . New York: Oxford University Press.

Solove, D. J. (2015). The meaning and value of privacy. In B. Roessler & D. Mokrosinska (Eds.), Social dimensions of privacy: Interdisciplinary perspectives (pp. 71–82). Cambridge: Cambridge University Press.

Strandburg, K. (2014). Monitoring, datafication and consent: Legal approaches to privacy in the big data context. In J. Lane, V. Stodden, S. Bender, & H. Nissenbaum (Eds.), Privacy, big data and the public good: Frameworks for engagement . Cambridge: Cambridge University Press.

Sunstein, C. (2009). Republic.com 2.0. Princeton, NJ: Princeton University Press.

Taylor, C. (1999). Human agency and language: Philosophical papers 1 . Cambridge: Cambridge University Press.

Taylor, J. S. (2002). Privacy and autonomy: A reappraisal. Southern Journal of Philosophy, 40 (4), 587–604.

Tewksbury, D., & Rittenberg, J. (2009). Online news creation and consumption. Implication for modern democracies. In A. Chadwick & P. Howard (Eds.), The Routledge handbook of internet politics (pp. 186–200). London: Routledge.

van der Sloot, B. (2014). Privacy as human flourishing: Could a shift towards virtue ethics strengthen privacy protection in the age of Big Data? Journal of Intellectual Property, Information Technology and Electronic Commerce Law, 5 (3), 230–244.

van Otterlo, M. (2014). Automated experimentation in Walden 3.0: The next step in profiling, predicting, control and surveillance. Surveillance and Society, 12, 255–272.

Walzer, M. (1983). Spheres of justice: A defence of pluralism and equality . Oxford: Blackwell.

Warren, S., & Brandeis, L. (1890). The right to privacy. Harvard Law Review , 4 (5), 193–220.

Williams, B. (1981). Persons, character and morality. In Moral luck. Cambridge: Cambridge University Press.


Author information

Authors and affiliations.

Radboud University Nijmegen, Nijmegen, The Netherlands

Marcel Becker


Corresponding author

Correspondence to Marcel Becker.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Becker, M. Privacy in the digital age: comparing and contrasting individual versus social approaches towards privacy. Ethics Inf Technol 21, 307–317 (2019). https://doi.org/10.1007/s10676-019-09508-z


Published : 17 July 2019

Issue Date : December 2019




How the Digital Age Is Affecting Students

Five books that give insight into how social media and technology are shaping today’s students and their learning.

Professional Development Books

Teachers don’t have to look far to see how changes in technology and social media are shaping students and influencing classrooms. We watch kids obsess over the latest apps as they chat before class. We marvel at the newest slang edging its way into student essays, and wonder at the ways constant smartphone communication is shaping students’ friendships, bullying, and even study habits.

To understand the internet-savvy students who fill our classrooms and the changing landscape of social media they inhabit, we need more than hot new gadgets or expensive educational software. The book list below is a starting point if you're looking for insight into how the digital age is shaping students, and for ideas about how to respond in the classroom.

Each book was chosen for its combination of research, story, and applicability to the classroom. Grab one or two to help you invent new strategies to reach students or reimagine your application of technology in your classroom.

Social Media

If you’ve ever wondered what students are doing with all their time on the internet, It’s Complicated: The Social Lives of Networked Teens  is for you. Author danah boyd dissects how and why kids rush to the online world. Using student interviews and stories, boyd describes the ways youngsters use social media to connect, escape, and eke out a little privacy away from their parents and teachers. She includes a chapter on how the internet has shaped young people’s understanding of personal and public spaces. Read this book if you want to help students optimize the knowledge and skills they already have as digital natives.

A clinical psychologist and researcher at MIT, Sherry Turkle isn’t against the smartphones our students love so much. But she is worried that the obsession with phones—and the texting and social media posting they enable—is impacting in-person discussion and deep conversation. In her book Reclaiming Conversation: The Power of Talk in a Digital Age , Turkle claims that students’ communication skills have changed. Her suggestions for taking back in-person conversation in a digital world can shape collaborative classrooms and guide teachers on how to help students improve peer-to-peer interactions.

Social media and the free flow of information have also influenced the language we use every day. In A World Without ‘Whom’: The Essential Guide to Language in the Buzzfeed Age , Emmy Favilla lays out a case for language shaped by the internet. This entertaining and informative 2017 book is peppered with pop culture examples ready for use in class, though like all pop culture references they’ll quickly become dated. Favilla’s writing is pragmatic; she offers advice on where to hold the line on traditional language and when readability and appeal to a new generation might be more important. As Favilla puts it, “We’re all just trying to be heard here.” The book is a timely reminder that social-media-fueled language innovation deserves some classroom discussion.

If you’re eager to understand larger trends affecting young students, pick up Jean Twenge’s iGen: Why Today's Super-Connected Kids Are Growing Up Less Rebellious, More Tolerant, Less Happy—and Completely Unprepared for Adulthood . Drawing from large data sets and longitudinal studies, Twenge examines everything from SAT scores to rates of loneliness. Her research-heavy book offers helpful hints about the impact of technology and other cultural changes. Read this book if you want to brainstorm about how to adapt classes and school structures to meet student needs. To bring students in on the conversation, consider using Twenge’s easy-to-read graphs as discussion kick-starters or as a way to provide historical context to current trends.

If you want to reimagine the way computers and video games might be used in the classroom, check out David Williamson Shaffer’s book How Computer Games Help Children Learn . A professor at the University of Wisconsin-Madison, Shaffer believes that video games can help schools foster creative thinking, problem solving, and strategic decision making. After all, making mistakes and trying out innovative strategies are less risky in a game than in real life. And even reluctant learners will often dive eagerly into video games. A lot has changed since the book’s publication in 2007, but its ideas—about what students can learn from video games, how video games engage students, and what issues to avoid—can guide you toward thoughtful, effective video game use.

Our students are steeped in the internet, social media, and all types of technological innovations, and it’s time for schools and teachers to carefully examine how these things interact with curriculum and learning.

November 1, 2013


The Reading Brain in the Digital Age: Why Paper Still Beats Screens

E-readers and tablets are becoming more popular as such technologies improve, but reading on paper still has its advantages

By Ferris Jabr

One of the most provocative viral YouTube videos in the past two years begins mundanely enough: a one-year-old girl plays with an iPad, sweeping her fingers across its touch screen and shuffling groups of icons. In subsequent scenes, she appears to pinch, swipe and prod the pages of paper magazines as though they, too, are screens. Melodramatically, the video replays these gestures in close-up.

For the girl's father, the video— A Magazine Is an iPad That Does Not Work —is evidence of a generational transition. In an accompanying description, he writes, “Magazines are now useless and impossible to understand, for digital natives”—that is, for people who have been interacting with digital technologies from a very early age, surrounded not only by paper books and magazines but also by smartphones, Kindles and iPads.

Whether or not his daughter truly expected the magazines to behave like an iPad, the video brings into focus a question that is relevant to far more than the youngest among us: How exactly does the technology we use to read change the way we read?


Since at least the 1980s researchers in psychology, computer engineering, and library and information science have published more than 100 studies exploring differences in how people read on paper and on screens. Before 1992 most experiments concluded that people read stories and articles on screens more slowly and remember less about them. As the resolution of screens on all kinds of devices sharpened, however, a more mixed set of findings began to emerge. Recent surveys suggest that although most people still prefer paper—especially when they need to concentrate for a long time—attitudes are changing as tablets and e-reading technology improve and as reading digital texts for facts and fun becomes more common. In the U.S., e-books currently make up more than 20 percent of all books sold to the general public.

Despite all the increasingly user-friendly and popular technology, most studies published since the early 1990s confirm earlier conclusions: paper still has advantages over screens as a reading medium. Together laboratory experiments, polls and consumer reports indicate that digital devices prevent people from efficiently navigating long texts, which may subtly inhibit reading comprehension. Compared with paper, screens may also drain more of our mental resources while we are reading and make it a little harder to remember what we read when we are done. Whether they realize it or not, people often approach computers and tablets with a state of mind less conducive to learning than the one they bring to paper. And e-readers fail to re-create certain tactile experiences of reading on paper, the absence of which some find unsettling.

“There is physicality in reading,” says cognitive scientist Maryanne Wolf of Tufts University, “maybe even more than we want to think about as we lurch into digital reading—as we move forward perhaps with too little reflection. I would like to preserve the absolute best of older forms but know when to use the new.”

Textual Landscapes

Understanding how reading on paper differs from reading on screens requires some explanation of how the human brain interprets written language. Although letters and words are symbols representing sounds and ideas, the brain also regards them as physical objects. As Wolf explains in her 2007 book Proust and the Squid, we are not born with brain circuits dedicated to reading, because we did not invent writing until relatively recently in our evolutionary history, around the fourth millennium B.C. So in childhood, the brain improvises a brand-new circuit for reading by weaving together various ribbons of neural tissue devoted to other abilities, such as speaking, motor coordination and vision.

Some of these repurposed brain regions specialize in object recognition: they help us instantly distinguish an apple from an orange, for example, based on their distinct features, yet classify both as fruit. Similarly, when we learn to read and write, we begin to recognize letters by their particular arrangements of lines, curves and hollow spaces—a tactile learning process that requires both our eyes and hands. In recent research by Karin James of Indiana University Bloomington, the reading circuits of five-year-old children crackled with activity when they practiced writing letters by hand but not when they typed letters on a keyboard. And when people read cursive writing or intricate characters such as Japanese kanji , the brain literally goes through the motions of writing, even if the hands are empty.

Beyond treating individual letters as physical objects, the human brain may also perceive a text in its entirety as a kind of physical landscape. When we read, we construct a mental representation of the text. The exact nature of such representations remains unclear, but some researchers think they are similar to the mental maps we create of terrain—such as mountains and trails—and of indoor physical spaces, such as apartments and offices. Both anecdotally and in published studies, people report that when trying to locate a particular passage in a book, they often remember where in the text it appeared. Much as we might recall that we passed the red farmhouse near the start of a hiking trail before we started climbing uphill through the forest, we remember that we read about Mr. Darcy rebuffing Elizabeth Bennet at a dance on the bottom left corner of the left-hand page in one of the earlier chapters of Jane Austen's Pride and Prejudice.

In most cases, paper books have more obvious topography than on-screen text. An open paper book presents a reader with two clearly defined domains—the left- and right-hand pages—and a total of eight corners with which to orient oneself. You can focus on a single page of a paper book without losing awareness of the whole text. You can even feel the thickness of the pages you have read in one hand and the pages you have yet to read in the other. Turning the pages of a paper book is like leaving one footprint after another on a trail—there is a rhythm to it and a visible record of how far one has traveled. All these features not only make the text in a paper book easily navigable, they also make it easier to form a coherent mental map of that text.

In contrast, most digital devices interfere with intuitive navigation of a text and inhibit people from mapping the journey in their mind. A reader of digital text might scroll through a seamless stream of words, tap forward one page at a time or use the search function to immediately locate a particular phrase—but it is difficult to see any one passage in the context of the entire text. As an analogy, imagine if Google Maps allowed people to navigate street by individual street, as well as to teleport to any specific address, but prevented them from zooming out to see a neighborhood, state or country. Likewise, glancing at a progress bar gives a far more vague sense of place than feeling the weight of read and unread pages. And although e-readers and tablets replicate pagination, the displayed pages are ephemeral. Once read, those pages vanish. Instead of hiking the trail yourself, you watch the trees, rocks and moss pass by in flashes, with no tangible trace of what came before and no easy way to see what lies ahead.

“The implicit feel of where you are in a physical book turns out to be more important than we realized,” says Abigail J. Sellen of Microsoft Research Cambridge in England, who co-authored the 2001 book The Myth of the Paperless Office . “Only when you get an e-book do you start to miss it. I don't think e-book manufacturers have thought enough about how you might visualize where you are in a book.”

Exhaustive Reading

At least a few studies suggest that screens sometimes impair comprehension precisely because they distort people's sense of place in a text. In a January 2013 study by Anne Mangen of the University of Stavanger in Norway and her colleagues, 72 10th-grade students studied one narrative and one expository text. Half the students read on paper, and half read PDF files on computers. Afterward, students completed reading comprehension tests, during which they had access to the texts. Students who read the texts on computers performed a little worse, most likely because they had to scroll or click through the PDFs one section at a time, whereas students reading on paper held the entire texts in their hands and quickly switched between different pages. “The ease with which you can find out the beginning, end, and everything in between and the constant connection to your path, your progress in the text, might be some way of making it less taxing cognitively,” Mangen says. “You have more free capacity for comprehension.”

Other researchers agree that screen-based reading can dull comprehension because it is more mentally taxing and even physically tiring than reading on paper. E-ink reflects ambient light just like the ink on a paper book, but computer screens, smartphones and tablets shine light directly on people's faces. Today's LCDs are certainly gentler on eyes than their predecessor, cathode-ray tube (CRT) screens, but prolonged reading on glossy, self-illuminated screens can cause eyestrain, headaches and blurred vision. In an experiment by Erik Wästlund, then at Karlstad University in Sweden, people who took a reading comprehension test on a computer scored lower and reported higher levels of stress and tiredness than people who completed it on paper.

In a related set of Wästlund's experiments, 82 volunteers completed the same reading comprehension test on computers, either as a paginated document or as a continuous piece of text. Afterward, researchers assessed the students' attention and working memory—a collection of mental talents allowing people to temporarily store and manipulate information in their mind. Volunteers had to quickly close a series of pop-up windows, for example, or remember digits that flashed on a screen. Like many cognitive abilities, working memory is a finite resource that diminishes with exertion.

Although people in both groups performed equally well, those who had to scroll through the unbroken text did worse on the attention and working memory tests. Wästlund thinks that scrolling—which requires readers to consciously focus on both the text and how they are moving it—drains more mental resources than turning or clicking a page, which are simpler and more automatic gestures. The more attention is diverted to moving through a text, the less is available for understanding it. A 2004 study conducted at the University of Central Florida reached similar conclusions.

An emerging collection of studies emphasizes that in addition to screens possibly leeching more attention than paper, people do not always bring as much mental effort to screens in the first place. Based on a detailed 2005 survey of 113 people in northern California, Ziming Liu of San Jose State University concluded that those reading on screens take a lot of shortcuts—they spend more time browsing, scanning and hunting for keywords compared with people reading on paper and are more likely to read a document once and only once.

When reading on screens, individuals seem less inclined to engage in what psychologists call metacognitive learning regulation—setting specific goals, rereading difficult sections and checking how much one has understood along the way. In a 2011 experiment at the Technion–Israel Institute of Technology, college students took multiple-choice exams about expository texts either on computers or on paper. Researchers limited half the volunteers to a meager seven minutes of study time; the other half could review the text for as long as they liked. When under pressure to read quickly, students using computers and paper performed equally well. When managing their own study time, however, volunteers using paper scored about 10 percentage points higher. Presumably, students using paper approached the exam with a more studious attitude than their screen-reading peers and more effectively directed their attention and working memory.

Even when studies find few differences in reading comprehension between screens and paper, screen readers may not remember a text as thoroughly in the long run. In a 2003 study Kate Garland, then at the University of Leicester in England, and her team asked 50 British college students to read documents from an introductory economics course either on a computer monitor or in a spiral-bound booklet. After 20 minutes of reading, Garland and her colleagues quizzed the students. Participants scored equally well regardless of the medium but differed in how they remembered the information.

Psychologists distinguish between remembering something—a relatively weak form of memory in which someone recalls a piece of information, along with contextual details, such as where and when one learned it—and knowing something: a stronger form of memory defined as certainty that something is true. While taking the quiz, Garland's volunteers marked both their answer and whether they “remembered” or “knew” the answer. Students who had read study material on a screen relied much more on remembering than on knowing, whereas students who read on paper depended equally on the two forms of memory. Garland and her colleagues think that students who read on paper learned the study material more thoroughly more quickly; they did not have to spend a lot of time searching their mind for information from the text—they often just knew the answers.

Perhaps any discrepancies in reading comprehension between paper and screens will shrink as people's attitudes continue to change. Maybe the star of A Magazine Is an iPad That Does Not Work will grow up without the subtle bias against screens that seems to lurk among older generations. The latest research suggests, however, that substituting screens for paper at an early age has disadvantages that we should not write off so easily. A 2012 study at the Joan Ganz Cooney Center in New York City recruited 32 pairs of parents and three- to six-year-old children. Kids remembered more details from stories they read on paper than ones they read in e-books enhanced with interactive animations, videos and games. These bells and whistles deflected attention away from the narrative toward the device itself. In a follow-up survey of 1,226 parents, the majority reported that they and their children prefer print books over e-books when reading together.

Nearly identical results followed two studies, described this past September in Mind, Brain, and Education, by Julia Parish-Morris, now at the University of Pennsylvania, and her colleagues. When reading paper books to their three- and five-year-old children, parents helpfully related the story to their child's life. But when reading a then-popular electronic console book with sound effects, parents frequently had to interrupt their usual “dialogic reading” to stop the child from fiddling with buttons and losing track of the narrative. Such distractions ultimately prevented the three-year-olds from understanding even the gist of the stories, but all the children followed the stories in paper books just fine.

Such preliminary research on early readers underscores a quality of paper that may be its greatest strength as a reading medium: its modesty. Admittedly, digital texts offer clear advantages in many different situations. When one is researching under deadline, the convenience of quickly accessing hundreds of keyword-searchable online documents vastly outweighs the benefits in comprehension and retention that come with dutifully locating and rifling through paper books one at a time in a library. And for people with poor vision, adjustable font size and the sharp contrast of an LCD screen are godsends. Yet paper, unlike screens, rarely calls attention to itself or shifts focus away from the text. Because of its simplicity, paper is “a still point, an anchor for the consciousness,” as William Powers writes in his 2006 essay “Hamlet's Blackberry: Why Paper Is Eternal.” People consistently report that when they really want to focus on a text, they read it on paper. In a 2011 survey of graduate students at National Taiwan University, the majority reported browsing a few paragraphs of an item online before printing out the whole text for more in-depth reading. And in a 2003 survey at the National Autonomous University of Mexico, nearly 80 percent of 687 students preferred to read text on paper rather than on a screen to “understand it with clarity.”

Beyond pragmatic considerations, the way we feel about a paper book or an e-reader—and the way it feels in our hands—also determines whether we buy a best-selling book in hardcover at a local bookstore or download it from Amazon. Surveys and consumer reports suggest that the sensory aspects of reading on paper matter to people more than one might assume: the feel of paper and ink; the option to smooth or fold a page with one's fingers; the distinctive sound a page makes when turned. So far digital texts have not satisfyingly replicated such sensations. Paper books also have an immediately discernible size, shape and weight. We might refer to a hardcover edition of Leo Tolstoy's War and Peace as a “hefty tome” or to a paperback of Joseph Conrad's Heart of Darkness as a “slim volume.” In contrast, although a digital text has a length that may be represented with a scroll or progress bar, it has no obvious shape or thickness. An e-reader always weighs the same, regardless of whether you are reading Marcel Proust's magnum opus or one of Ernest Hemingway's short stories. Some researchers have found that these discrepancies create enough so-called haptic dissonance to dissuade some people from using e-readers.

To amend this sensory incongruity, many designers have worked hard to make the e-reader or tablet experience as close to reading on paper as possible. E-ink resembles typical chemical ink, and the simple layout of the Kindle's screen looks remarkably like a page in a paper book. Likewise, Apple's iBooks app attempts to simulate somewhat realistic page turning. So far such gestures have been more aesthetic than pragmatic. E-books still prevent people from quickly scanning ahead on a whim or easily flipping to a previous chapter when a sentence surfaces a memory of something they read earlier.

Some digital innovators are not confining themselves to imitations of paper books. Instead they are evolving screen-based reading into something else entirely. Scrolling may not be the ideal way to navigate a text as long and dense as Herman Melville's Moby Dick, but the New York Times, the Washington Post, ESPN and other media outlets have created beautiful, highly visual articles that could not appear in print because they blend text with movies and embedded sound clips and depend entirely on scrolling to create a cinematic experience. Robin Sloan has pioneered the tap essay, which relies on physical interaction to set the pace and tone, unveiling new words, sentences and images only when someone taps a phone or a tablet's touch screen. And some writers are pairing up with computer programmers to produce ever more sophisticated interactive fiction and nonfiction in which one's choices determine what one reads, hears and sees next.
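The choice-driven reading described above can be modeled with a very small data structure. This is a hypothetical sketch (not any particular author's system): each passage is its text plus a mapping from reader choices to the next passage, so the sequence of taps or clicks determines what is read next.

```python
# Minimal sketch of interactive fiction: each passage pairs its text
# with a mapping from the reader's choices to the next passage name.

story = {
    "start": ("You reach a fork in the road.", {"left": "cave", "right": "river"}),
    "cave":  ("The cave is dark and quiet.", {}),
    "river": ("The river glitters in the sun.", {}),
}

def read(start, choices):
    """Follow a sequence of choices, collecting the text the reader sees."""
    passage, seen = start, [story[start][0]]
    for choice in choices:
        passage = story[passage][1][choice]
        seen.append(story[passage][0])
    return seen

print(read("start", ["left"]))
# ['You reach a fork in the road.', 'The cave is dark and quiet.']
```

Real systems layer sound, images and pacing on top, but the underlying branching structure is this simple.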

When it comes to intensively reading long pieces of unembellished text, paper and ink may still have the advantage. But plain text is not the only way to read.

Ferris Jabr is a contributing writer for Scientific American. He has also written for the New York Times Magazine, the New Yorker and Outside.

Scientific American Magazine Vol 309 Issue 5


Essay on Digital Technology

Students are often asked to write an essay on Digital Technology in their schools and colleges. And if you’re also looking for the same, we have created 100-word, 250-word, and 500-word essays on the topic.

Let’s take a look…

100 Words Essay on Digital Technology

What is Digital Technology?

Digital technology refers to any system, device, or process that uses digital information. This includes computers, smartphones, and the internet. It’s a part of our daily lives.

Benefits of Digital Technology

Digital technology makes our lives easier. It helps us communicate, learn, and work. For example, we can send emails, learn online, and create digital art.

Challenges of Digital Technology

However, digital technology also has challenges. It can lead to less physical activity and face-to-face interaction. Plus, it can be hard to protect personal information online.

The Future of Digital Technology

The future of digital technology is exciting. We can expect more advancements that will continue to change our lives.

250 Words Essay on Digital Technology

Introduction

Digital technology, a term encapsulating a wide array of software, hardware, and services, has revolutionized our world. It has altered how we communicate, learn, work, and entertain ourselves, shaping a new societal landscape.

The Evolution of Digital Technology

Digital technology has evolved exponentially over the past few decades. From the advent of personal computers and the internet, to the ubiquity of smartphones and the rise of artificial intelligence, each wave of technology has brought profound changes. This evolution has led to the digitization of various sectors, including education, healthcare, and commerce, thereby facilitating efficiency and convenience.

Impact on Society

The impact of digital technology on society is significant. It has democratized information, breaking down geographical and socio-economic barriers. Moreover, it has fostered global connectivity, enabling collaboration and interaction on an unprecedented scale. However, it also presents challenges such as privacy concerns and the digital divide, necessitating thoughtful policy-making and ethical considerations.

Future Prospects

The future of digital technology is exciting, with emerging fields like quantum computing, virtual reality, and blockchain promising to further transform our lives. Nonetheless, it is crucial to ensure that this digital revolution is inclusive and sustainable, balancing technological advancement with societal well-being.

In conclusion, digital technology, while presenting certain challenges, offers immense potential to reshape our world. As we navigate this digital age, it is incumbent upon us to harness this potential responsibly, ensuring that the benefits of digital technology are accessible to all.

500 Words Essay on Digital Technology

Introduction to Digital Technology

Digital technology, an umbrella term encompassing a myriad of devices, systems, and platforms, has revolutionized the world. It has transformed how we communicate, work, learn, and entertain ourselves, influencing every facet of our lives. This essay delves into the essence, benefits, and challenges of digital technology.

Understanding Digital Technology

Digital technology refers to any system, device, or process that uses a binary, numeric, or digital approach to create, store, process, and communicate information. It includes a broad range of technologies, such as computers, smartphones, digital televisions, email, robots, artificial intelligence, the Internet, and more. It is the cornerstone of the Information Age, underpinning the rapid exchange of information globally.
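The binary, numeric approach in this definition can be made concrete with a short example. The following sketch (the text "Hi" is purely illustrative) shows how ordinary text becomes the numeric bytes and binary digits a digital system actually stores and manipulates:

```python
# Every digital text, image or message is ultimately stored as numbers.
# Here a short text is encoded into numeric bytes, then into binary digits.

text = "Hi"
data = text.encode("utf-8")                      # text -> numeric bytes
bits = "".join(f"{byte:08b}" for byte in data)   # bytes -> binary digits

print(list(data))  # [72, 105]
print(bits)        # 0100100001101001
```

The same principle — representing information as sequences of bits — underlies every device and system named above, from smartphones to the Internet.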

Digital technology has brought about numerous benefits. Firstly, it has enhanced communication. Digital platforms like email, social media, and instant messaging allow for instantaneous, affordable, and efficient communication across the globe. Secondly, digital technology has revolutionized education. Online learning platforms, digital textbooks, and educational apps have made education more accessible and personalized.

Furthermore, digital technology has transformed the business landscape. E-commerce, digital marketing, and remote working tools have opened new avenues for business growth and flexibility. Lastly, digital technology has also made significant strides in healthcare, with telemedicine, electronic health records, and digital diagnostic tools improving healthcare delivery.

Despite its numerous benefits, digital technology also poses significant challenges. Privacy and security concerns are at the forefront, with cybercrime, data breaches, and identity theft becoming increasingly prevalent. Additionally, the digital divide, the gap between those with access to digital technology and those without, exacerbates social and economic inequalities.

Moreover, the over-reliance on digital technology can lead to health issues, including digital eye strain and mental health problems. The rapid pace of technological change also presents challenges, as individuals and businesses struggle to keep up with the latest trends and developments.

Conclusion: A Balanced Perspective on Digital Technology

In conclusion, digital technology, while transformative and beneficial, also presents significant challenges that society must address. It is crucial to approach digital technology with a balanced perspective, acknowledging its immense potential to drive progress and innovation, while also recognizing and mitigating its risks. As digital technology continues to evolve at a rapid pace, fostering digital literacy and promoting responsible digital citizenship will be key to harnessing its potential responsibly and equitably.

In the future, we must strive to create a digital world that is secure, inclusive, and beneficial for all. This will require concerted efforts from all stakeholders, including individuals, businesses, governments, and international organizations. The journey is complex, but the potential rewards are immense, promising a future where digital technology serves as a tool for empowerment, progress, and prosperity.

That’s it! I hope the essay helped you.


Happy studying!


Privacy in the Digital Age Essay

Contents: Introduction; Anonymity and the Internet; Anonymous Servers; Anonymous Users; Advantages and Disadvantages of Anonymity; Controversies and Responses; Bibliography

Introduction

Social, economic, and technological advances have dramatically increased the amount of information any individual can access or possess. Unfortunately, this has also brought about various challenges that must be addressed. 1 Generally, information is a vital treasure in itself, and the more one has the better. Having valuable intellectual, economic, and social information creates enormous opportunities and advantages for any individual.

Even though information is a treasure, it can also be a liability. Besides constantly seeking ways to acquire, keep, and dispose of it, users of information also want to make sure that what is seen and heard privately does not become public without their consent. In the present technologically advanced society, a number of factors have contributed to the high demand for information and hence the need for anonymity, security, and privacy.

Increased public awareness of the potential abuse of digital communication, especially the Internet is one major concern for all stakeholders. To a large extent, most Internet users are concerned about privacy and do not want all the information they send or receive over the Internet to be connected to them by name 2 .

This paper presents arguments indicating that it is critical for governments to impose restrictions on Internet anonymity. According to Kizza, 3 anonymity refers to the state of being nameless or having no identity.

Since it is extremely difficult for anybody to live a meaningful life while being totally anonymous, there are different types of anonymity that exist including pseudo anonymity and untraceable identity.

Pseudo anonymity is where one chooses to be identified by a certain pseudonym or code while untraceable identity implies that one is not known by any name.

Anonymity and the Internet

For many people, anonymity is one of the biggest worries as far as using the Internet is concerned. The virtual world may make it easier for dissidents to criticize governments, for alcoholics to talk about their problems and for shy people to find love. 4 However, anonymity also creates room for dishonest people to pose as children in chat rooms and for criminals to hide from law enforcers.

As such, Internet anonymity seems to cut both ways. According to proponents, preserving anonymity on the Internet may be the cornerstone of safeguarding privacy and a vital part of the constitutionally protected right to free speech. Critics have, however, argued that online anonymity permits people to affect others and not be held responsible or accountable for their actions.

In general, the use of the Internet has created room for individuals to operate in secret, without any one being able to tell who they are. In particular, the Internet provides two channels through which anonymous acts can be carried out. These are anonymous severs and anonymous users.

Anonymous Servers

With advances in software and hardware, anonymity on the Internet has grown through anonymous servers. These may be full anonymity servers or pseudonymous servers. When full anonymity servers are used, it is impossible to identify the packet headers.

In the case of pseudonymous servers, pseudonyms are usually placed inside packet headers to conceal identity. In the process, the actual identity gets hidden behind a pseudonym and any packets received thereafter are relayed to the real server. Anonymity servers are able to accomplish this through the use of encryption 5 .
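The pseudonym-in-the-header idea can be sketched in a few lines. This is an illustrative toy, not the protocol of any real anonymity service; the key, function names and addresses are all hypothetical. A keyed hash gives each user a stable alias that downstream recipients see, while only the relay keeps the mapping back to the real identity for routing replies:

```python
import hmac
import hashlib

# Hypothetical key known only to the relay; with it, the same identity
# always maps to the same pseudonym, but outsiders cannot reverse it.
SERVER_KEY = b"secret-relay-key"

def pseudonym(identity):
    """Derive a stable pseudonym from a real identity via a keyed hash."""
    return hmac.new(SERVER_KEY, identity.encode(), hashlib.sha256).hexdigest()[:16]

routing_table = {}  # the relay's private mapping: pseudonym -> identity

def relay(identity, message):
    alias = pseudonym(identity)
    routing_table[alias] = identity  # kept secret, used to route replies
    return alias, message            # all that a recipient ever sees

alias, msg = relay("alice@example.com", "hello")
print(alias == pseudonym("alice@example.com"))  # True: stable across messages
```

Encryption of the message body, as the paragraph notes, would be layered on top of this addressing scheme; the sketch covers only the identity-concealment step.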

Anonymous Users

Other options are also used to allow users to adopt false names to hide their identity as they use the Internet. With false names, they can proceed to use message boards or participate in chat rooms without being recognized by anyone.

This has sometimes led to sensitive or highly personal information being posted to user groups, news groups, and chat rooms. In addition, popular protocols are also used to provide anonymity to the users. Generally, these protocols accept messages relayed to servers with arbitrary field information.

Advantages and Disadvantages of Anonymity

To some extent, anonymity may be used to curb bad behavior and to warn culprits that they are being watched. This contributes greatly to ensuring that everyone in the organization behaves appropriately. Although whistle blowers are sometimes controversial, they are reliable on a number of occasions, such as when there is abuse of office and resources. Secondly, anonymity can be useful to those in charge of national security.

It may be used by underground spies to gather useful information for national defense. Where there is intimidation and fear of punishment, anonymity may be used to reveal useful information. Anonymity is also good for strengthening relationships and the security of some people 6 .

One of the disadvantages has to do with the fact that anonymity can make it easy for criminals and fraudsters to commit crime. It can also make it difficult to access information that may be useful for settling disputes.

Controversies and Responses

Anonymity, according to its defenders, is a right protected by the American Constitution. In a notable 1995 case concerned with the distribution of anonymous pamphlets, the Supreme Court noted that anonymity acts as a shield for individuals. Enshrined in law or not, the power to remain anonymous is often taken for granted by members of democratic societies.

Many authors have written controversial works using pseudonyms, politicians comment confidentially using generic titles like a spokesperson, and one of the first principles of journalism is never to divulge the identity of an anonymous source. It is important to note that anonymity is central to free speech and free speech is central to democracy.

According to Lambert 7 , anonymity can be a weapon that damages or destroys reputations. Defenders of anonymity are always concerned that the idea of anonymity on the Internet is regarded differently from any other kind of anonymity.

If the Supreme Court recognizes that anonymous books and leaflets are a justified form of free speech, the argument goes that Internet communication should be treated the same. Where anonymity is concerned, radio and television are treated differently from books because they are broadcast media.

They are not disseminated the same way and are harder to ignore. Although critics charge that Internet anonymity should be subject to special regulation, one of the basic premises of devising laws for the Internet is that they should be technologically neutral.

According to law enforcers, the Internet's built-in anonymity makes it a safe haven not just for whistle blowers and dissidents but also for criminals and terrorists. In November 2002, newspapers reported that the Pentagon had briefly considered and rejected an idea called e-DNA, which would have tagged Internet traffic with personalized markers.

Since human DNA is unique to every individual, DNA samples taken from crime scenes can often be used to trap criminals. In much the same way, the Pentagon's Defense Advanced Research Projects Agency (DARPA) hoped that Internet traffic tagged with e-DNA markers would be traceable to individuals and their computers. Had the plan not been scuttled, it would have outlawed most forms of Internet anonymity.

However, if anonymity is a cornerstone for democracy, as proponents allege, it would seem to be worth going to some lengths to defend. Apparently, this would require more than passing laws to protect Internet users who want to remain anonymous.

Ultimately, the recognition of the different kinds of anonymity might be necessary, followed by the treatment of the various forms of anonymity in different ways, including legal protection for uses of anonymity that are not connected to criminal behavior.

It may also be necessary to come up with ways to distinguish between those hiding behind their anonymity to commit crime and those using it for whistle blowing purposes. The distinction will help organizations to determine if it is necessary to allow anonymity in a given situation.

Strangely enough, anonymity may be either complicated or simplified by the Internet, given that communication via the Internet happens secretly and a user's identity cannot be determined with absolute certainty.

As has been discussed in this paper, anonymity has its good and bad side. If left unchecked, innocent individuals in the society will be subjected to undeserved suffering. In a number of cases, therefore, it is necessary either for a local authority or national legislatures to pass laws that regulate when and who can use anonymity legally.

In the current environment of the Internet, there are serious debates on the freedoms of individuals on the Internet and how these freedoms can be protected when dealing with people on the Internet under the cover of anonymity.

Bibliography

Kizza, Joseph. Ethical and Social Issues in the Information Age. Chattanooga, TN: Springer, 2010.

Lambert, Laura. The Internet: Biographies. Santa Barbara, California: ABC-CLIO, 2005.

Schwabach, Aaron. Internet and the Law: Technology, Society, and Compromises. Santa Barbara, California: ABC-CLIO, 2006.

1 Joseph Kizza, Ethical and Social Issues in the Information Age (Chattanooga, TN: Springer, 2010), 23.

2 Aaron Schwabach, Internet and the Law: Technology, Society, and Compromises (Santa Barbara, California: ABC-CLIO, 2006), 45.

3 Joseph Kizza, Ethical and Social Issues in the Information Age (Chattanooga, TN: Springer, 2010), 24.

4 Laura Lambert, The Internet: Biographies (Santa Barbara, California: ABC-CLIO, 2005), 53.

5 Joseph Kizza, Ethical and Social Issues in the Information Age (Chattanooga, TN: Springer, 2010), 31.

6 Laura Lambert, The Internet: Biographies (Santa Barbara, California: ABC-CLIO, 2005), 61.

7 Laura Lambert, The Internet: Biographies (Santa Barbara, California: ABC-CLIO, 2005), 65.

IvyPanda. (2020, January 21). Privacy in the Digital Age. https://ivypanda.com/essays/privacy-in-the-digital-age-essay/




Many Tech Experts Say Digital Disruption Will Hurt Democracy

3. Concerns about democracy in the digital age


About half of the experts responding to this canvassing said people’s uses of technology will mostly weaken core aspects of democracy and democratic representation, but even those who expressed optimism often voiced concerns. This section includes comments about problems that were made by all respondents regardless of their answer to the main question about the impact of technology on democracy by 2030. These worries are organized under seven themes.

Empowering the powerful: Corporate and government agendas generally do not serve democratic goals or achieve democratic outcomes. They serve the goals of those in power

An internet pioneer and technology developer and administrator predicted, “My expectation is that by 2030, as much as 75% of the world’s population will be enslaved by artificial intelligence-based surveillance systems developed in China and exported around the world. These systems will keep every citizen under observation 24 hours a day, seven days a week, monitoring their every action.”

Dan Gillmor, co-founder of the News Co/Lab at Arizona State University’s Walter Cronkite School of Journalism and Mass Communication, and professor of practice in digital media literacy commented, “Governments (and their corporate partners) are broadly using technology to create a surveillance state, and what amounts to law by unaccountable black-box algorithm, far beyond anything Orwell imagined. But this can only happen in a society that can’t be bothered to protect liberty – or is easily led/stampeded into relinquishing it – and that is happening in more and more of the Western democracies. The re-emergence of public bigotry has nothing to do with technology, except to the extent that bigots use it to promote their malignant goals. Meanwhile, the institutions that are supposed to protect liberty – journalism among them – are mostly failing to do so. In a tiny number of jurisdictions, people have persuaded leaders to push back on the encroachments, such as a partial ban on government use of facial recognition in San Francisco. But the encroachments are overwhelming and accelerating.”

Leah Lievrouw, professor of information studies at the University of California-Los Angeles, wrote, “To date, virtually no democratic state or system has sorted out how to deal with this challenge to the fundamental legitimacy of democratic processes, and my guess is that only a deep and destabilizing crisis (perhaps growing out of the rise of authoritarian, ethnic or cultural nationalism) will prompt a serious response.”

Seth Finkelstein, programmer, consultant and EFF Pioneer of the Electronic Frontier Award winner, wrote, “Warren Buffett has said, ‘There’s class warfare, all right, but it’s my class, the rich class, that’s making war, and we’re winning.’ We can examine how this class warfare changes with advances in technology, analogous to how military warfare has been affected by technology. But no weapons technology to date has inevitably produced democracy over dictatorship (or vice-versa). For example, there once was a type of boosterism that talked about how ordinary people could make websites and promoted its very rare cause célèbre success. But that storyline is now going out of fashion. It’s finally getting to be pundit knowledge that there’s a whole system behind which material gets promoted. Paid professional liars can both make websites themselves and work this system better than amateurs. There’s currently a national panic over Russian trolls. But native fiends can do the same thing, with more skill, incentive and opportunities.”

David Bray, executive director for the People-Centered Internet Coalition, commented, “The power of narratives is exactly their ability to shape and institutionalize norms and power distribution in our human communities. … Now, however, our world is much broader than our immediate environment, and this has dangerous side effects, such as challenges in reaching consensus or disputing the relevant facts for a situation. We are seeing increasing polarization in open societies, partly as a result of these questions of where we want to go not being considered in ways that can translate to action. An even larger question is where do different localities want to go in terms of progress in parallel to what values or norms they want to hold dear? This is a question that spans sectors. No one organization or influencer or group with power can either solely answer or execute actions toward that desired future state. In the absence of finding ways to build bridges that span sectors, power – through narratives, laws, or technologies – will be grabbed by whomever aspires to this. An important question for the future is can we build such bridges across sectors? Will our divisions be our undoing as open, pluralistic societies? Can we develop narratives of hope for open, pluralistic societies that bring people together?”


Miguel Moreno, professor of philosophy at the University of Granada, Spain, an expert in ethics, epistemology and technology, commented, “There is a clear risk of bias, manipulation, abusive surveillance and authoritarian control over social networks, the internet and any uncensored citizen expression platform, by private or state actors. There are initiatives promoted by state actors to isolate themselves from a common internet and reduce the vulnerability of critical infrastructures to cyberattacks. This has serious democratic and civic implications. In countries with technological capacity and a highly centralized political structure, favorable conditions exist to obtain partisan advantages by limiting social contestation, freedom of expression and eroding civil rights.”

Richard Jones, an entrepreneur based in Europe, said, “Government will lag exploitation of data by state and corporate actors in unforeseen ways. Biased censorship (both well-intentioned and corrupt) and propaganda onslaughts will shape opinions as – combined with an anti-scientific revolution – confidence in the institutions and establishment figures essential to peaceful orderly improvement of societies crumbles further. Hysterical smear attacks will further intensify as attempts to placate minority pressure groups continue. Biased technocratic groupthink will continue its march toward authoritarianism. Charismatic leadership will flourish in truly liberal systems. Authoritarianism will take root elsewhere. Online preference surveys may be developed to guide many choices facing government, but it is not clear that can correct the current democratic deficit in a helpful way. As during the Gutenberg process, accompanying the digestion of ‘free-range’ information will be the reevaluation of secular and religious values and objectives.”

John Sniadowski, a systems architect based in the United Kingdom, wrote, “It is proving very difficult to regulate multinational corporations because of the variety of different national government agendas. A globally enacted set of rules to control multinationals is unlikely to happen because some sovereign states have very illiberal and hierarchical control over agendas and see technology as a way to dominate their citizens with their agendas as well as influence the democratic viewpoints of what they consider to be hostile states. Democracy in technological terms can be weaponized.”

Kevin Gross, an independent technology consultant, commented, “Technology can improve or undermine democracy depending on how it is used and who controls it. Right now, it is controlled by too few. The few are not going to share willingly. I don’t expect this to change significantly by 2030. History knows that when a great deal of power is concentrated in the hands of a few, the outcome is not good for the many, not good for democracy.”

Robert Epstein, senior research psychologist at the American Institute for Behavioral Research and Technology, said, “As of 2015, the outcomes of upward of 25 of the national elections in the world were being determined by Google’s search engine. Democracy as originally conceived cannot survive Big Tech as currently empowered. If authorities do not act to curtail the power of Big Tech companies – Google, Facebook and similar companies that might emerge in coming years – in 2030, democracy might look very much as it does now to the average citizen, but citizens will no longer have much say in who wins elections and how democracies are run. My research – dozens of randomized, controlled experiments involving tens of thousands of participants and five national elections – shows that Google search results alone can easily shift more than 20% of undecided voters – up to 80% in some demographic groups – without people knowing and without leaving a paper trail (see my paper on the search engine manipulation effect ). I’ve also shown that search suggestions can turn a 50/50 split among undecided voters into a 90/10 split – again, without people knowing they have been influenced. The content of answer boxes can increase the impact of the search engine manipulation effect by an additional 10% to 30%. I’ve identified about a dozen largely subliminal effects like these and am currently studying and quantifying seven of them. I’ve also shown that the ‘Go Vote’ prompt that Google posted on its home page on Election Day in 2018 gave one political party at least 800,000 more votes than went to the opposing party – possibly far more if the prompt had been targeted to the favored party.”

A longtime internet-rights activist based in South Africa responded, “Whether the powers of states and tech corporations can be reined in effectively is the current struggle. The genie is out of the bottle and it does not bode well for systems of democracy that have already been undermined in Western states. A state of global cyber war now exists and is likely to persist over the next decade. The oligopoly of state-supported tech companies, whether in the U.S. or China, will be difficult to break. It is trite to differentiate between a Google or an Alibaba – both received substantial state support from their respective governments – the Googles by failure to apply antitrust law to prevent monopolization, the Alibabas by state protection against competition in China.”

David P. Reed, a pioneering architect of the internet and an expert in networking, spectrum and internet policy, wrote, “‘Democracy’ in 2030 will be democracy in name only. The mechanisms of widespread corporate surveillance of user behavior and modification of user behavior are becoming so sophisticated that the citizen interests of democratic-structured countries will no longer be represented in any meaningful way. That is, by collecting vast amounts of information about user preferences and responses, and the use of highly targeted behavior modification techniques, citizens’ choices will be manipulated more and more in the interests of those who can pay to drive that system. The current forms of democracy limit citizen participation to election events every few years, where issues and candidates are structured by political parties into highly targeted single-vote events that do not represent individuals’ interests. Instead, a small set of provocative ‘wedge’ issues are made the entire focus of the citizen’s choice. This is not representation of interests. It is a managed poll that can easily be manipulated by behavior modification of the sort that technology is moving toward.”

A pioneering technology editor and reporter for one of the world’s foremost global news organizations wrote, “I do not have great faith that the institutions tasked with ensuring that online discourse is civil and adheres to standards of truth and fairness will be able to prevail over tendencies of autocratic governments and powerful private sector actors to use cyberspace for narrow political ends. … The internet has never had an effective governing body with any considerable clout to set policy that might guarantee network neutrality on a global scale, inhibit censorship and apply such conventions as the Universal Bill of Human Rights. Further, a handful of platforms whose moral compass has been questioned have come to dominate the online world. Some are dominated by governments. Others owe allegiance only to shareholders.”

Jerry Michalski, founder of REX, the Relationship Economy eXpedition, wrote, “‘Capital G’ Government has devolved into a phony consumer mass-marketing exercise. ‘Small g’ governance could involve active, ongoing collaboration among citizens, but it won’t as long as the major platforms they use have as their business models to addict them to TikTok videos, and to sell off their private data to companies that want to stalk them.”

Jonathan Kolber, author of “A Celebration Society: Solving the Coming Automation Crisis,” said, “Deepfakes will completely muddy the difference between facts and falsehood, a distinction that few citizens are equipped to make even now. This will have devastating effects upon democratic institutions and processes. … We are increasingly seeing George Orwell’s nightmare unfold as governments learn to use internet-enabled smart devices (televisions, smartphones, etc.) for surveillance. When the Internet of Things extends to smart cars, smart homes and so forth, the surveillance will be universal and unending. Governments are also increasingly redefining facts and history.”

A professor of computer science said, “Artificial intelligence technology, especially machine learning, has a feedback loop that strongly advantages first movers. Google’s advantages in being a better search engine have now been baked in by its ability to accumulate more data about user search behavior. This dynamic is inherently monopolistic, even more so than prior technological advances. Persuasive technologies built using these technologies are capable of refining and shaping public opinion with a reach and power that totalitarian governments of the 20th century could only dream of. We can be sure that today’s regulatory mood will either dissipate with nothing done, or more likely, become a driver that entrenches existing monopolies further by creating technical demands that no competitor can surmount. Democratic institutions will have a very difficult time countering this dynamic. Uber’s ‘greyball’ program, intended to defeat regulation and meaningful audit, is a harbinger of the future.”

Jonathan Taplin, author of “Move Fast and Break Things: How Google, Facebook and Amazon Cornered Culture and Undermined Democracy,” said, “Social media will continue to enable new and more-sophisticated forms of propaganda and disinformation. Artificial intelligence will enable deepfake videos that the average citizen will be taken in by. Facebook, YouTube and Twitter will continue to enable this content in their unending chase for revenue. Politicians will make noises about regulation, but since these platforms will become their primary source of advertising and publicity, they will never commit to the elimination of Safe Harbor and other rules that protect the social networks.”

Bulbul Gupta, founding adviser, Socos Labs, a think tank designing artificial intelligence to maximize human potential, responded, “Given the current state of tech and artificial intelligence ownership, I expect democracy to be even more unequal between the haves and have-nots by 2030, and a major uprising happening from the masses who are being quickly left behind. Tech and AI are owned by their creators, the top 1%, with decisions made about the 100% in every sector of society that have little to no transparency, human judgment or much recourse, and that may not get made the same if they were being forced to happen face to face. People will need their own personal AIs in their corner to protect their basic civil and human rights.”

Carlos Afonso, an internet pioneer and digital rights leader based in Rio de Janeiro, Brazil, wrote, “Thomas Piketty and others demonstrate that inequality is, if anything, rising everywhere. Democracy understood as pluralist participation in political processes involving the electoral (supposedly unbiased) choices of government representatives, and the decision-making processes in building policies, legislation and regulation, cannot survive in these conditions. … One of the greatest achievements of the UN community was the consensus agreement on trying to reach the 17 sustainable development goals by 2030. However, conflicts of all kinds, internal and inter-country, give us no hope that the essential components of those goals will be achieved worldwide. Also, there is (partly in consequence of the various manifestations of a growing economic crisis with the financial speculators at the head of these processes) little chance that resources will increase to cover the essential needs of the majority.”

James Sigaru Wahu, assistant professor, media, culture and communication, New York University and fellow at Harvard’s Berkman Klein Center, wrote, “As we have seen across the Global North, tech has only worked to make worse offline tension. This has resulted in multiple challenges toward notions of democracy as shown by the Brexit debacle, 2016 presidential elections and violence against immigrant groups. We have also seen states get in the act through the use of technology to expand their surveillance powers, as is the case in China and in the UK (with its large CCTV camera presence). States in the Global South have also gotten into the surveillance game, which does not bode well for organizations and people advocating for human rights. What we have thus seen is countries like Russia and China growing in strength in tech surveillance and misinformation/disinformation while the United States and several police departments across the country rely on companies such as Palantir to expand their surveillance on citizens. Both of these have led to disastrous results.”

Lokman Tsui, professor at the School of Journalism and Communication of The Chinese University of Hong Kong, formerly Google’s Head of Free Expression in Asia and the Pacific, said, “The political economy of new technologies that are on the horizon leaves me with many concerns for how they will impact democracy and its institutions. First, many of the new technologies, including artificial intelligence, machine learning and big data, are closed and centralized in nature. Unlike the open web before it, these technologies are closed and centralized, both in terms of technical design and also in terms of business model. The technology can indeed be used to improve democratic institutions and processes, but it will be hard and there will be many obstacles to overcome. Second, the new technologies are not only not helping democracies, but they, by their design, are also helping and strengthening non-democracies to further censorship and surveillance. While there are also technologies to counteract these tendencies, the balance tends to tip (heavily) in favor of the other side. Third, I’m concerned there is a global rat race toward the bottom when it comes to the collection of (personal) data, which has the potential to enable the suppression of many other rights.”

Norton Gusky, a futurist and advocate for implementing technology to empower people, commented, “For many years I truly believed that the internet would bring greater access to information that would strengthen democracy. However, in the past four to five years, I’ve witnessed a darker side to the internet. We now see countries like Russia interfering in the elections of not just the United States, but other countries throughout the world. I think there will be a swing, but for the next two to four years, the darker forces will prevail. We’ll see countries like Turkey, China and Egypt limiting the access to the ‘truth.’ Even former pillars of democracy, Britain and France, are challenged by forces misusing digital tools.”

Paola Ricaurte, fellow, Berkman Klein Center for Internet & Society, wrote, “Even after we are aware of the negative implications that technology can have on democratic processes, we have not seen significant actions by the U.S. government to limit the power of tech corporations. The extraterritorial control of technology companies will be further expanded and will continue to have consequences for the democracies of the Global South. The knowledge gap between data-rich countries and data-poor countries will deepen.”

Ian O’Byrne, assistant professor of education at the College of Charleston, wrote, “Power and money ultimately influence decisions made by democratic bodies. With growing unrest, citizens can use social media and current/new digital tools to make themselves heard. Ultimately this will be pushed back again by existing powerholders and nothing may ultimately change. The existing powerholders will continue to exert their influence, and citizens will be left to continue to voice their opinions by shouting into the cyberverse.”

Jeffrey Alexander, senior manager for innovation policy at RTI International, said, “In societies where people are accustomed to power being centralized in a few institutions, and where central governments already exert power through surveillance and state authority, digital technology will facilitate intimidation, disinformation and other mechanisms for reducing individual liberty, suppressing minority opinion and enforcing authoritarian control. This will enable such governments to enhance the appearance of following democratic norms, such as offering ‘free and open’ elections, but use those mechanisms to reinforce their power by suppressing dissent well before voters reach the polls. In societies with strong individual education and a tradition of liberty and citizen-driven initiatives, digital technology could help thwart the rise of authoritarian rule, improve oversight and governance of law enforcement and policy processes, and enhance citizen involvement in government and politics.”

John Pike, director and founder of GlobalSecurity.org, said, “Democracy in 2030 will face the best of times and the worst of times. All the optimistic predictions about social media and other online implementations strengthening citizen participation will be realized. All the pessimistic predictions about the ease with which the surveillance state can manipulate public opinion will also be realized. Autocratic regimes such as Russia and China are skilled at such dark arts at home and will practice them globally. In the old days it was pretty obvious that the Communist Party USA member hawking the Daily Worker was working for Moscow, but now attribution is difficult and contested.”

Shane Kerr, an engineer for an internet security firm, said, “Those with resources will be able to harness technology more effectively to influence opinion and policies, ultimately working against democratic ideals. We already see this in a nascent form today, but it will likely evolve into such a pervasive narrative that the average citizen will not even be aware of it, unless they study history (assuming that ‘1984’-style revisionist history does not become the norm).”

Sasha Costanza-Chock, associate professor of civic media at Massachusetts Institute of Technology, wrote, “Core aspects of the democratic process are deeply stressed or broken. In the United States, we need significant reforms to enable broader and more meaningful participation in democratic decision-making, such as instant runoff or rank-order voting, expansion of voting days and times, expanded voting rights for formerly incarcerated people, campaign finance reform, rethinking the electoral college and much more. Unfortunately, most of these are extremely unlikely. Instead, we seem locked into an elitist and extremely expensive electoral system where the players with the most money and connection to wealthy backers rig the system to their advantage. In this context, many technological tools primarily advance those who can develop and customize them for their own ends – again, the biggest players. There are some countervailing forces such as the ability of insurgent candidates to leverage social media.”

Denise N. Rall, academic researcher of popular culture, Southern Cross University, New South Wales, Australia, said, “I believe technology will help the dictators that we now have stay on top and control more aspects of all of our lives, worsening the prospects for democracy as has already happened in most economic powerhouses of the world (U.S., Russia, China, and right-wing elections in Europe, the absurdity of Brexit in the UK, North Korea, etc.). I think environmental degradation will increase exponentially and people will be fighting over resources like energy, water and food quite soon. I do not think technology will have the power to change these outcomes without real desire by governments to reduce resource consumption and a global birth control program of some kind.”

An anonymous respondent commented, “China has the potential to stall trends toward democracy and regime change through increased monitoring of their citizenry and refinement of their ‘social credit’ legislation/monetization of following the whims of their single party. There is a potential for China to help prop up regimes in developing countries where they have vested interests by distributing such technologies to undemocratic regimes that want to remain in power. I think that India could go either way depending on whether or not widespread corruptions in their political environment exploit or are thwarted by increased access to technology and information by their citizenry.”

Richard Lachmann, professor of political sociology at the State University of New York-Albany, said, “Democracy will continue to weaken but technology is only a secondary factor. More important in the decline of democracy are the disappearance or weakening of labor unions, the growing power of corporations in all sectors due to mergers, extreme levels of inequality and the ability of the rich and of political actors to manipulate ‘veto points’ to paralyze government initiatives, which then increases citizens’ cynicism about politicians and lessens their participation. All of these preceded the expansion of the internet and will not be significantly lessened by citizens’ online activities.”

Vince Carducci, researcher of new uses of communication to mobilize civil society and dean at the College of Creative Studies, wrote, “Institutional changes are occurring more as a function of power and money rather than technology, particularly in the selection of candidates and in the judicial system. Those are more of a threat than technology.”

A cofounder of one of the internet’s first and best-known online communities wrote, “Democracy is under threat. The blame can’t ultimately go to the internet or to computer-aided automation or to artificial intelligence. The vast power of personal and corporate wealth to wield these technologies in support of their selfish interests will increasingly suppress egalitarian and democratic values.”

A research scientist for a U.S. federal agency wrote, “We are in a period of growing isolationism, nativism and backlash that will weaken democracies around the world, and it will probably have reached a peak by 2030. Although technology and online dissemination of information will be a tool of information and disinformation, and it will be a tool of policing populations, the underlying economic and environmental shifts are mostly responsible for changes resulting in weaker democracies.”

A retired professor commented, “Corporations will have more power over employees and customers. This will be achieved as part of the ongoing corporate takeover of democratic institutions, which U.S. President Eisenhower warned of long ago. Technologies of identification and surveillance will expand in usage, eating away at the private sphere of social life. Social media will continue to reinforce strong social ties among family and friends while reducing the formation of the weak social ties among acquaintances that support intergroup cooperation necessary in a diverse society. Worsening climate and its consequences for health, agriculture and infrastructure will create increasing irrational forms of blame and global conflict. Global conflicts will include electronic and biological forms of aggression against the militarily powerful countries. More citizen backlash is to be expected, but will likely be directed against inappropriate targets. Societies as we know them will stumble from disaster to disaster, toward a massive die-off of our species. I hope I’m wrong. I would like to see our species survive with its democratic values intact. I have grandchildren. I would like their grandchildren to inherit a better world than the one that our present technocratic capitalist economy is racing toward.”

Anonymous respondents commented:

  • “The internet under capitalism will only serve the few, not the many, and democracy will weaken as a result. The problem is about competitive economic imperatives rather than technological affordances.”
  • “It’s not the technology that will cause the changes, but the systems and structures that create various tech.”
  • “The loudest voices will continue to be those that are heard. While the media may change, the elite will still run everything.”
  • “Technology companies and governments have incentives to avoid doing things to address the damaging ways in which internet platforms damage democratic institutions.”
  • “Power corrupts. Look at the tech giants today – manipulation and propaganda. They are elitists who think they know best.”
  • “The combination of big data and supercomputing power seems to be having a negative effect on democracy, and I see no signs that that can be effectively policed or regulated, particularly given the power (and data troves) of very large internet companies and of governments.”
  • “I do not believe that governments understand the tools, and they will fail repeatedly to regulate or organize them properly; I also do not have faith the private companies are democratic, and therefore they are apt to reinforce capitalism alone, not democracy.”

Diminishing the governed: Digitally networked surveillance capitalism creates an undemocratic class system pitting the controllers against the controlled

Charles Ess, professor of digital ethics, at the University of Oslo, said, “Democracy – its foundational norms and principles, including basic rights to privacy, freedom of expression and rights to contest and conscientiously disobey – may survive in some form and in some places by 2030; but there are many strong reasons, alas, to think that it will be pushed to the margins in even traditionally democratic countries by the forces of surveillance capitalism, coupled with increasing citizen feelings of powerlessness against these forces, along with manipulation of information and elections, etc. Not to mention China’s increasingly extensive exports of the technologies of ‘digital authoritarianism’ modelled on their emerging Social Credit System.”

Rob Frieden, a professor of telecommunications law at Penn State who previously worked with Motorola and has held senior policy positions at the Federal Communications Commission and the National Telecommunications and Information Administration, said, “Technological innovations appear better suited for expanding government power versus improving the ability of individuals to evade surveillance. Across the entire spectrum of political ideology, national governments can justify increased budgets for ever-more-sophisticated surveillance technologies based on noble-sounding rationales, such as national security. Governments have little incentive and incur even fewer penalties when they fail to calibrate surveillance technology for lawful reasons. Innocent people will have reasonable privacy expectations eroded, particularly with technologies that have massive processing power and range coupled with an ambiguous mandate. Unless and until citizens push back, governments will use surveillance technologies to achieve goals beyond promoting national security. We risk becoming inured and numbed by ubiquitous surveillance, so much so that pushback seems too difficult and unproductive.”

Gina Neff, senior research fellow, Oxford Internet Institute, studying innovation and digital transformation, wrote, “There is simply no reason to believe that technology can strengthen democracy. Western democracies are grappling with the power from the increased concentration of financial capital and its response in the form of the rise of populism. Without attention to strengthening our core technology and communications infrastructure, those forces will continue to damage how people participate in – and indeed make – democracy.”

Zizi Papacharissi, professor of communication and political science, University of Illinois-Chicago, responded, “Our present system of governance supports strong capitalism/soft democracy. Until this balance is reorganized, to support soft capitalism/strong democracy, any technology we create will continue to underserve democracy. In short, the technology we have created was designed to generate profit, not to support democracy. It is possible to do both. We just have not designed it that way, however. By 2030, we will see a weakening of democratic and political processes facilitated by technology. This will happen not because there is something inherently bad or undemocratic about technology. It is because most technology is designed, implemented and/or deployed through mechanisms that support a strong capitalist model that was created centuries ago and needs to be updated in order to be compatible with contemporary societies, democratic and non.”

John Harlow, smart-city research specialist in the Engagement Lab at Emerson College, said, “Although there is rising anti-monopoly sentiment, 2030 is soon, and the dominant digital commons for speech (Facebook, Twitter, YouTube) are likely to draw out (in the courts) any regulatory action to change their business models and/or practices. Currently, they are governed by algorithms designed to maximize ‘engagement’ time and thereby advertising revenue, and those algorithms have prioritized extreme content over accurate content (among other problems). This has enabled and supported the rise of the authoritarian far right the world over, and has destabilized faith and participation in democratic institutions and processes.”

An expert on online trust and identity active in the multistakeholder organizations that build and maintain the internet said, “Uses are shaped by social and economic factors that drive toward consolidation and control. Having created a perfect panopticon that maps every endpoint and every device on the network, and with the rise of middle-box collectors that use massive computing power to correlate identifiers, the end result will tilt toward command and control.”

An expert in socio-technical systems wrote, “Social media tech firms will continue to resist control and meaningful regulation in order to preserve their core business, aptly described by Shoshana Zuboff as ‘surveillance capitalism.’ The oligarchs, perhaps still aided by foreign interests, will continue to manipulate public opinion for their own benefit. Economic inequality will continue to increase, as will resentment, misdirected toward immigrants and the ‘elites.’”

An expert in human-computer design wrote, “The decay of democracy should be attributed foremost to capitalism itself, and thus only in a secondary way to technology. Capitalism seems overdue for major shock, enough so that predicting much of anything so far ahead as 2030 seems foolish. The present moment witnesses the close of a decade of ever-intensified distraction engineering.”

An expert in the law who previously worked for a U.S. government agency wrote, “Increasingly sophisticated marketing based on data and inferred data on every individual threatens to cross the line between persuasion and manipulation and coercion, and the First Amendment restraints on government will require a substantial degree of proof of coercion before the government will be able to intervene to safeguard individuals from clear overreaching. The threat of manipulation – and we saw the first signs of that in 2018 with the Cambridge Analytica fiasco – is real and growing. Whether industry or government can curb it is an open question. Industry of course has a conflict of interest – the more successful its manipulation is, the more money industry makes. And government has the restraints of the First Amendment that limit its role.”

Emilio Velis, executive director, Appropedia Foundation, said, “The way user participation has been shaped by technological platforms for the past 10 years turned the power of decentralized information back to the big corporations, platforms and stakeholders. Or, even worse, it has weakened individuals’ capacity to act while maintaining a false perception that they have control.”

Peter Lunenfeld, professor of design, media arts and digital humanities, University of California-Los Angeles, and author of “Tales of the Computer as Culture Machine,” wrote, “Commercial platform-driven communication technologies like Facebook, Twitter and their eventual successors are unlikely to strengthen representative democracy in the coming decades of the 21st century. They may add ‘voices’ to the conversation, but they will be unlikely to support and sustain the 20th century’s dominant forms of successful democracies – those that designated representatives to debate and legislate on their behalf, from coherent parties that had established ideologies and platforms. What we are starting to see is the development of dialoguing ‘communities’ that mimic the give and take of true democratic action without offering actual power to its participants, like the Italian Five Star Movement, or the emergence of personality-driven, single-issue pop-ups like Nigel Farage’s Brexit Party. Like Five Star and the Brexit Party, future political movements will use social media to offer the affordances of democratic dialogue without actually empowering participants to control or direct the movements. Social media technologies are creating skeuomorphs of democracies; they will have design attributes that look and feel democratic, but they will be authoritarian to the core.”

An anonymous respondent commented, “The degree of tracking of comments by individuals will increase dramatically in the future as DeepMind-style algorithms are applied to internet-based material. It will become much harder for people to make comments without knowing that their attitudes are being logged and accumulated by organisations of all manner, so there will be a reluctance to speak one’s mind. Hence ‘free speech’ will be constrained and thus the democratic process hindered.”

A distinguished professor of electrical engineering and computer science who is an expert in the future of communications networks at a U.S. university wrote, “Social media makes it possible to reach voters in targeted ways and deliver information from a distance that is tailored to specific goals, rather than fostering local community discussion and participation. The lack of privacy in internet service platforms, along with artificial intelligence and big data, now make it possible for candidates to identify and influence voters in ways that could not have been imagined only a few years ago. Without corrective action (such as new election rules limiting the use of private citizen information), these new capabilities could lead to increased political instability and possibly the breakdown of entire democratic systems. The U.S. appears to be the first such casualty in the Western world.”

Sam Adams, a 24-year veteran of IBM now working as a senior research scientist in artificial intelligence for RTI International, architecting national-scale knowledge graphs for global good, said, “The internet provides a global megaphone to everyone in that anyone can publish their opinions and views instantly and essentially for free. The problem with everyone having a megaphone is that we get drowned in more noise than useful information. This is even more problematic since interest groups from all sides have used their power and resources to amplify their own voices far above the average citizen, even to the point of effectively silencing the average citizen by burying their smaller voice under a landslide of blaring voices controlled by wealthy interest groups. Given the interest-driven news cycles and echo chambers of social media, only the loudest or most extreme voices get repeated. This further exacerbates the level of emotion in the public discussion and drives listeners to the extremes instead of more common ground. A democracy must fairly represent its people’s views if it is to succeed. And part of that fairness in this technology-dominant world must include balancing the volume of the voices.”

Philip Rhoades, a business futurist and consultant based in Australia, wrote, “The neoliberal, developed Western world is sliding into fascism as the world’s sixth mass extinction reaches its inevitable conclusion. As this ecological collapse and political regression proceeds, modern technology will mostly be used for suppression of the great majority of people/citizens. Some technology may help defend the populations against state suppression and terror, but its effectiveness will be minor in the greater scheme of things.”

David Noelle, professor and researcher into computational cognitive neuroscience, University of California-Merced, wrote, “In the U.S., policy and public opinion have been increasingly shaped so as to support powerful interests rather than the interests of the people. Regulation is dismissed as a threat to our troubled economy, encouraging corporate powers to pursue dangerous short-sighted strategies for producing return for investors. The unrepresented have been all but muted by electoral processes designed to sustain those in power. The most influential technologies of our times have been designed to depend on large centralized infrastructure. Data drives many new innovations, and few are in a position to collect and aggregate extensive data on the people. The focus on technologies that depend on controllable infrastructure, whether privately held or manipulated by political powers, will strengthen the positions of those currently in power, increasingly limiting the ability of the people to demand democratic representation. Note that this opinion is not intended as a call to limit technology but as a cry to radically alter political and economic institutions so as to provide representation to all of the people. A more democratic system will produce more democratic technologies.”

Deirdre Williams, an independent internet activist based in the Caribbean, commented, “We are being taught that convenience is the most important priority. ‘Innovation’ is killing ingenuity. I would expect that over the next 10 years the pendulum will swing in the opposite direction, but it will take a while to repair the divide that has been (deliberately?) introduced between citizen and government, and to remind governments of their duty of care to all of the citizens.”

Giacomo Mazzone, head of institutional relations, European Broadcasting Union and Eurovision, wrote, “I don’t believe that internet platforms will be able to self-reform, despite all announcements and efforts shown. And so only a break-up solution or ‘publicization’ of the internet giants could change the future. The amount of power that citizens and states have transferred to these actors – which are accountable to nobody, not even the U.S. government – is too big to think that they could renounce it voluntarily. Do you remember ‘Sliding Doors’ – the 1998 movie with Gwyneth Paltrow in the leading role? The future could (in a 50/50 chance) go totally wrong or fantastically well. A digital interconnected society based on trust and respect of individual and human rights could be the next arcadia. A digital interconnected and mass-surveillance-oriented society based on exploitation of human weakness and on polarization of society could be the perfect implementation of the Orwellian dystopia of ‘1984.’ The two futures are equally possible. It’s up to government and civil society to decide in which direction we shall go.”

Scott B. MacDonald, an experienced chief economist and international economic adviser, said, “The future has a very real potential to be a dark Orwellian place, transfixed between strong technology under the control of a few wealthy and powerful and the great unwashed masses made economically redundant by machines and waiting for their daily dose of Soylent Green. One big change is that people may no longer have to go and vote but vote from hand-held or implanted communications devices. If we are not careful technology will be a device for greater control, not democracy, much as in China. Facial recognition anyone?”

Estee Beck, author of “A Theory of Persuasive Computer Algorithms for Rhetorical Code Studies,” commented, “Unless Congress takes action and passes protective consumer legislation to limit private industry powers with technological growth, i.e., surveillance and privacy erosion, democratic institutions will face greater dangers from domestic and foreign threats, loss of trust among the American public and devaluation of private technological companies among the marketplace. The infrastructure of technology, with faulty programming that allows for penetration and deep hacks, the decisions made now with select leaders in technology companies driving pro-China surveillance growth, anti-U.S. and Mexico relations via border surveillance, marketing of biosecurity technologies and the eventual promotion of artificial intelligence consumer goods and services will divide the faith of the nation and leave the American public ill-trusting of Congress to take action for the public good.”

Matt Colborn, a freelance writer and futurist based in Europe, said, “I do not deny the potential for technology to strengthen or even revolutionise democracy. In fact, this is what I hoped for at the beginning of the revolution in the 1990s. However, from a citizen perspective, the new technology seems to me to have already reduced mental autonomy and the capacity for intelligent choice. Why? 1) Platforms like YouTube seem to be more appropriate for distributing propaganda and for involuntary brainwashing because of the algorithms used. 2) Extreme tribalism has also increased because of the ‘echo chamber’ nature of personalised media. 3) Government and corporations are demolishing any kind of privacy. Neurotech, where thoughts are read, is the ‘final frontier’ of this. The problem, too, is the toxic interaction between archaic authoritarian institutions, right-wing populism and new tech. These effects mean that democracy is diluted whilst a ‘surveillance’ state is strengthened and while deep tribal divisions are exacerbated. Although there are certainly counter movements to this, economic inequality is such that basically the rich and powerful are in a position to cash in on these developments and the rest of us are not. Those who want political innovation will find it tough in this environment.”

Democratic regimes could become less democratic from the misuse of surveillance systems with the justification of national security. Anonymous respondent

An artificial intelligence expert predicted, “‘Democracy’ is likely to be even more of an elitist endeavor by 2030 than it is now. Life is good if you’re a big corporation, but not if you’re an ordinary working-class citizen. Who has a voice in this world will depend even more on money and power. Civic technologists will first promise to save democracy with technology but then start charging for it after five years because ‘someone has to pay for maintenance.’ And they will get away with it, because no one will remember that political rights are a basic right and not a commodity.”

An anonymous respondent wrote, “Recently Hong Kong protesters had to buy single-trip transit cards with cash to be able to exercise democratic power; this will be impossible when mass face-recognition technology is implemented. Essentially, it is becoming almost impossible to behave democratically.”

  • “Technology is going to aggregate people’s individual voices and remove individual democracy.”
  • “Democratic regimes could become less democratic from the misuse of surveillance systems with the justification of national security.”
  • “I am sadly confident that democratic institutions will not be affected in any positive way in the future by citizens’ perspectives; instead, technology will continue to create disenfranchised, disempowered citizens.”

Exploiting digital illiteracy: Citizens’ lack of digital fluency and their apathy produce an ill-informed and/or dispassionate public, weakening democracy and the fabric of society

James S. O’Rourke IV, a University of Notre Dame professor whose research specialty is reputation management, said, “As Neil Postman wrote in 1985, ‘We no longer engage in civil public discourse. We are simply amusing ourselves to death.’ Among the more insidious effects of digital life has been a reduction in tolerance for long-form text. People, particularly the young, will read, but not if it involves more than a few paragraphs. Few among them will buy and read a book. News sites have discovered that more people will click on the video than scroll through the text of a story. Given how easy it now is to manipulate digital video images, given how easy it is to play to people’s preconceptions and prejudice, and given how indolent most in our society have become in seeking out news, opinion and analysis, those who seek to deceive, distract or bully now have the upper hand. Jesuits have long cautioned that ‘No man can understand his own argument until he has visited the position of a man who disagrees.’ Such visits are increasingly rare. The long-predicted ‘filter bubble’ effect is increasingly visible. People will simply not seek out, read or take time to understand positions they do not understand or do not agree with. A sizeable majority now live with a thin collection of facts, distorted information and an insufficient cognitive base from which to make a thoughtful decision. Accurate information is no longer driving out false ideas, propaganda, innuendo or deceit.”

Bernie Hogan, senior research fellow, Oxford Internet Institute, said, “Technology without civics is capitalism with crystallised logic and unbounded scope. Democratic institutions and civic societies are premised on boundaries and intelligible scales, like the ‘local paper’ or the ‘provincial radio.’ Technology is allowing for the transcendence of scale, which we might think is great. Certainly, from a logistics and delivery side it is very impressive. But social cohesion requires levels of understanding that there’s a coherent bounded population to care about and define one’s identity through and against. It requires people seeing and doing things as more than consumers and occasional partisan voters.”

People don’t know what to believe, so they often choose either to believe nothing or to believe whatever their gut tells them. Research scientist

Larry Rosen, a professor emeritus of psychology at California State University-Dominguez Hills, known as an international expert on the psychology of technology, wrote, “I worry that many in the public do not and will not have the skills to determine truth from fiction, and twisted truth can and does lead to misunderstanding the content.”

Carolyn Heinrich, professor of education and public policy at Vanderbilt University, said, “As internet content is increasingly customized for us by who we know and where we click, the range of information and perspectives we are exposed to will narrow unless we make the effort to read more widely ourselves. To minimize the negative effects, we have to proactively make the effort to broaden our circles of communication and sources of information/knowledge. As technology increasingly pervades our K-12 school curricula, we also need to examine exactly what technology vendors are conveying in their content, and who is the ‘face’ of that content in instructional videos. That is something we are currently investigating in our research.”

Cliff Zukin, professor of public policy and political science, Rutgers University, responded, “In the U.S. anyway, increasing political apathy has accompanied increasing use of technology. It has, on the one hand, diverted attention from matters of governance and citizenship. On the other, the centrifugal forces of interests made more available by increasing technology have eroded the core knowledge base of citizens, as well as the norms of citizenship. It does allow for mass movements to organize more quickly and put pressure on leaders, but the right-wing, post-recession populism and withdrawal from globalism is not, in my judgment, a good thing.”

An anonymous respondent said, “Unfortunately, fundamentally undemocratic processes in the United States, like the electoral college, will continue to be undermined by fake news and technology-backed manipulation of rural states, which have outsized electoral college voting power but typically lack education and will likely remain vulnerable to such exploits.”

A fellow at a major university’s center for internet and society wrote, “I am worried that the ease with which hostile powers and trolls can manipulate public opinion will only increase and become more sophisticated, leading to voters having increasingly lower levels of factual information at their disposal or, worse yet, increasing apathy toward or cynicism about voting and the democratic process entirely.”

Eric Royer, assistant professor of political science, Saint Louis University, said, “The breakdown of norms creates an environment of false truths that is directly tied to political polarization, especially among the fringes, and citizen mistrust and apathy with anything ‘government.’ Technology, especially social media platforms, holds unlimited potential to make the world less of an unfamiliar place; however, its manipulation and influence in our daily lives are truly misunderstood, at the current expense of democratic processes and institutions globally and domestically.”

A research scientist focused on fairness, transparency and accountability in artificial intelligence said, “The rise of fake news and manipulated media like deepfakes has sown a greater distrust of media and institutions that is undermining democracy, leading to a less-informed and less civically engaged population. People don’t know what to believe, so they often choose either to believe nothing or to believe whatever their gut tells them. Moreover, foreign actors that use social media manipulation tactics to sway elections further undermine democracy’s legitimacy.”

Continuous media weakens people’s ability to seek information and form their own opinion. Gretchen Steenstra

Mark Andrejevic, associate professor of communications, University of Iowa, wrote, “Much of my career has been built around my profound concerns about the impact that technology is having on democratic processes of deliberation, public accountability and representation. This is because technology needs to be understood within the context of the social relations within which it is deployed, and these have been conducive to privileging an abstract consumerist individualism that suppresses the underlying commitment to a sense of common, shared or overlapping interests necessary to participation in democratic society. I see the forms of hyper-customization and targeting that characterize our contemporary information environment (and our devices and mode of information ‘consumption’) as fitting within a broader pattern of the systematic dismantling of social and political institutions (including public education, labor unions and social services) that build upon and help reproduce an understanding of interdependence that makes the individual freedoms we treasure possible. Like many, I share concerns about rising political polarization and the way this feeds upon the weaponization of false and misleading information via automated curation systems that privilege commercial over civic imperatives. These trends predate the rise of social media and would not have the purchase they do without the underlying forms of social and civic de-skilling that result from the offloading of inherently social functions and practices onto automated systems in ways that allow us to suppress and misrecognize underlying forms of interdependence, commonality and public good. I am not optimistic that anything short of a social/political/economic disaster will divert our course.”

Carlos Afonso, an internet pioneer and digital rights leader based in Rio de Janeiro, Brazil, wrote, “Thinking here of a planet with 7 billion-plus persons, most of them (including many of the supposedly ‘connected’) are unable to discern the many aspects of disinformation that reaches them through traditional (entrepreneurial) media, social networking apps and local political influences.”

A longtime CEO and internet and telecommunications expert commented, “Citizens will increasingly act absent of any understanding of critical analysis and reasoning, fact-checking or even rule of law. Under the guise of ‘acting out against injustice’ we will continue to see cyber vigilantism, whereby social media firestorms effectively ‘try and convict’ anyone accused of word or deed not supportive of their values.”

Gretchen Steenstra, a technology consultant for associations and nonprofit organizations, wrote, “I am concerned about the higher velocity of information that does not include all critical and supporting information. Data is used to inform one view without context. Consumers do not fact-check (on many issues, regardless of party). Americans are not focused on social responsibility or downstream impacts – they only want instant results. Continuous media weakens people’s ability to seek information and form their own opinion. Constant connectedness prevents reflection and does not allow your brain to relax. No one can argue with the desire for understanding.”

A fellow at a think tank’s center for technology and innovation wrote, “Democracy will be driven by more artificial intelligence systems, which will automate a range of decisions. Consequently, individuals may have limited input into their own decisions because data will be extrapolated from machines. What this will mean is a looser connection to democratic processes or connections driven by what one sees, hears and senses through dominant platforms. Without some level of policy restraint when it comes to specific use cases, such as voting, technology may serve to erode public trust, while simultaneously relying less on actual public input due to the level of sophistication that emerging technologies offer.”

Ayden Férdeline, technology policy fellow, Mozilla Foundation, responded, “Technology will continue to be exploited by those who seek to increase political apathy and undermine our trust in established institutions. This may happen more subtly than in the past, but the corrosive effect on democracy will be just the same.”

The internet amplifies trends that have been with us for a while – extremism and apathy. Pamela McCorduck

Philip J. Salem, professor emeritus, Texas State University, expert in complexity of organizational change, said, “People will become increasingly more careful about how they use the internet. Each person must be more mindful of use. My concern is that reflexive, non-mindful reactions can spread so fast and have more tragic consequences with the speed of the internet.”

Jeff Johnson, a professor of computer science, University of San Francisco, who previously worked at Xerox, HP Labs and Sun Microsystems, said, “Today’s social media encourages the spread of unverified information, which can skew policymaking and elections. People tend to be lazy and do not even read most of the articles they comment on, much less check the truth of the articles. In the TV era, before social media, putting out false information about a political opponent or ballot measure was expensive and subject to laws against ‘false advertising.’ Political hit pieces had to be well-funded, vaguely worded and carefully timed (to just before the election) in order to sway elections. That is no longer true. Strong regulation of social media could perhaps mitigate this, but such regulation seems unlikely in the foreseeable future.”

Pamela McCorduck, writer, consultant and author of several books, including “Machines Who Think,” said, “I am not sanguine about democracy right now. The internet amplifies trends that have been with us for a while – extremism and apathy. Our proportion of potential voters who actually vote only rose once or twice in the past few elections. Mostly it is dismal. Partly this is a result of voter suppression (not just removing voters from the rolls, but also making the process of voting far more cumbersome than it needs to be). Partly this is the realization by voters that elected officials are more beholden to dark money than to the people who elected them. I hope I am wrong about the future of this country I love.”

Luis German Rodriguez, researcher and consultant on knowledge society and sociotechnical impact based at Universidad Central de Venezuela, commented, “Democracy is likely to be weakened by 2030. … Authoritarian rule seems to be growing stronger wherever you look, supported by the emerging technologies.”

  • “People will not use the internet to research the issue, rather, they will simply go with whatever biased opinion is put in front of them.”
  • “The problem is that with the erosion of critical-thinking skills, the blurring of true journalism versus opinion journalism (and the prevalence of ‘sound bites’ in lieu of serious debate based on facts) and the lack of proper policy and governance principles, these tools are being used to spread false information.”
  • “The public, made more gullible by short attention spans and eroding reasoning skills, becomes a malleable target for those who seek to erode the fundamental institutions of our democracy.”
  • “I’m less concerned about technology than I am the ability and willingness of my fellow citizens to educate themselves about the sources of information they consult.”
  • “The biggest threat to democracy is people’s lack of critical-thinking skills to be able to distinguish between information and misinformation.”

Waging info-wars: Technology can be weaponized by anyone, anywhere, anytime to target vulnerable populations and engineer elections

Richard Bennett, founder of the High-Tech Forum and co-creator of Ethernet and Wi-Fi standards, wrote, “The economic model of social media platforms makes it inevitable that these tools will do more harm than good. As long as spreading outrage and false information generates more profits than dealing in facts, reason, science and evidence, the bad guys will continue to win. Until we devise a model where doing the right thing is more profitable than exploiting the public’s ignorance, the good guys will keep losing. … One hypothetical change that I would like to see would be the emergence of social media platforms that moderate less for tone and emotion and more for adherence to standards of truthfulness and evidence. Making this approach succeed financially is the major obstacle.”

Mutale Nkonde, adviser on artificial intelligence at Data & Society and fellow at Harvard’s Berkman Klein Center for Internet and Society, wrote, “Without significant regulation, our future elections will be ruled by the parties that can optimize social media recommendation algorithms most effectively. In the present moment, those are parties like Cambridge Analytica who used fear, racism and xenophobia to influence elections across the world.”

Eduardo Villanueva-Mansilla, associate professor of communications at Pontificia Universidad Catolica, Peru, and editor of the Journal of Community Informatics, said, “The lack of agreement about how to deal with these issues among governments is a serious threat to democracy, as much as the potential for misuse of technological innovations. In the next decade, the complete control by a few multinational firms will be completely outside of regulatory and policy reach of developing countries’ governments. This will increase the instability that has been normalized as a feature of governance in these countries.”

This is not like armed revolution; this is small numbers of employees able to affect what thousands, if not millions, see. Rich Salz

An expert in the ethics of autonomous systems based in Europe said, “Digital devices provide more and more new means to enhance the power of leaders to control people and to manipulate an inferior substitute for democracy to their benefit. They simulate and broadcast false flavours of democratic representations to the population. Decisions that restrict people’s rights, autonomy and freedom are promoted as necessary for enhancing the security, care and well-being of the population, while in fact the purpose is to protect the interests of those who seek power and influence. New digital means (biometrics, facial recognition, big data, deep learning, artificial intelligence) allow those in power to recognize and to profile people (position, behavior, location, ways of thinking, ideas, political opinions, level of life, health, origins, money, social relationships and so on). Stakeholders can use these devices to make appropriate decisions concerning what they consider subversive people and moreover to fight them if necessary. Robots and autonomous AI systems will be very efficient slaves to help to educate people who will not fit the requirements and rules imposed by the dominant class. This model will be developed in more and more states in the world and will progressively narrow freedom and decrease the quality of life of ordinary people belonging to medium and low social classes. At the same time, the field of available jobs will be more and more narrow because AI and robots will replace human beings in most areas and lead the majority of people to be unable to find means to work to support and fulfill themselves.”

Larry Masinter, internet pioneer, formerly with Adobe, ATT Labs, Xerox PARC, who helped create internet and web standards with IETF and W3C, said, “Traditional democracy and democratic institutions rely on geographically defined boundaries for constituencies. Enabling technology will accelerate the rise of cross-jurisdictional malfeasance, whether it’s called collusion or something else.”

An anonymous respondent warned, “Authoritarians will weaken checks and balances, turn courts into extensions of those in power and thus undermine representative democracy – enabled by the manipulation of digital media to stoke fear and mask inconvenient truths. … Extreme partisanship is putting all of our democratic institutions at risk to the point that shared power and orderly transitions may not exist in 10 years. Civil unrest seems inevitable.”

Rich Salz, senior architect, Akamai Technologies, wrote, “Individual citizens cannot stand up to the organized ‘power’ of other countries. This is not like armed revolution; this is small numbers of employees able to affect what thousands, if not millions, see.”

Heywood Sloane, entrepreneur and banking and securities consultant, said, “The current U.S. administration is leading the way to misuse technology. It permeates the public air with disinformation and lies, while putting a heavy hand on the scale in the background. It welcomes trolls to conferences in the White House and encourages them. Even if the administration changes it will take time and work to undo the damage. Media technology corporations have lost control of their platforms and marketing staffs – witness Facebook and Cambridge Analytica. Already we have rogue state sponsors altering our dialogues, yet we ignore them and chortle away with their leaders.”

An associate dean of research for science and engineering said, “Over the next 10 years, we will see an increase in the current trend of using technology to further engineer elections (including gerrymandering) and to target those most vulnerable to manipulation (on all political sides). A result is overrepresentation in elected government of self-interested minority points of view (extremes on many sides), increased obstacles to ousting parties from power (especially in two-party systems like the U.S.), and, for a while at least, the continued divisiveness of political discourse.”

A consultant who works for U.S. government agencies said, “The biggest fear of technology will be the use of artificial intelligence. While at present we have control of AI, in time we will lose that control. As systems are augmented with AI, it will remove the human element over time. We can say what we like about technology and our control of technology, but in time external forces will replace the human element. This will happen in all areas of technology, including the governmental technology world. At some point it will go beyond its own programming, doing what it believes is in our best interest.”

Sowing confusion: Tech-borne reality distortion is crushing the already-shaky public trust in the institutions of democracy

The leader of a technology innovation group at one of the world’s top five technology organizations wrote, “Technology has already and will continue to place huge strains on democracy. First, digital technology makes it immensely easy for a small number of leveraged actors to exercise great control over our public discourse. We see this as they exercise control over the information made available and presented to citizens. Second, digital technology makes it immensely easy for actors to hide or obscure their involvement and their intent. Third, digital technology makes it immensely easy to erode truth through fabrications or amplifications.”

Hate, polarization, oversimplification and lack of well-considered thought are and will be on the increase. Alejandro Pisanty

Nigel Cameron, president emeritus, Center for Policy on Emerging Technologies, said, “I fear deepening distortions in public perception by the leveraging of digital media on the part of governments (our own and foreign), tech corporations and other actors – as new technologies like fake video make it even easier to shape opinion. It will be some time before (assuming it happens) we have the will and the tech to rein in these abuses. As things stand, partisanship by politicians and the ‘sorry, not sorry’ approach of Mark Zuckerberg and the other tech leaders portend deepening problems.”


Alejandro Pisanty, professor at UNAM, the National University of Mexico, and an activist in multistakeholder internet governance, wrote, “Hate, polarization, oversimplification and lack of well-considered thought are and will be on the increase. They are orders of magnitude easier to construct and propagate than the ways of countering them (the ‘bullshit asymmetry’ principle, on steroids). Manipulation of elections and other processes will continue to be rife as long as there exist those who want to do it and those susceptible to manipulation. Among the hardest hit will be the U.S., which has a gullible population unable to see the meta-layers of attack they are subjected to. There is hope for improvement in a smaller, smarter, more-democratic sector of society fighting the acritical reactions of the naive and uneducated. Better information, resilient systems (by design) and deliberations nested at all levels from the ultra-local to the global, an architecture of multistakeholder deliberations and decisions, and a lot of luck, may lead to improvement. Otherwise splintering and other forms of dark days loom.”

Rich Ling, professor, Nanyang Technological University, Singapore; expert on the social consequences of mobile communication, said, “The forces that want to confuse/undercut legitimate information are learning how to best use these systems. They are also learning how to calibrate the messages they send so as to enhance their divisiveness. This division plays on confirmation bias and, in turn, undercuts the common ground that is needed for effective governing and democracy.”

Karl Auerbach, chief technology officer, InterWorking Labs, active in internet design since the early 1970s, had less faith in multistakeholder organizations, writing, “Democracy is dying at the hands of a concept called ‘stakeholder.’ This has little to do with technology except that people are being led to believe that they are not skilled enough or smart enough to decide for themselves, that technological experts ought to decide on their behalf. We are moving toward not improved democracy (direct or indirect) but closer to an oligarchy of ‘stakeholders.’”

Glyn Moody, a prolific technology journalist, blogger and speaker based in Europe, said, “Lies propagate more easily than truth. It is proving far easier to use the latest technology to undermine the things we thought were safe and stable. It is proving very hard to counter that abuse of technology.”

A computing science professor emeritus from a top U.S. technological university wrote, “As artificial intelligence technologies are employed to create ever-more-realistic disinformation videos and as multiplication of software AI disinformation bots can be replicated and spread easily by individuals or small groups, more and more people will be fooled by disinformation, thus weakening our democracy.”

A professor of sociology at a major California university said, “Powerful governments and their allies are using technology to destroy the concept of a single, accepted truth. While not always succeeding in implanting particular beliefs in the minds of citizens and residents, the constant assault on truth leads to fatigue and resignation, that the actual truth cannot be known, or that all political actors are equally bad. This resignation, moving into apathy, allows those in power to behave badly and centralize their power. The wild card is whether new technologies can detect bots and fake video/audio, and whether mainstream media and social media companies behave responsibly to bring an accepted truth back to life.”

Alan Honick, project director for PROSOCIAL, said, “My work is focused on the need to make the internet and associated information technologies trustworthy and reliable. … The most important variable for the question at hand is whether or not information technology can move in the direction of becoming a trusted and reliable source of information, and at present the trend seems to indicate not.”

Annemarie Bridy, professor of law specializing in the impact of new technologies on existing legal frameworks, said, “Social media platforms have a steep hill to climb over the coming years when it comes to dealing effectively with disinformation and coordinated inauthentic behavior aimed at manipulating voters and electoral outcomes. Viral disinformation online will continue to be a serious threat to democratic institutions and the integrity of elections.”

Garth Graham, a longtime leader of Telecommunities Canada, said, “The digital age is characterised by a disintermediation of authority. Authority as a principle for structural organization is disappearing. Democracy is predicated by the agreement to accept authority to represent. Most people are no longer willing to accept that anyone else can represent them.”

Stephanie Fierman, partner, Futureproof Strategies, said, “Many parties have an incentive to issue false and damaging statements and content that people believe. Until we return to a world in which a fact is a fact is a fact, we will see a continuing degradation of truth and the existence of checks and balances, both of which being so vital to the presence of democracy.”

Stuart Umpleby, retired professor of management and director of research at George Washington University, commented, “The operators of social media platforms, such as Facebook, need to take responsibility for content. Otherwise they benefit by distributing falsehoods.”

Satish Babu, founding director of the International Centre for Free and Open Source Software, said, “If the world does not recognize the pitfalls and take corrective action, technology is likely to adversely impact the quality and practice of democracy. In particular, the pragmatics of democracy will deteriorate into an ‘anything goes,’ free-for-all fight where artificial intelligence will be used to dig up or magnify or even create antecedents of candidates from historical records and social media will be used to push such ‘facts’ to every citizen.”

A professor of sociology and public policy wrote, “Bot armies and databases of persuadable people that include information on what sets them off empower the worst nationalistic and international actors to tear down democracies. Via technology, people can enter alternate realities where others reinforce their fantasies and strengthen them – flat earthers, those who believe in vaccine and climate conspiracies, moon landing hoaxers and so forth. These are problematic in their own right, but also lend themselves to further manipulation, destruction of trust in institutions, scapegoat seeking, and the rejection of science.”

Filippo Menczer, a grantee in the Knight Foundation’s Democracy Project and professor of informatics and computer science at Indiana University, said, “Technology … mediates our access to information and opinions. This will in part strengthen democracy, for example making it easier to check facts. It will also weaken democracy, as vulnerabilities due to the interplay of cognitive, social and algorithmic biases continue to be exploited and new ones are discovered. On balance, my prediction is that things will get worse before they get better. We are only just beginning discussions about the legal implications of countermeasures, for example the issues related to social bots, disinformation campaigns, suppression of speech and the First Amendment in the U.S.”

Nancy Heltman, manager of a state agency based in the U.S., wrote, “The negative aspects of bots and influencers driving opinions are likely to outweigh the positive aspects of increasing involvement in the political process.”

David Gans, musician, songwriter and journalist, said, “I fear that deliberate falsehoods will continue to crowd objective reality out of the discourse. The social networks seem neither able nor particularly willing to intervene on behalf of the truth, and there are powerful and well-funded entities with a strong interest in misinforming the public.”

A research leader for a U.S. federal agency said, “Working to be respectful of First Amendment rights while not allowing the perpetuation of mis- or disinformation is of critical concern. I don’t expect that to be resolved within the next 10 years. We are living in the times of 50 shades of gray. In many cases, the determination is not black and white. The headline may be misleading, but not entirely untrue. I think that’s appealing to the media right now.”

Kenneth R. Fleischmann, associate professor at the School of Information at the University of Texas-Austin, wrote, “Technology will have complex effects on society that will be difficult to predict, that depend on the decisions of tech companies, governments, the press and citizens. … Trust will be key, not just blind trust, but trust based on transparent provenance of information that can help users exercise their autonomy and agency.”

  • “Technology will weaken our ability to come to consensus; by nurturing smaller communities and fringe ideas, it will make compromise and finding a modus vivendi much more difficult.”
  • “Social media will continue to erode faith in facts and reason; echo chambers and emotion-driven communications plus security problems in voting will undermine public discourse and faith in elections.”
  • “There seems to be no realistic way to check the effects of IT on polarization and misinformation. The true beliefs and actions of political leaders will continue to have decreasing influence on voting.”
  • “Foreign countries and hate groups will grow more sophisticated in their ability to infiltrate the web with biased stories and ads designed to suppress or sway voters and negatively impact public opinion.”
  • “While it enables voices to be heard, tech has already weakened democracy by enabling governments and corporations to erode privacy and silence those who might otherwise speak out.”
  • “We don’t need mass armies anymore. New technology enables centralized control to a degree never imagined before.”
  • “In 2030, there will still be splintering and increased political polarization as individuals are able to challenge democratic ideals and influence political processes through anonymous activities.”
  • “Democracy is, and will always be, filled with fake news and preposterous bloviation.”

Weakening journalism: There seems to be no solution for problems caused by the rise of social media-abetted tribalism and the decline of trusted, independent journalism

Christopher Mondini, vice president of business engagement for ICANN, commented, “The decline of independent journalism and critical thinking and research skills resulting from easy reliance on the internet make citizens more susceptible to manipulation and demagoguery. A growing proportion of politically active citizens are digital natives with no recollection of life before social media became the primary medium for debate and influence. The pursuit of clicks, retweets and page views encourages extremist or provocative rhetoric. Viral memes and soundbites distract from thoughtful analysis, deliberation and debate. Of course, the vast majority of citizens are not politically active, but they increasingly consume news and adopt a worldview shaped by their online communities. Participation in political processes may rise because of newly inflamed passions brought about by online discourse, but they may crowd out more measured voices.”

Yaakov J. Stein, CTO, RAD Data Communications, based in Israel, responded, “Social media as they are at present have a polarizing effect that destabilizes democracy. The reason is that advertising (and disinformation) is targeted at and tailored to people according to their preexisting views (as predicted based on their social media behavior). This strengthens these preexisting views, reinforces disparagement of those with opposing views and weakens the possibility of being exposed to opposing views. The result is that free press no longer encourages democracy by enabling people to select from a marketplace of ideas. Instead the right to free press is being used to protect the distribution of disinformation and being manipulated to ensure that people are not exposed to the full spectrum of viewpoints. Perhaps an even more insidious result is that people attempting to keep open minds can no longer trust information being offered online, but that free information online has led to the bankruptcy of traditional news outlets that spend resources on fact-checking.”

Rey Junco, director of research at CIRCLE in the Tisch College of Civic Life, Tufts University, said, “We can expect that attempts to influence public perceptions of candidates and elections are not only ongoing, but that they will continue to be successful. Technology use by citizens, civil society and governments will first weaken core aspects of democracy and democratic representation before there is a restructuring of technological systems and processes that will then help strengthen core aspects of democracy. There are two issues at play: 1) Ideological self-sorting in online spaces that is bolstered by algorithmic polarization and 2) The relative unwillingness of technology companies to address misinformation on their platforms. Individuals who get their news online (a larger proportion of whom are young, per Pew Research) choose media outlets that are ideologically similar and rarely read news from the opposing side (Flaxman, Goel, & Rao, 2018). In fact, these individuals are rarely exposed to moderate viewpoints (Flaxman, Goel, & Rao, 2018). Social media, in turn, allow for not just informational self-sorting as with online news, but such self-sorting is bolstered through algorithmic curation of feeds that promotes ideological separation. … Although major technology companies are aware of how misinformation was promoted and propagated through their networks during the 2016 elections and resultant congressional hearings on the topic, little has been done to mitigate the impact of such deliberate spreading of misinformation. Analyses from the security and intelligence communities show that state actors continue their attempts to manipulate public sentiment in social spaces, while the increased polarization of traditional outlets has minimized the impact of these reports. State actors are emboldened by the fact that the United States has not addressed the spread of misinformation through technological change or through public education.”

An associate professor of computer science who previously worked with Microsoft said, “I worry about three related trends: 1) the increasing decentralization of news generation, 2) the lack of easy-to-use, citizen-facing mechanisms for determining the validity of digital media objects like videos and 3) personalization ecosystems that increase the tendency toward confirmation bias and intellectual narrowing. All three trends decrease the number of informed voters and increase social division. Governments will eventually become less averse to regulating platforms for news generation and news dissemination, but a key challenge for the government will be attracting top tech talent; currently, that talent is mostly lured to industry due to higher salaries and the perception of more interesting work. Increasing the number of technologists in government (both as civil servants and as politicians) is crucial for enabling the government to proactively address the negative societal impacts of technology.”

Kenneth Sherrill, professor emeritus of political science, Hunter College, said, “When I’m pessimistic, I believe that the fragmentation of information sources will interact with selective attention – the tendency only to follow news sources that one expects to agree with. This will generate even greater polarization without any of the moderating effects and respect for democratic processes that come from genuine participation. This can lead to the collapse of democratic processes. Right now, I’m pessimistic. The 2020 election may be the test.”

Eric Keller, lecturer in international relations and U.S. foreign policy, University of Tennessee-Knoxville, wrote, “Social media will heighten the current strong polarization that we already have. This is mainly from ‘information stovepipes’ and mutually reinforcing narratives that demonize the opposition. This creates the danger of democratic institutions being degraded in the name of ‘saving’ them from the opposing political party.”

A Europe-based internet governance advocate and activist said, “If current trends continue, there won’t be a real democracy in most countries by 2030. The internet’s funding model based on targeted advertising is destroying investigative journalism and serious reporting. More and more of what is published is fake news. Citizens cannot make informed decisions in the absence of reliable information.”

The coordinator of a public-good program in Bulgaria wrote, “By 2030 we will still see fighting between small groups and communities that leads to extremes. This will give ground to governments to become more authoritative and build up even stronger control via the internet.”

Bill D. Herman, researcher working at the intersection of human rights and technology said, “The combination of news fragmentation, systematic disinformation and motivated reasoning will continue to spiral outward. We’re headed for a civil war, and the hydra-headed right-wing hate machine is the root of the problem.”

An internet pioneer and technology developer and administrator said, “The foundation of democracy is an informed public. By undermining the economic foundation of journalism and enabling the distribution of disinformation on a mass scale, social media has unleashed an unprecedented assault on the foundation of democracy. The decline of newspapers, to just highlight one downside, has had a quantifiable effect (as measured in bond prices) on governmental oversight and investor trust.”

A professor and expert in learning in 3D environments said, “The explosion in the volume of information has led to the majority of people tending to rely on or trust the major platforms to filter and distribute information rather than managing their own personal learning environments with feeds from trusted independent sources. … As the filtering mechanisms become more sophisticated and more personalized to the individual, the opportunities for the wealthy to manipulate opinion will become even greater. The democratic system depends fundamentally on free access to reliable information, and once this is gone the system will effectively become less and less democratic.”

Mike Douglass, an independent developer, wrote, “Facebook sold people on the idea that a race to accumulate ‘friends’ was a good thing – then people paid attention to what those ‘friends’ said. As we now know, many of those ‘friends’ were bots or malicious actors. If we continue in this manner, then things can only get worse. We need to reestablish the real-life approach to gaining friends and acquaintances. Why should we pay any attention to people we don’t know? Unfortunately, technology allows mis/disinformation to spread at an alarming rate.”

Eric Goldman, professor and director of the High-Tech Law Institute at the Santa Clara University School of Law, commented, “Our politicians have embraced internet communications as a direct channel to lie to their constituents without the fact-checking of traditional media gatekeepers. So long as technology helps politicians lie without accountability, we have little hope of good governance.”

Janet Salmons, consultant with Vision2Lead, said, “The internet, with unregulated power in the hands of commercial entities that have little sense of social responsibility, will continue to unravel Western-style democracies and civic institutions. Companies profiting from sales of personal data or on risky practices have little self-interest in promoting the kinds of digital and advanced literacy people need to discern between fact and fiction. In the U.S., the free press and educational systems that can potentially illuminate this distinction are under siege. As a result, even when presented with the opportunity to vote or otherwise inveigh on decision-making, they do so from weak and uninformed positions. The lowest common denominator, the mass views based on big data, win.”

A researcher and teacher of digital literacies and technologies said, “In the early internet days, there was a claim it would bring a democratization of power. What we’re seeing now is the powerful having larger and more overwhelming voices, taking up more of the space rather than less. This leads to polarization, rather than a free-flowing exchange of ideas. Anyone falling within the middle of a hot issue is declared a traitor by both sides of that issue and is shamed and/or pushed aside.”

An anonymous respondent commented, “Increased engagement is largely a product of the media environment, and – in places where the press is absent, restricted or has become blatantly politicized – that engagement will bear the marks of a distorted information environment.”

Responding too slowly: The speed, scope and impact of the technologies of manipulation may be difficult to overcome as the pace of change accelerates

Kathleen M. Carley, director of the Center for Computational Analysis of Social and Organizational Systems at Carnegie Mellon University, said, “Disinformation and deepfakes in social media as well as the ability of individuals and media-propaganda teams to manipulate both who is and can communicate with whom and who and what they are talking about are undermining democratic principles and practice. Technological assistants such as bots, and information tools such as memes, are being used in ways that exploit features of the social media and web platforms, such as their prioritization rules, to get certain actors and information in front of people. Human cognitive biases, and our cognitive tendencies to view the world from a social or group perspective, are exploited by social media-based information maneuvers. The upshot is that traditional methods for recognizing disinformation no longer work. Strategies for mitigating disinformation campaigns as they play out across multiple media are not well understood. Global policies for 1) responding to disinformation and its creators, and 2) technical infrastructure that forces information to carry its provenance and robust scalable tools for detecting that an information campaign is underway, who is conducting it and why do not exist.”

Jason Hong, professor of Human-Computer Interaction Institute, Carnegie-Mellon University, said, “Basically, it’s 1) easier for small groups of people to cause lots of damage (e.g., disinformation, deepfakes), and 2) easier for those already in power to use these technologies than those who need to organize. In the early days of the internet, new technologies empowered new voices, which led to a lot of utopian views. However, we’ve seen in recent years that these same technologies are now being used to entrench those already in power. We see this in the form of targeted advertising (being used for highly targeted political campaigns), analytics (being used for gerrymandering), disinformation and fake news (being used both domestically and by foreign powers, both unintentionally and intentionally) and filter bubbles where people can seek out just the information that they want to hear. All of this was possible before the internet, but it was harder because of natural barriers. We also haven’t seen the political effects of deepfakes and are just starting to see the effects of widespread surveillance by police forces.”

Mark Raymond, assistant professor of international security, University of Oklahoma, wrote, “Over the next 30 years, democracy faces at least three kinds of technology-based risks. First, actual or apparent manipulation of voting data and systems by state actors will likely undermine trust in democratic processes. Second, social media manipulation (by states and by political campaigns and other nonstate actors) will compound echo chamber effects and increase societal polarization. Decreased trust will heighten social conflict, including, but not limited to, conflict over elections. Third, ‘deepfakes’ will undermine confidence even in video-based media reports. Taken together, there is the risk that these trends could increase the willingness of voters to accept fundamentally authoritarian shifts in their politics. Absent that, it is still likely that increased polarization will make the operation of democratic systems (which are heavily dependent on mutual acceptance of informal norms) incredibly difficult.”

Emmanuel Edet, legal adviser, National Information Technology Development Agency, Nigeria, said, “The core concepts of democracy, representation, elections and tenure of government will be greatly undermined by artificial intelligence. The use of social media coupled with faceless artificial intelligence-driven opinions can manipulate popular opinion that will deny people the right to express their choice for fear of going against the crowd.”

Matt Moore, innovation manager at Disruptor’s Handbook, Sydney, Australia, said, “The issue is not that essential democratic institutions will change, it is that they will not change enough. Elections, voting, representatives, parties – none of these things will go away. They may mean more or less (likely less) than they used to. The number of democracies in the world is likely to decrease as weak or destabilised states fall into authoritarian populism. Western democracies will continue to age and grow more economically unequal. States like China will continue to grow in power, often using new technologies to control their populations. Everyone is talking up the potential of blockchain for democracy. This is mostly nonsense. The issue is not that people do not have the opportunity to vote enough. It is that no one really knows what that vote means. Many of those who vote – or rather, who do not vote – have no sense of what their vote means. Many of those who are voted for, also do not know what that vote means – which is why they rely on polling and focus groups. Deliberative democracy offers a potential new form of political engagement and decision-making – if (and this is a big ‘if’) it can be made to work beyond isolated experiments.”

Mike O’Connor, retired, a former member of the ICANN policy development community, said, “There is cause for hope – but it’s such a fragile flower compared to the relative ease with which the negative forces prevail. ‘A lie can get around the world while truth is getting its boots on’ – pick your attribution.”

A longtime technology journalist for a major U.S. news organization commented, “Our laws and Constitution are largely designed for a world that existed before the industrial age, not to mention the information age. These technologies have made the nation-state obsolete and we have not yet grasped the ways they facilitate antidemocratic forces.”

Hume Winzar, associate professor and director of the business analytics undergraduate program at Macquarie University, Sydney, Australia, said, “Corporations and government have the information and the technology to create highly targeted messages designed to favour their own agendas. We, as citizens, have demonstrated that we rarely look beyond our regular news sources, and often use easily digested surrogates for news (comedy shows, social media). We also seem to have very short memories, so what was presented as a scandal only a year ago is usual, even laudable, now. … None of this is new. The British and the U.S. have been manipulating foreign news and propaganda for many decades with great success, and the church before them. But now the scale and the speed of that manipulation is perhaps too great to combat.”

Ian Fish, ICT professional and specialist in information security based in Europe, said, “I expect the imbalance of power between the major global corporations and democratic national governments will increase to the detriment of democracy. I also expect non-democratic governments’ disruption of democratic norms to increase faster than the democracies can react.”

Puruesh Chaudhary, a futurist based in Pakistan, said, “Democracy needs to develop the capacity to negotiate in the interest of an ordinary citizen, who may not have direct influence on how key decisions play out in geopolitics but is invariably affected by it. The democratic institutions have to have systems that operate at the pace of technological advancements that have an impact on the society.”

Trust suffers when people’s infatuation with technology entices them away from human-to-human encounters

Several respondents argued that there are circumstances in which humans’ “slowness” is an advantage, but that technology is thwarting that side of life. They believe a major cause of the loss of trust is that many people now spend more time online in often-toxic environments than they spend in face-to-face, empathy-enabling non-digital social situations.

Angela Campbell, professor of law and co-director, Institute for Public Representation at Georgetown University, said, “We are just seeing the beginning of how technology is undercutting democracy and social relations necessary to a democratic society. We don’t have good ways of telling what is true and what is false, what is opinion and what is fact. Most people do not yet understand how power technologies (especially combined with a lack of privacy protections) allow them to be manipulated. In addition, as people spend more time using technology, they spend less time interacting with other people (in person) and learning important social skills like respect and empathy.”

Yves Mathieu, co-director at Missions Publiques, Paris, France, responded, “Technology creates new forms of communications and messaging that can be very rough and divisive. Some contributors are rude, violent, expressing very poor comments, insulting or threatening elected citizens. There will be a strong need for face-to-face format, as the technologies will not allow process of deliberation. There will be need for regular meetings with voters, in meetings where people will have the time and the possibility to exchange arguments and increase their understanding of each other’s position. Being associated with media, this will reduce the divide that we know today, as it will increase mutual understanding.”

An anonymous respondent commented, “The expanded use of technology with respect to the democratic processes will tend to weaken one of the most important aspects of democracy and the democratic processes – the use of technology instead of person-to-person dialogue seriously degrades (or removes altogether) meaningful dialogue and exchange of ideas between individuals. When individuals use technology to express their political views/opinions instead of having direct human interactions, these views tend to be more extremely stated than if that person is speaking a view/opinion to another person. Also, in many cases, if someone else expresses a different view from what the original individual expressed, the first person is much less likely to pay any attention to a view expressed using technology than if that view were expressed in a person-to-person discussion. Additionally, the increased use of technology for analyzing segments of society to ‘shape’ delivery of messages for particular segments will result in an increase of messages that distort the reality of the message or distort the results of what the message is describing.”

A futurist and consultant said, “Democracy currently has a crisis in global leadership. Without significant change in 2020, for which I am hopeful, I can’t hold a lot of hope for democracy in 2030. I’m afraid the question is not what will change, but what must change. Without changes in democratic institutions, the future of democracy itself is in question. There is an urban/rural split at work in tandem with a severe disparity in the distribution of wealth – with climate change overshadowing it all. Technology will have a hand in providing as well as impeding solutions.”

Arthur Asa Berger, professor emeritus of communications, San Francisco State University, commented, “People who use Facebook are affected in negative ways by a ‘net effect,’ in which they exhibit impulsivity, grandiosity, etc., as explained in my book, ‘Media and Communication Research Methods’ (Sage). Some young people text 100 times a day and never talk on the phone with others, leading to a radical estrangement from others and themselves. The internet is used by hate groups, neofascists, right-wing ideologues, terrorist organizations and so on.”

An anonymous U.S. policy and strategy professional said, “Technology allows the creation of a bullying environment that polarizes people to the point at which they do not attempt to understand other opinions or views, weakening public discourse and driving outrage and attacks on minority views.”

Japheth Cleaver, a systems engineer, commented, “At the moment, the major social media networks function not by neutrally and dispassionately connecting disparate communicators (like the phone system), but are designed to reinforce engagement to sell as many targeted ads as possible. This reinforcement creates resonant effects throughout a society’s culture, and in-person contextual interaction drops away in favor of the efficiencies that electronic communication offers, but without any of the risk of the ‘bubble’ of the like-minded being dropped, as that would hurt engagement. Internet as communications overlay is fine. Internet as a replacement for public space seems detrimental.”

Melissa Michelson, professor of political science, Menlo College, and author, “Mobilizing Inclusion: Redefining Citizenship Through Get-Out-the-Vote Campaigns,” said, “The future will include a complex interplay of increased online activity but also increased skepticism of those virtual interactions and an enhanced appreciation of offline information and conversations. As more adults are digital natives and the role of technology in society expands and becomes more interconnected, more and more aspects of democracy and political participation will take place online. At the same time, the increasing sophistication of deepfakes, including fake video, will enhance the value of face-to-face interactions as unfiltered and trustworthy sources of information.”

Other respondents shared related concerns:

  • “Unless there is transparency, tech will be the new digital atomic bomb – it has moved faster than individuals’ or the law’s understanding of its unintended consequences and nefarious uses.”
  • “At the current rate of disregard and lack of responsibility by those who own and run large tech companies, we are headed toward a complete lack of trust in what is factual information and what is not.”
  • “Public institutions move slowly and thoughtfully. People doing nefarious things move more quickly, and with the internet, this will continue to challenge us.”
  • “It is the personal and social norms that we’re losing, not the technology itself, that are at the heart of many of our problems. People are a lot less civil to each other in person now than they were just a few decades ago.”
  • “More access to data and records more quickly can help citizens be informed and engaged; however, more information can flood the market, and people have limited capacity/time/energy to digest information.”







Disability Benefits system to be overhauled as consultation launched on Personal Independence Payment

Government to reform the disability benefits system to ensure support is targeted at those most in need.


  • Consultation to be published today on proposals to move away from fixed cash benefit system towards tailored support
  • Comes as over 2.6 million people of working age are now receiving PIP, with monthly new claims almost doubling since 2019

Plans to make the disability benefits system fit for the future and overhaul the “one size fits all” approach are set to be published today (Monday 29 April), following the Prime Minister’s speech which set out the government’s wide-ranging ambitions for welfare reform.   

The Modernising Support Green Paper will explore how our welfare system could be redesigned to ensure people with disabilities and long-term health conditions get the support they need to achieve the best outcomes, with an approach that focuses support on those with the greatest needs and extra costs.

The UK’s health landscape has changed since Personal Independence Payment (PIP) was introduced in 2013 with the intention that it would be a more sustainable benefit that would support disabled people to live independently by helping with the extra costs they face.

However, the caseload and costs are now spiralling. There are now 2.6 million people of working age claiming PIP and DLA – with 33,000 new awards for PIP each month, more than double the rate before the pandemic. This is expected to cost the taxpayer £28 billion a year by 2028/29 – a 110% increase in spending since 2019.

This is in part fuelled by the rise in people receiving PIP for mental health conditions such as mixed anxiety and depressive disorders, with new awards for these conditions more than doubling from 2,200 to 5,300 a month since 2019.

Since 2015, the proportion of the caseload receiving the highest rate of PIP has increased from 25% to 36%. And many more people being awarded PIP now have mental health conditions than when it was first introduced.

In line with the wider reforms to ensure the welfare system is fair and compassionate, the Modernising Support Green Paper proposals centre on targeting and improving the support for those who need it most.

These ideas include removing the PIP assessment altogether for people with certain long-term health conditions or disabilities, including those with terminal illnesses, to reduce bureaucracy and make life easier for those most in need of support.

By more accurately targeting support, we will ensure the large scale of government expenditure on PIP translates into better outcomes for disabled people and those with health conditions.

Prime Minister Rishi Sunak said:

It’s clear that our disability benefits system isn’t working in the way it was intended, and we’re determined to reform it to ensure it’s sustainable for the future, so we can continue delivering support to those who genuinely need it most.
Today’s Green Paper marks the next chapter of our welfare reforms and is part of our plan to make the benefits system fairer to the taxpayer, better targeted to individual needs and harder to exploit by those who are trying to game the system.
We’re inviting views from across society to ensure everyone has a chance to make their voices heard and shape our welfare reforms.

Work and Pensions Secretary Mel Stride said:   

We’re making the biggest welfare reforms in a generation – protecting those most in need while supporting thousands into work as we modernise our benefit system to reflect the changing health landscape.
A decade on from the introduction of PIP, this Green Paper opens the next chapter of reform, enhancing the support for people with health conditions and disabilities while ensuring the system is fair to the taxpayer.

The Green Paper sets out proposals across three key priorities to fundamentally reform the system:

Making changes to the eligibility criteria for PIP, so it is fairer and better targeted

Through previous consultations, we have been told that the criteria currently used in assessments do not always fully reflect how a disability or health condition impacts on a person’s daily life. The criteria have changed over time and no longer capture these different impacts as originally intended.

We will consider whether the current thresholds for entitlement correctly reflect the need for ongoing financial support. This includes considering if current descriptors - such as the need for aids and appliances - are good indicators of extra costs.

We will also look at changing the qualifying period for PIP in order to ensure the impact that people’s conditions will have on them over time is fully understood, and consider whether we should change the test used to determine if a condition is likely to continue long-term.

Reforming the PIP assessment so that it is more closely linked to a person’s condition, and exploring removing the assessment entirely for those most in need

PIP is over a decade old and a lot has changed since the assessment was developed. We know some people continue to find PIP assessments difficult and repetitive, and view the assessment as too subjective.

We will consider whether some people could receive PIP without needing an assessment by basing entitlement on specific health conditions or disabilities supported by medical evidence.

This includes looking at whether evidence of a formal diagnosis by a medical expert should be a requirement to be assessed as eligible for PIP. This will make it easier and quicker for people with severe or terminal conditions to get the vital support they need.

Moving away from a fixed cash benefit system towards more tailored support

We will explore alternative approaches to ensure people are given the right help to fulfil their potential and live independently. The UK has used a fixed cash transfer system since the 1970s, but there are a number of international systems that look at the specific extra costs people have and provide more tailored support instead.

For example, in New Zealand, the amount of Disability Allowance is based on a person’s extra costs which are verified by a health practitioner. Norway’s Basic Benefit requires people to provide a letter from a GP outlining the nature of their condition and the associated extra costs. 

We are considering options including one-off grants to better help people with significant costs such as home adaptations or expensive equipment, as well as giving vouchers to contribute towards specific costs, or reimbursing claimants who provide receipts for purchases of aids, appliances or services.

This reflects the fact that some claimants will have significant extra costs related to their disability, and others will have minimal or specific costs.

While these alternative models help people with the extra costs of their disability or health condition, we know other forms of support including health care, social services care provision and respite are also important to help people to realise their full potential and live independently.

We are also considering whether some people receiving PIP who have lower, or no, extra costs may have better outcomes from improved access to treatment and support than from a cash payment.

Andy Cook, Chief Executive of the Centre for Social Justice, said:

Our landmark Two Nations report laid bare the lasting impact of the pandemic on our nation’s most vulnerable communities.
With the welfare system now grappling with the combined challenges of economic inactivity, school absence and mental health, this consultation provides a meaningful opportunity to shape the future of Britain’s welfare state.
We owe it to those most struggling to make sure the benefit system provides the best support to those who need it. And with costs skyrocketing, it is time to bring the welfare system into the post-lockdown age.

The Green Paper is the latest of the government’s welfare reforms to ensure disabled people and people with long-term health conditions can live full and independent lives. It builds on last year’s Health and Disability White Paper and the £2.5 billion Back to Work Plan which will break down barriers to work for over one million people.  

The Government is also delivering the largest expansion in mental health services in a generation, with almost £5 billion of extra funding over the past five years, and a near doubling of mental health training places.

Our reforms to the Work Capability Assessment are expected to reduce the number of people put onto the highest tier of incapacity benefits by 424,000; these people will now receive personalised support to prepare for work, while our Chance to Work Guarantee will mean people can try work without fear of losing their benefits.

Further Information

  • The consultation can be found here: Modernising support for independent living: the health and disability green paper - GOV.UK (www.gov.uk)
  • This consultation will be open for 12 weeks and we are inviting views from across society to ensure everyone has a chance to shape the modernisation of the welfare system. The findings of the consultation, which closes on Tuesday 23 July, will inform future reforms.
  • The UK Government is committed to improving the lives of disabled people and people with long-term health conditions in all parts of the UK.
  • In Wales, Personal Independence Payment (PIP) is the responsibility of the UK Government.
  • In Northern Ireland, PIP is transferred and is the responsibility of the Department for Communities.
  • In Scotland, Adult Disability Payment (ADP) has replaced PIP and is the responsibility of the Scottish Government. The transfer of existing Scottish PIP claimants from DWP to Social Security Scotland started in summer 2022 and will continue until 2025.
  • We will continue to work with the Devolved Administrations to consider the implications of the proposals in this Green Paper in Scotland, Wales and Northern Ireland.


COMMENTS

  1. Opinion: A decade of positive change in the digital age

    Through major reforms, the world is now on track to reduce carbon emissions by 90% by the year 2050. The new digital age enabled billions of people to collaborate and mobilize to fight climate change. This included not just governments but businesses large and small, commuters, vacationers, employees, students, consumers - everyone - from ...

  2. Growing up in a digital world: benefits and risks

    UNICEF's State of the World's Children 2017: Children in a Digital World report reveals that one in three internet users is younger than 18 years and 71% of 15-24-year-olds are online, making them the most connected age group worldwide. However, the so-called digital divide is substantial: 346 million youth are not online, with African ...

  3. What Is The Digital Age And What Does It Mean?

    The Differences (And Benefits) Of Long-Term And One-Off Collaborations. Apr 29, 2024, 05:29am EDT. ... it can seem odd that no one seems to know what exactly is "the digital age", ...

  4. Positive Effects of Digital Technology Use by Adolescents: A Scoping

    The discourse on the impact of digital media on youth is an extension of an age-old cultural concern and debate over the impact of new forms of technology on youth . As Orben has traced, concern and, at times even panic, over the influence of technology on youth has a long history. For example, in the Phaedrus, written circa 370 BCE, Plato ...

  5. The Reading Brain in the Digital Age: The Science of Paper versus

    People who took the test on a computer scored lower and reported higher levels of stress and tiredness than people who completed it on paper. In another set of experiments 82 volunteers completed ...

  6. Connecting in the Digital Age: Navigating Technology and social media

    As we navigate the digital age, it is essential not to lose sight of the value that face-to-face communication brings to our personal and professional relationships (Hall, Baym, & Miltner, 2019). Balancing Technology And Real-life Connections: In today's digital age, it is easy to get lost in the world of technology and social media.

  7. Stories From Experts About the Impact of Digital Life

    1. The positives of digital life. 2. The negatives of digital life. 3. Fifty-fifty anecdotes: How digital life has been both positive and negative. A number of these experts wrote about both sides of the story, taking the time to point out some of the ways in which digital life is a blessing and a curse. A selection of these mixed-response ...

  8. Student Writing in the Digital Age

    The average college essay in 2006 was more than double the length of the average 1986 paper, which was itself much longer than the average length of papers written earlier in the century. In 1917, student papers averaged 162 words; in 1930, the average was 231 words. By 1986, the average grew to 422 words.

  9. (PDF) TRANSFORMING EDUCATION IN THE DIGITAL AGE: A ...

    Some advantages of online learning in the digital age that can be explored in the research paper are: (1) Accessibility: Online learning provides access to education for people who may not have ...

  10. Privacy in the digital age: comparing and contrasting ...

    This paper takes as a starting point a recent development in privacy-debates: the emphasis on social and institutional environments in the definition and the defence of privacy. Recognizing the merits of this approach I supplement it in two respects. First, an analysis of the relation between privacy and autonomy teaches that in the digital age more than ever individual autonomy is threatened ...

  11. How the Digital Age Is Affecting Students

    How the Digital Age Is Affecting Students. Five books that give insight into how social media and technology are shaping today's students and their learning. Teachers don't have to look far to see how changes in technology and social media are shaping students and influencing classrooms. We watch kids obsess over the latest apps as they ...

  12. The Reading Brain in the Digital Age: Why Paper Still Beats Screens

    In the U.S., e-books currently make up more than 20 percent of all books sold to the general public. Despite all the increasingly user-friendly and popular technology, most studies published since ...

  13. The Digital Information Age: [Essay Example], 1090 words

    Words: 1090 | Pages: 2 | 6 min read. Published: Jun 5, 2019. The digital information age has been slowly but progressively coming to fruition over the past few decades. It has begun altering the fundamental aspects of how contemporary society functions and its effect is now more prevalent than ever. A new age in telecommunications has emerged.

  14. Internet And Education: Transforming Learning In The Digital Age

    5 Ways The Internet Has Impacted Education. 1. Access To Knowledge. The internet's unique access to knowledge is among its many significant educational contributions. The day when students were restricted to wisdom in their classrooms or libraries has long since passed. The internet makes a multitude of information available.

  15. Improving Older People's Lives Through Digital Technology and Practices

    Digital technology may be beneficial in improving people's cognitive ability as suggested by Wu et al. (2019).In the first paper of the special issue, Wu et al. (2019) explore the use of technology in memory clinics suggesting how the use of technology can be beneficial for people's mental and physical health, through stimulating cognitive abilities including executive functioning, memory ...

  16. Essay on Digital Technology

    In conclusion, digital technology, while presenting certain challenges, offers immense potential to reshape our world. As we navigate this digital age, it is incumbent upon us to harness this potential responsibly, ensuring that the benefits of digital technology are accessible to all. 500 Words Essay on Digital Technology

  17. The Digital Age Essay

    The Digital Age Essay. The digital age is staring us in the face from the near future. We already see countless instances of digital technology emerging more and more in our every day lives. Cell phones are equipped with voice recognition software, and are able to take photographs and send them wirelessly across the globe, almost instantaneously.

  18. Privacy in the Digital Age

    Unfortunately, this has also brought about various challenges that must be addressed 1. Generally, information is a vital treasure in itself, and the more one has the better. Having valuable, intellectual, economic, and social information creates enormous opportunities and advantages for any individual. We will write a custom essay on your topic.

  19. Conerns about democracy in the digital age

    2. Broader thoughts from key experts on the future of democracy at a time of digital disruption. 3. Concerns about democracy in the digital age. 4. Hopeful themes and suggested solutions. 5. Tech will have mixed effects that are not possible to guess now. About this canvassing of experts.

  20. The Importance of Reading Books in the Digital Age: Why ...

    Reading a book, on the other hand, provides a quiet and focused escape from the distractions of the digital world. It allows us to slow down, relax, and engage with a narrative or idea in a way ...

  21. Digital Age Essays: Examples, Topics, & Outlines

    WORDS 667. Yes, there are several news topics related to book notes that could make good essay subjects. Here are a few suggestions: 1. The rise of e-books and its impact on book notes: You can explore how the increasing popularity of electronic books has affected the way people take notes and analyze texts.

  22. Impact of the Digital Age

    The digital age refers to present time use of machines and computers to present information. The digital age had an overall impact on our societies and day to day activities. It has a lot of advantages and disadvantages i.e. it came with so many opportunities as well as costs. We are living in the age in which professionals in digital ...

  23. Using the Digital Trust Ecosystem Framework to Achieve ...

    White Paper | 30 April 2024 Using the Digital Trust Ecosystem Framework to Achieve Trustworthy AI. Abstract: This white paper explores the benefits of using ISACA's Digital Trust Ecosystem Framework (DTEF) for enterprises adopting artificial intelligence (AI)-enabled technologies and services. It provides an understanding of how the DTEF supports the evaluation of emerging technology risk ...

  24. Disability Benefits system to be overhauled as consultation launched on

    There are now 2.6 million people of working age claiming PIP and DLA - with 33,000 new awards for PIP each month which is more than double the rate before the pandemic. This is expected to cost ...