Artificial Intelligence Essay

500+ words essay on artificial intelligence.

Artificial intelligence (AI) has entered our daily lives through mobile devices and the Internet. Governments and businesses increasingly use AI tools and techniques to solve problems and improve business processes, especially online ones. Such developments bring new realities to social life that may not have been experienced before. This essay on Artificial Intelligence will help students understand the various advantages of using AI and how it has made our lives easier and simpler. At the end, we also describe the future scope of AI and the harmful effects of using it. To get a good command of essay writing, students must practise CBSE essays on different topics.

Artificial Intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs. It is concerned with getting computers to do tasks that would normally require human intelligence. AI systems are essentially software systems (or controllers for robots) that use techniques such as machine learning and deep learning to solve problems in particular domains without hard-coding every possibility (i.e., every algorithmic step) in software. As a result, AI has begun to offer promising solutions for industry and business as well as for our daily lives.

Importance and Advantages of Artificial Intelligence

Advances in computing and digital technologies have a direct influence on our lives, businesses and social life. They have shaped our daily routines, such as our use of mobile devices and our active involvement on social media. AI systems are among the most influential of these digital technologies. With AI systems, businesses can handle large data sets and quickly provide essential input to their operations. Moreover, they can adapt to constant change and become more flexible.

By introducing Artificial Intelligence into devices, businesses are automating more and more of their processes. A new paradigm emerges as a result of such intelligent automation, which now dictates not only how businesses operate but also who does the work. Many manufacturing sites can now run fully automated, with robots and without any human workers. Artificial Intelligence is bringing previously unheard-of and unexpected innovations to the business world, which many organizations will need to integrate to remain competitive and to lead their competitors.

Artificial Intelligence shapes our lives and social interactions through technological advancement. Many AI applications are developed specifically to provide better services to individuals, through mobile phones, electronic gadgets, social media platforms and more. We delegate our activities to intelligent applications, such as personal assistants and smart wearable devices. AI systems that operate household appliances help us at home with cooking or cleaning.

Future Scope of Artificial Intelligence

In the future, intelligent machines will replace or enhance human capabilities in many areas. Artificial intelligence is becoming a popular field in computer science because it has enhanced human capabilities. AI applications are having a huge impact on many fields of life, helping to solve complex problems in areas such as education, engineering, business, medicine and weather forecasting. The work of many labourers can now be done by a single machine. But Artificial Intelligence has another aspect: it can be dangerous for us. If we become completely dependent on machines, it can ruin our lives. We would not be able to do any work ourselves and would grow lazy. Another disadvantage is that machines cannot offer a human-like feeling. So machines should be used only where they are actually required.

Students must have found this essay on “Artificial Intelligence” useful for improving their essay-writing skills. They can get study material and the latest updates on CBSE/ICSE/State Board/Competitive Exams at BYJU’S.


The Future of AI: How Artificial Intelligence Will Change the World


Innovations in the field of artificial intelligence continue to shape the future of humanity across nearly every industry. AI is already the main driver of emerging technologies like big data, robotics and IoT, and generative AI has further expanded the possibilities and popularity of AI.

According to a 2023 IBM survey, 42 percent of enterprise-scale businesses have integrated AI into their operations, and 40 percent are considering it. In addition, 38 percent of organizations have implemented generative AI in their workflows, while 42 percent are considering doing so.

With so many changes coming at such a rapid pace, here’s what shifts in AI could mean for various industries and society at large.


The Evolution of AI

AI has come a long way since 1951, when Christopher Strachey wrote the first documented successful AI program: a checkers program that completed a whole game on the Ferranti Mark I computer at the University of Manchester. Thanks to developments in machine learning and deep learning, IBM’s Deep Blue defeated chess grandmaster Garry Kasparov in 1997, and the company’s IBM Watson won Jeopardy! in 2011.

Since then, generative AI has spearheaded the latest chapter in AI’s evolution, with OpenAI releasing its first GPT models in 2018. This has culminated in OpenAI developing its GPT-4 model and ChatGPT , leading to a proliferation of AI generators that can process queries to produce relevant text, audio, images and other types of content.   

AI has also been used to help  sequence RNA for vaccines and  model human speech , technologies that rely on model- and algorithm-based  machine learning and increasingly focus on perception, reasoning and generalization. 

How AI Will Impact the Future

Improved Business Automation

About 55 percent of organizations have adopted AI to varying degrees, suggesting increased automation for many businesses in the near future. With the rise of chatbots and digital assistants, companies can rely on AI to handle simple conversations with customers and answer basic queries from employees.

AI’s ability to analyze massive amounts of data and convert its findings into convenient visual formats can also accelerate the decision-making process . Company leaders don’t have to spend time parsing through the data themselves, instead using instant insights to make informed decisions .

“If [developers] understand what the technology is capable of and they understand the domain very well, they start to make connections and say, ‘Maybe this is an AI problem, maybe that’s an AI problem,’” said Mike Mendelson, a learner experience designer for NVIDIA . “That’s more often the case than, ‘I have a specific problem I want to solve.’”


Job Disruption

Business automation has naturally led to fears over job losses. In fact, employees believe almost one-third of their tasks could be performed by AI. Although AI has made gains in the workplace, its impact on different industries and professions has been unequal. For example, routine jobs like secretarial work are at risk of being automated, while demand for other jobs, like machine learning specialists and information security analysts, has risen.

Workers in more skilled or creative positions are more likely to have their jobs augmented by AI , rather than be replaced. Whether forcing employees to learn new tools or taking over their roles, AI is set to spur upskilling efforts at both the individual and company level .     

“One of the absolute prerequisites for AI to be successful in many [areas] is that we invest tremendously in education to retrain people for new jobs,” said Klara Nahrstedt, a computer science professor at the University of Illinois at Urbana–Champaign and director of the school’s Coordinated Science Laboratory.

Data Privacy Issues

Companies require large volumes of data to train the models that power generative AI tools, and this process has come under intense scrutiny. Concerns over companies collecting consumers’ personal data have led the FTC to open an investigation into whether OpenAI has negatively impacted consumers through its data collection methods after the company potentially violated European data protection laws . 

In response, the Biden-Harris administration developed an AI Bill of Rights that lists data privacy as one of its core principles. Although this framework doesn’t carry much legal weight, it reflects the growing push to prioritize data privacy and compel AI companies to be more transparent and cautious about how they compile training data.

Increased Regulation

AI could shift the perspective on certain legal questions, depending on how generative AI lawsuits unfold in 2024. For example, the issue of intellectual property has come to the forefront in light of copyright lawsuits filed against OpenAI by writers, musicians and companies like The New York Times . These lawsuits affect how the U.S. legal system interprets what is private and public property, and a loss could spell major setbacks for OpenAI and its competitors. 

Ethical issues that have surfaced in connection to generative AI have placed more pressure on the U.S. government to take a stronger stance. The Biden-Harris administration has maintained its moderate position with its latest executive order , creating rough guidelines around data privacy, civil liberties, responsible AI and other aspects of AI. However, the government could lean toward stricter regulations, depending on  changes in the political climate .  

Climate Change Concerns

On a far grander scale, AI is poised to have a major effect on sustainability, climate change and environmental issues. Optimists can view AI as a way to make supply chains more efficient, carrying out predictive maintenance and other procedures to reduce carbon emissions . 

At the same time, AI could be seen as a key culprit in climate change . The energy and resources required to create and maintain AI models could raise carbon emissions by as much as 80 percent, dealing a devastating blow to any sustainability efforts within tech. Even if AI is applied to climate-conscious technology , the costs of building and training models could leave society in a worse environmental situation than before.   

What Industries Will AI Impact the Most?  

There’s virtually no major industry that modern AI hasn’t already affected. Here are a few of the industries undergoing the greatest changes as a result of AI.  

AI in Manufacturing

Manufacturing has been benefiting from AI for years. With AI-enabled robotic arms and other manufacturing bots dating back to the 1960s and 1970s, the industry has adapted well to the powers of AI. These  industrial robots typically work alongside humans to perform a limited range of tasks like assembly and stacking, and predictive analysis sensors keep equipment running smoothly. 

AI in Healthcare

It may seem unlikely, but  AI healthcare is already changing the way humans interact with medical providers. Thanks to its  big data analysis capabilities, AI helps identify diseases more quickly and accurately, speed up and streamline drug discovery and even monitor patients through virtual nursing assistants. 

AI in Finance

Banks, insurers and financial institutions leverage AI for a range of applications like detecting fraud, conducting audits and evaluating customers for loans. Traders have also used machine learning’s ability to assess millions of data points at once, so they can quickly gauge risk and make smart investing decisions . 

AI in Education

AI in education will change the way humans of all ages learn. AI’s use of machine learning, natural language processing and facial recognition helps digitize textbooks, detect plagiarism and gauge the emotions of students to help determine who’s struggling or bored. Both now and in the future, AI can tailor the learning experience to students’ individual needs.

AI in Media

Journalism is harnessing AI too, and will continue to benefit from it. One example can be seen in The Associated Press’ use of Automated Insights, which produces thousands of earnings report stories per year. But as generative AI writing tools, such as ChatGPT, enter the market, questions about their use in journalism abound.

AI in Customer Service

Most people dread getting a  robocall , but  AI in customer service can provide the industry with data-driven tools that bring meaningful insights to both the customer and the provider. AI tools powering the customer service industry come in the form of  chatbots and  virtual assistants .

AI in Transportation

Transportation is one industry that is certainly teed up to be drastically changed by AI.  Self-driving cars and  AI travel planners are just a couple of facets of how we get from point A to point B that will be influenced by AI. Even though autonomous vehicles are far from perfect, they will one day ferry us from place to place.

Risks and Dangers of AI

Despite reshaping numerous industries in positive ways, AI still has flaws that leave room for concern. Here are a few potential risks of artificial intelligence.  

Job Losses 

Between 2023 and 2028, 44 percent of workers’ skills will be disrupted. Not all workers will be affected equally: women are more likely than men to be exposed to AI in their jobs. Combine this with a gaping AI skills gap between men and women, and women appear far more susceptible to losing their jobs. If companies don’t take steps to upskill their workforces, the proliferation of AI could result in higher unemployment and fewer opportunities for people from marginalized backgrounds to break into tech.

Human Biases 

The reputation of AI has been tainted by its habit of reflecting the biases of the people who train the algorithmic models. For example, facial recognition technology has been known to favor lighter-skinned individuals, discriminating against people of color with darker complexions. If researchers aren’t careful in rooting out these biases early on, AI tools could reinforce these biases in the minds of users and perpetuate social inequalities.

Deepfakes and Misinformation

The spread of deepfakes threatens to blur the lines between fiction and reality, leading the general public to  question what’s real and what isn’t. And if people are unable to identify deepfakes, the impact of  misinformation could be dangerous to individuals and entire countries alike. Deepfakes have been used to promote political propaganda, commit financial fraud and place students in compromising positions, among other use cases. 

Data Privacy

Training AI models on public data increases the chances of data security breaches that could expose consumers’ personal information. Companies contribute to these risks by adding their own data as well. A  2024 Cisco survey found that 48 percent of businesses have entered non-public company information into  generative AI tools and 69 percent are worried these tools could damage their intellectual property and legal rights. A single breach could expose the information of millions of consumers and leave organizations vulnerable as a result.  

Automated Weapons

The use of AI in automated weapons poses a major threat to countries and their general populations. While automated weapons systems are already deadly, they also fail to discriminate between soldiers and civilians . Letting artificial intelligence fall into the wrong hands could lead to irresponsible use and the deployment of weapons that put larger groups of people at risk.  

Superior Intelligence

Nightmare scenarios depict what’s known as the technological singularity , where superintelligent machines take over and permanently alter human existence through enslavement or eradication. Even if AI systems never reach this level, they can become more complex to the point where it’s difficult to determine how AI makes decisions at times. This can lead to a lack of transparency around how to fix algorithms when mistakes or unintended behaviors occur. 

“I don’t think the methods we use currently in these areas will lead to machines that decide to kill us,” said Marc Gyongyosi, founder of  Onetrack.AI . “I think that maybe five or 10 years from now, I’ll have to reevaluate that statement because we’ll have different methods available and different ways to go about these things.”

Frequently Asked Questions

What does the future of AI look like?

AI is expected to improve industries like healthcare, manufacturing and customer service, leading to higher-quality experiences for both workers and customers. However, it does face challenges like increased regulation, data privacy concerns and worries over job losses.

What will AI look like in 10 years?

AI is on pace to become a more integral part of people’s everyday lives. The technology could be used to provide elderly care and help out in the home. In addition, workers could collaborate with AI in different settings to enhance the efficiency and safety of workplaces.

Is AI a threat to humanity?

It depends on how people in control of AI decide to use the technology. If it falls into the wrong hands, AI could be used to expose people’s personal information, spread misinformation and perpetuate social inequalities, among other malicious use cases.


Artificial intelligence is transforming our world — it is on all of us to make sure that it goes well

How AI gets built is currently decided by a small group of technologists. As this technology is transforming our lives, it should be in all of our interest to become informed and engaged.

Why should you care about the development of artificial intelligence?

Think about what the alternative would look like. If you and the wider public do not get informed and engaged, then we leave it to a few entrepreneurs and engineers to decide how this technology will transform our world.

That is the status quo. This small number of people at a few tech firms directly working on artificial intelligence (AI) do understand how extraordinarily powerful this technology is becoming . If the rest of society does not become engaged, then it will be this small elite who decides how this technology will change our lives.

To change this status quo, I want to answer three questions in this article: Why is it hard to take the prospect of a world transformed by AI seriously? How can we imagine such a world? And what is at stake as this technology becomes more powerful?

Why is it hard to take the prospect of a world transformed by artificial intelligence seriously?

In some way, it should be obvious how technology can fundamentally transform the world. We just have to look at how much the world has already changed. If you could invite a family of hunter-gatherers from 20,000 years ago on your next flight, they would be pretty surprised. Technology has changed our world already, so we should expect that it can happen again.

But while we have seen the world transform before, we have seen these transformations play out over the course of generations. What is different now is how very rapid these technological changes have become. In the past, the technologies that our ancestors used in their childhood were still central to their lives in their old age. This has not been the case anymore for recent generations. Instead, it has become common that technologies unimaginable in one's youth become ordinary in later life.

This is the first reason we might not take the prospect seriously: it is easy to underestimate the speed at which technology can change the world.

The second reason why it is difficult to take the possibility of transformative AI – potentially even AI as intelligent as humans – seriously is that it is an idea that we first heard in the cinema. It is not surprising that for many of us, the first reaction to a scenario in which machines have human-like capabilities is the same as if you had asked us to take seriously a future in which vampires, werewolves, or zombies roam the planet. 1

But, it is plausible that it is both the stuff of sci-fi fantasy and the central invention that could arrive in our, or our children’s, lifetimes.

The third reason why it is difficult to take this prospect seriously is by failing to see that powerful AI could lead to very large changes. This is also understandable. It is difficult to form an idea of a future that is very different from our own time. There are two concepts that I find helpful in imagining a very different future with artificial intelligence. Let’s look at both of them.

How to develop an idea of what the future of artificial intelligence might look like?

When thinking about the future of artificial intelligence, I find it helpful to consider two different concepts in particular: human-level AI, and transformative AI. 2 The first concept highlights the AI’s capabilities and anchors them to a familiar benchmark, while transformative AI emphasizes the impact that this technology would have on the world.

From where we are today, much of this may sound like science fiction. It is therefore worth keeping in mind that the majority of surveyed AI experts believe there is a real chance that human-level artificial intelligence will be developed within the next decades, and some believe that it will exist much sooner.

The advantages and disadvantages of comparing machine and human intelligence

One way to think about human-level artificial intelligence is to contrast it with the current state of AI technology. While today’s AI systems often have capabilities similar to a particular, limited part of the human mind, a human-level AI would be a machine capable of carrying out the same range of intellectual tasks that we humans are capable of. 3 It is a machine that would be “able to learn to do anything that a human can do,” as Russell and Norvig put it in their textbook on AI. 4

Taken together, the range of abilities that characterize intelligence gives humans the ability to solve problems and achieve a wide variety of goals. A human-level AI would therefore be a system that could solve all those problems that we humans can solve, and do the tasks that humans do today. Such a machine, or collective of machines, would be able to do the work of a translator, an accountant, an illustrator, a teacher, a therapist, a truck driver, or the work of a trader on the world’s financial markets. Like us, it would also be able to do research and science, and to develop new technologies based on that.

The concept of human-level AI has some clear advantages. Using the familiarity of our own intelligence as a reference provides us with some clear guidance on how to imagine the capabilities of this technology.

However, it also has clear disadvantages. Anchoring the imagination of future AI systems to the familiar reality of human intelligence carries the risk that it obscures the very real differences between them.

Some of these differences are obvious. For example, AI systems will have the immense memory of computer systems, against which our own capacity to store information pales. Another obvious difference is the speed at which a machine can absorb and process information. But information storage and processing speed are not the only differences. The range of domains in which machines already outperform humans is steadily increasing: in chess, after matching the level of the best human players in the late 1990s, AI systems reached superhuman levels more than a decade ago. In other games, like Go and complex strategy games, this has happened more recently. 5

These differences mean that an AI that is at least as good as humans in every domain would overall be much more powerful than the human mind. Even the first “human-level AI” would therefore be quite superhuman in many ways. 6

Human intelligence is also a bad metaphor for machine intelligence in other ways. The way we think is often very different from machines, and as a consequence the output of thinking machines can be very alien to us.

Most perplexing and most concerning are the strange and unexpected ways in which machine intelligence can fail. The AI-generated image of the horse below provides an example: on the one hand, AIs can do what no human can do – produce an image of anything, in any style (here photorealistic), in mere seconds – but on the other hand it can fail in ways that no human would fail. 7 No human would make the mistake of drawing a horse with five legs. 8

Imagining a powerful future AI as just another human would therefore likely be a mistake. The differences might be so large that it will be a misnomer to call such systems “human-level.”

AI-generated image of a horse: 9 a brown horse running in a grassy field, appearing to have five legs.

Transformative artificial intelligence is defined by the impact this technology would have on the world

In contrast, the concept of transformative AI is not based on a comparison with human intelligence. This has the advantage of sidestepping the problems that the comparisons with our own mind bring. But it has the disadvantage that it is harder to imagine what such a system would look like and be capable of. It requires more from us. It requires us to imagine a world with intelligent actors that are potentially very different from ourselves.

Transformative AI is not defined by any specific capabilities, but by the real-world impact that the AI would have. To qualify as transformative, researchers think of it as AI that is “powerful enough to bring us into a new, qualitatively different future.” 10

In humanity’s history, there have been two cases of such major transformations, the agricultural and the industrial revolutions.

Transformative AI becoming a reality would be an event on that scale. Like the arrival of agriculture 10,000 years ago, or the transition from hand- to machine-manufacturing, it would be an event that would change the world for billions of people around the globe and for the entire trajectory of humanity’s future .

Technologies that fundamentally change how a wide range of goods or services are produced are called ‘general-purpose technologies’. The two previous transformative events were caused by the discovery of two particularly significant general-purpose technologies: the change in food production as humanity transitioned from hunting and gathering to farming, and the rise of machine manufacturing in the industrial revolution. Based on the evidence and arguments presented in this series on AI development, I believe it is plausible that powerful AI could represent the introduction of a similarly significant general-purpose technology.

Timeline of the three transformative events in world history


A future of human-level or transformative AI?

The two concepts are closely related, but they are not the same. The creation of a human-level AI would certainly have a transformative impact on our world. If the work of most humans could be carried out by an AI, the lives of millions of people would change. 11

The opposite, however, is not true: we might see transformative AI without developing human-level AI. Since the human mind is in many ways a poor metaphor for the intelligence of machines, we might plausibly develop transformative AI before we develop human-level AI. Depending on how this goes, this might mean that we will never see any machine intelligence for which human intelligence is a helpful comparison.

When and if AI systems might reach either of these levels is of course difficult to predict. In my companion article on this question, I give an overview of what researchers in this field currently believe. Many AI experts believe there is a real chance that such systems will be developed within the next decades, and some believe that they will exist much sooner.

What is at stake as artificial intelligence becomes more powerful?

All major technological innovations lead to a range of positive and negative consequences. For AI, the spectrum of possible outcomes – from the most negative to the most positive – is extraordinarily wide.

That the use of AI technology can cause harm is clear, because it is already happening.

AI systems can cause harm when people use them maliciously. For example, when they are used in politically-motivated disinformation campaigns or to enable mass surveillance. 12

But AI systems can also cause unintended harm, when they act differently than intended or fail. For example, in the Netherlands the authorities used an AI system which falsely claimed that an estimated 26,000 parents made fraudulent claims for child care benefits. The false allegations led to hardship for many poor families, and also resulted in the resignation of the Dutch government in 2021. 13

As AI becomes more powerful, the possible negative impacts could become much larger. Many of these risks have rightfully received public attention: more powerful AI could lead to mass labor displacement, or extreme concentrations of power and wealth. In the hands of autocrats, it could empower totalitarianism through its suitability for mass surveillance and control.

The so-called alignment problem of AI is another extreme risk. This is the concern that nobody would be able to control a powerful AI system, even if the AI takes actions that harm us humans, or humanity as a whole. This risk is unfortunately receiving little attention from the wider public, but it is seen as an extremely large risk by many leading AI researchers. 14

How could an AI possibly escape human control and end up harming humans?

The risk is not that an AI becomes self-aware, develops bad intentions, and “chooses” to do this. The risk is that we try to instruct the AI to pursue some specific goal – even a very worthwhile one – and in the pursuit of that goal it ends up harming humans. It is about unintended consequences. The AI does what we told it to do, but not what we wanted it to do.

Can’t we just tell the AI to not do those things? It is definitely possible to build an AI that avoids any particular problem we foresee, but it is hard to foresee all the possible harmful unintended consequences. The alignment problem arises because of “the impossibility of defining true human purposes correctly and completely,” as AI researcher Stuart Russell puts it. 15

Can’t we then just switch off the AI? This might also not be possible. That is because a powerful AI would know two things: it faces a risk that humans could turn it off, and it can’t achieve its goals once it has been turned off. As a consequence, the AI will pursue a very fundamental goal of ensuring that it won’t be switched off. This is why, once we realize that an extremely intelligent AI is causing unintended harm in the pursuit of some specific goal, it might not be possible to turn it off or change what the system does. 16

This risk – that humanity might not be able to stay in control once AI becomes very powerful, and that this might lead to an extreme catastrophe – has been recognized right from the early days of AI research more than 70 years ago. 17 The very rapid development of AI in recent years has made a solution to this problem much more urgent.

I have tried to summarize some of the risks of AI, but a short article cannot address every question. On the very worst risks of AI systems, and what we can do now to reduce them, I recommend Brian Christian’s book The Alignment Problem and Benjamin Hilton’s article ‘Preventing an AI-related catastrophe’.

If we manage to avoid these risks, transformative AI could also lead to very positive consequences. Advances in science and technology were crucial to the many positive developments in humanity’s history. If artificial ingenuity can augment our own, it could help us make progress on the many large problems we face: from cleaner energy, to the replacement of unpleasant work, to much better healthcare.

This extremely large contrast between the possible positives and negatives makes clear that the stakes are unusually high with this technology. Reducing the negative risks and solving the alignment problem could mean the difference between a healthy, flourishing, and wealthy future for humanity – and the destruction of the same.

How can we make sure that the development of AI goes well?

Making sure that the development of artificial intelligence goes well is not just one of the most crucial questions of our time, but likely one of the most crucial questions in human history. This needs public resources – public funding, public attention, and public engagement.

Currently, almost all resources dedicated to AI aim to speed up the development of this technology. Efforts to increase the safety of AI systems, on the other hand, do not receive the resources they need. Researcher Toby Ord estimated that in 2020 between $10 million and $50 million was spent on work to address the alignment problem. 18 Corporate AI investment in the same year, at $153 billion, was more than 2,000 times larger.

This is not only the case for the AI alignment problem. The work on the entire range of negative social consequences from AI is under-resourced compared to the large investments to increase the power and use of AI systems.

It is frustrating and concerning for society as a whole that AI safety work is extremely neglected and that little public funding is dedicated to this crucial field of research. On the other hand, for each individual person this neglect means that they have a good chance to actually make a positive difference, if they dedicate themselves to this problem now. And while the field of AI safety is small, it does provide good resources on what you can do concretely if you want to work on this problem.

I hope that more people dedicate their individual careers to this cause, but it needs more than individual efforts. A technology that is transforming our society needs to be a central interest of all of us. As a society we have to think more about the societal impact of AI, become knowledgeable about the technology, and understand what is at stake.

When our children look back at today, I imagine that they will find it difficult to understand how little attention and resources we dedicated to the development of safe AI. I hope that this changes in the coming years, and that we begin to dedicate more resources to making sure that powerful AI gets developed in a way that benefits us and the next generations.

If we fail to develop this broad-based understanding, then it will remain the small elite that finances and builds this technology that determines how one of the most powerful technologies in human history – plausibly the most powerful – will transform our world.

If we leave the development of artificial intelligence entirely to private companies, then we are also leaving it up to these private companies to decide what our future — the future of humanity — will be.

With our work at Our World in Data we want to do our small part to enable a better informed public conversation on AI and the future we want to live in. You can find these resources on OurWorldinData.org/artificial-intelligence

Acknowledgements: I would like to thank my colleagues Daniel Bachler, Charlie Giattino, and Edouard Mathieu for their helpful comments on drafts of this essay.

This problem becomes even larger when we try to imagine how a future with a human-level AI might play out. Any particular scenario will not only involve the idea that this powerful AI exists, but a whole range of additional assumptions about the future context in which this happens. It is therefore hard to communicate a scenario of a world with human-level AI that does not sound contrived, bizarre or even silly.

Both of these concepts are widely used in the scientific literature on artificial intelligence. For example, questions about the timelines for the development of future AI are often framed using these terms. See my article on this topic.

The fact that humans are capable of a range of intellectual tasks means that you arrive at different definitions of intelligence depending on which aspect within that range you focus on (the Wikipedia entry on intelligence, for example, lists a number of definitions from various researchers and different disciplines). As a consequence there are also various definitions of ‘human-level AI’.

There are also several closely related terms: Artificial General Intelligence, High-Level Machine Intelligence, Strong AI, or Full AI are sometimes synonymously used, and sometimes defined in similar, yet different ways. In specific discussions, it is necessary to define this concept more narrowly; for example, in studies on AI timelines researchers offer more precise definitions of what human-level AI refers to in their particular study.

Peter Norvig and Stuart Russell (2021) — Artificial Intelligence: A Modern Approach. Fourth edition. Published by Pearson.

The AI system AlphaGo, and its various successors, won against Go masters. The AI system Pluribus beat humans at no-limit Texas hold 'em poker. The AI system Cicero can strategize and use human language to win the strategy game Diplomacy. See: Meta Fundamental AI Research Diplomacy Team (FAIR), Anton Bakhtin, Noam Brown, Emily Dinan, Gabriele Farina, Colin Flaherty, Daniel Fried, et al. (2022) – ‘Human-Level Play in the Game of Diplomacy by Combining Language Models with Strategic Reasoning’. In Science 0, no. 0 (22 November 2022): eade9097. https://doi.org/10.1126/science.ade9097.

This also poses a problem when we evaluate how the intelligence of a machine compares with the intelligence of humans. If intelligence were a general ability, a single capacity, then we could easily compare and evaluate it, but the fact that it is a range of skills makes it much more difficult to compare across machine and human intelligence. Tests for AI systems therefore comprise a wide range of tasks. See for example Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, Jacob Steinhardt (2020) – Measuring Massive Multitask Language Understanding, or the definition of what would qualify as artificial general intelligence in this Metaculus prediction.

An overview of how AI systems can fail can be found in Charles Choi – 7 Revealing Ways AIs Fail. It is also worth reading through the AIAAIC Repository, which “details recent incidents and controversies driven by or relating to AI, algorithms, and automation.”

I have taken this example from AI researcher François Chollet, who published it here.

Via François Chollet, who published it here. Based on Chollet’s comments it seems that this image was created by the AI system ‘Stable Diffusion’.

This quote is from Holden Karnofsky (2021) – AI Timelines: Where the Arguments, and the "Experts," Stand. For Holden Karnofsky’s earlier thinking on this conceptualization of AI, see his 2016 article ‘Some Background on Our Views Regarding Advanced Artificial Intelligence’.

Ajeya Cotra, whose research on AI timelines I discuss in other articles of this series, attempts to give a quantitative definition of what would qualify as transformative AI. In her widely cited report on AI timelines she defines it as a change in software technology that brings the growth rate of gross world product "to 20%-30% per year". Several other researchers define TAI in similar terms.

Human-level AI is typically defined as a software system that can carry out at least 90% or 99% of all economically relevant tasks that humans carry out. A lower-bar definition would be an AI system that can carry out all those tasks that can currently be done by another human who is working remotely on a computer.

On the use of AI in politically motivated disinformation campaigns see for example John Villasenor (November 2020) – How to deal with AI-enabled disinformation. More generally on this topic see Brundage and Avin et al. (2018) – The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, published at maliciousaireport.com. A starting point for literature and reporting on mass surveillance by governments is the relevant Wikipedia entry.

See for example the Wikipedia entry on the ‘Dutch childcare benefits scandal’ and Melissa Heikkilä (2022) – ‘Dutch scandal serves as a warning for Europe over risks of using algorithms’, in Politico. The technology can also reinforce discrimination in terms of race and gender. See Brian Christian’s book The Alignment Problem and the reports of the AI Now Institute.

Overviews are provided in Stuart Russell (2019) – Human Compatible (especially chapter 5) and in Brian Christian’s 2020 book The Alignment Problem. Christian presents the thinking of many leading AI researchers from the earliest days up to now and gives an excellent overview of this problem. It is also seen as a large risk by some of the leading private firms working towards powerful AI – see OpenAI's article "Our approach to alignment research" from August 2022.

Stuart Russell (2019) – Human Compatible

A question that follows from this is, why build such a powerful AI in the first place?

The incentives are very high. As I emphasize below, this innovation has the potential to lead to very positive developments. In addition to the large social benefits there are also large incentives for those who develop it – the governments that can use it for their goals, the individuals who can use it to become more powerful and wealthy. Additionally, it is of scientific interest and might help us to understand our own mind and intelligence better. And lastly, even if we wanted to stop building powerful AIs, it is likely very hard to actually achieve it. It is very hard to coordinate across the whole world and agree to stop building more advanced AI – countries around the world would have to agree and then find ways to actually implement it.

In 1950 the computer science pioneer Alan Turing put it like this: “If a machine can think, it might think more intelligently than we do, and then where should we be? … [T]his new danger is much closer. If it comes at all it will almost certainly be within the next millennium. It is remote but not astronomically remote, and is certainly something which can give us anxiety. It is customary, in a talk or article on this subject, to offer a grain of comfort, in the form of a statement that some particularly human characteristic could never be imitated by a machine. … I cannot offer any such comfort, for I believe that no such bounds can be set.” Alan M. Turing (1950) – Computing Machinery and Intelligence, in Mind, Volume LIX, Issue 236, October 1950, Pages 433–460.

Norbert Wiener is another pioneer who saw the alignment problem very early. One way he put it was: “If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively … we had better be quite sure that the purpose put into the machine is the purpose which we really desire.” Quoted from Norbert Wiener (1960) – Some Moral and Technical Consequences of Automation: As machines learn they may develop unforeseen strategies at rates that baffle their programmers. In Science.

In 1950 – the same year in which Turing published the cited article – Wiener published his book The Human Use of Human Beings, whose front-cover blurb reads: “The ‘mechanical brain’ and similar machines can destroy human values or enable us to realize them as never before.”

Toby Ord – The Precipice. He makes this projection in footnote 55 of chapter 2. It is based on the 2017 estimate by Farquhar.


The present and future of AI

Finale Doshi-Velez on how AI is shaping our lives and how we can shape AI.

Finale Doshi-Velez, the John L. Loeb Professor of Engineering and Applied Sciences. (Photo courtesy of Eliza Grinnell/Harvard SEAS)

How has artificial intelligence changed and shaped our world over the last five years? How will AI continue to impact our lives in the coming years? Those were the questions addressed in the most recent report from the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted at Stanford University that will study the status of AI technology and its impacts on the world over the next 100 years.

The 2021 report is the second in a series that will be released every five years until 2116. Titled “Gathering Strength, Gathering Storms,” the report explores the various ways AI is increasingly touching people’s lives in settings that range from movie recommendations and voice assistants to autonomous driving and automated medical diagnoses.

Barbara Grosz, the Higgins Research Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), is a member of the standing committee overseeing the AI100 project, and Finale Doshi-Velez, Gordon McKay Professor of Computer Science, is part of the panel of interdisciplinary researchers who wrote this year’s report.

We spoke with Doshi-Velez about the report, what it says about the role AI is currently playing in our lives, and how it will change in the future.  

Q: Let's start with a snapshot: What is the current state of AI and its potential?

Doshi-Velez: Some of the biggest changes in the last five years have been how well AIs now perform in large data regimes on specific types of tasks.  We've seen [DeepMind’s] AlphaZero become the best Go player entirely through self-play, and everyday uses of AI such as grammar checks and autocomplete, automatic personal photo organization and search, and speech recognition become commonplace for large numbers of people.  

In terms of potential, I'm most excited about AIs that might augment and assist people.  They can be used to drive insights in drug discovery, help with decision making such as identifying a menu of likely treatment options for patients, and provide basic assistance, such as lane keeping while driving or text-to-speech based on images from a phone for the visually impaired.  In many situations, people and AIs have complementary strengths. I think we're getting closer to unlocking the potential of people and AI teams.

Q: Over the course of 100 years, these reports will tell the story of AI and its evolving role in society. Even though there have only been two reports, what's the story so far?

There's actually a lot of change even in five years. The first report is fairly rosy. For example, it mentions how algorithmic risk assessments may mitigate the human biases of judges. The second has a much more mixed view. I think this comes from the fact that as AI tools have come into the mainstream — both in higher stakes and everyday settings — we are appropriately much less willing to tolerate flaws, especially discriminatory ones. There have also been questions of information and disinformation control as people get their news, social media, and entertainment via searches and rankings personalized to them. So, there's a much greater recognition that we should not be waiting for AI tools to become mainstream before making sure they are ethical.

Q: What is the responsibility of institutes of higher education in preparing students and the next generation of computer scientists for the future of AI and its impact on society?

First, I'll say that the need to understand the basics of AI and data science starts much earlier than higher education!  Children are being exposed to AIs as soon as they click on videos on YouTube or browse photo albums. They need to understand aspects of AI such as how their actions affect future recommendations.

But for computer science students in college, I think a key thing that future engineers need to realize is when to demand input and how to talk across disciplinary boundaries to get at often difficult-to-quantify notions of safety, equity, fairness, etc.  I'm really excited that Harvard has the Embedded EthiCS program to provide some of this education.  Of course, this is an addition to standard good engineering practices like building robust models, validating them, and so forth, which is all a bit harder with AI.

Q: Your work focuses on machine learning with applications to healthcare, which is also an area of focus of this report. What is the state of AI in healthcare? 

A lot of AI in healthcare has been on the business end, used for optimizing billing, scheduling surgeries, that sort of thing.  When it comes to AI for better patient care, which is what we usually think about, there are few legal, regulatory, and financial incentives to do so, and many disincentives. Still, there's been slow but steady integration of AI-based tools, often in the form of risk scoring and alert systems.

In the near future, two applications that I'm really excited about are triage in low-resource settings — having AIs do initial reads of pathology slides, for example, if there are not enough pathologists, or get an initial check of whether a mole looks suspicious — and ways in which AIs can help identify promising treatment options for discussion with a clinician team and patient.

Q: Any predictions for the next report?

I'll be keen to see where currently nascent AI regulation initiatives have gotten to. Accountability is such a difficult question in AI; it's tricky to nurture both innovation and basic protections. Perhaps the most important innovation will be in approaches for AI accountability.


The AI Anthology: 20 Essays You Should Read About Our Future With AI

Chris McKay

Microsoft's Chief Scientific Officer, Eric Horvitz, has spearheaded an initiative aimed at stimulating an enriching and multidimensional conversation on the future of AI. Dubbed "AI Anthology," the project features 20 op-ed essays from an eclectic mix of scholars and professionals providing their diverse perspectives on the transformative potential of AI.

With the backdrop of impressive leaps in AI capabilities, notably OpenAI's GPT-4, the anthology is a collaborative effort aimed at elucidating the profound ways AI can benefit humanity while exploring potential challenges. While many fear the unknowns of AI advancement, the anthology is grounded in an optimistic view of the future of AI, aiming to catalyze thought-provoking dialogue and collaborative exploration.

The anthology is a remarkable testament to the multi-faceted nature of AI implications, ranging from the arts to education, science, medicine, and the economy. Horvitz's own journey with AI began with an early glimpse into the transformative capabilities of GPT-4. His awe-inspiring experience with the AI highlighted its potential to redefine disciplinary boundaries and ignite novel integrations of traditionally disparate concepts and methodologies. Yet, it also underscored the need for careful, thoughtful exploration of potential disruptions and adverse consequences.

Four essays will be published to the AI Anthology each week, with the complete collection available on June 26, 2023. Here are the first four essays:

  • A Thinking Evolution by Alec Gallimore, a rocket scientist and Dean of Engineering at the University of Michigan, gets curious about the odyssey of AI.
  • Eradicating Inequality by Gillian Hadfield, Professor of Law and Economics at the University of Toronto, champions legal access for all.
  • Empowering Creation by Ada Palmer, Professor of History at the University of Chicago, explores the possibilities of the information revolution.
  • Accessible Healthcare by Robert Wachter, Chair of the Department of Medicine at the University of California, San Francisco, examines how AI could reshape clinical care.

The contributors to the anthology represent a broad spectrum of experts. Each provides a unique perspective on the potentials and challenges of AI, covering a range of sectors, from education and healthcare to the creative arts. They were all granted early confidential access to GPT-4 and were encouraged to reflect upon two crucial questions: How might this technology and its successors contribute to human flourishing? And how might society best guide the technology to achieve maximal benefits for humanity? These two questions, designed to explore the potential positive impact of AI, are central to the AI Anthology.

The resulting collection of essays is well worth the read. It offers an optimistic lens through which to view the future of AI and serves as a call to action for us all to join the conversation and contribute to the development of AI that promotes human flourishing.

A business journal from the Wharton School of the University of Pennsylvania

What Is the Future of AI?

November 9, 2023 • 26 min read.

If we want to coexist with AI, it’s time to stop viewing it as a threat, Wharton professors say.

AI is here and it’s not going away. Wharton professors Kartik Hosanagar and Stefano Puntoni join Eric Bradlow, vice dean of Analytics at Wharton, to discuss how AI will affect business and society as adoption continues to grow. How can humans work together with AI to boost productivity and flourish? This interview is part of a special 10-part series called “AI in Focus.”

Watch the video or read the full transcript below.

Eric Bradlow: Welcome, everyone, to the first episode of the Analytics at Wharton and AI at Wharton podcast series on artificial intelligence. My name’s Eric Bradlow. I’m a professor of marketing and statistics here at the Wharton School. I’m also vice dean of Analytics at Wharton, and I will be the host for this multi-part series on artificial intelligence.

I can think of no better way to start this series than with two of my friends and colleagues who actually run our Center on Artificial Intelligence. The title of this episode is “Artificial Intelligence is Here.” As you will hear, we’ll do episodes on artificial intelligence in sports, artificial intelligence in real estate, and artificial intelligence in health care. But I think it’s best to start just with the basics.

I’m very happy to have join with me today, first, my colleague Kartik Hosanagar. Kartik is the John C. Hower Professor at the Wharton School. He’s also, as I mentioned, the co-director of our Center on Artificial Intelligence at Wharton. And normally, I don’t read someone’s bio. First of all, it’s only a few sentences. But I think this actually is important for our listeners to understand the breadth and also the practicality of Kartik’s work. His research examines how AI impacts business and society, and something you’ll hear about is, that is what our center does. There’s kind of two prongs. Second, he was a founder of Yodle, where he applied AI to online advertising. And more recently and currently, to Jumpcut Media, a company applying AI to democratize Hollywood. He also teaches our courses on enabling technologies and AI business and society. Kartik, welcome.

Kartik Hosanagar: Thanks for having me, Eric.

Bradlow: I’m also happy to have my colleague, Stefano Puntoni. Stefano is the Sebastian S. Kresge Professor of Marketing here at the Wharton School. He’s also, along with Kartik, the co-director of our Center on AI at Wharton. And his research examines how artificial intelligence and automation are changing consumption and society. And similar to Kartik, he also teaches our courses on artificial intelligence, brand management, and marketing strategies. Stefano, welcome.

Stefano Puntoni: Thank you very much.

Bradlow: It’s great to be with both of you. So maybe, Kartik, I’ll throw the first question out to you. While artificial intelligence is now the big thing that every company is thinking about, what do you see as— well, first of all, maybe even before what are challenges facing companies, how would we even define what artificial intelligence is? Because it can mean lots of things. It could mean everything from taking texts and images and stuff like that, and kind of quantifying it, or it could be generative AI, which is the same side of the coin, but a different part. How do you even view, what does it mean to say “artificial intelligence”?

Hosanagar: Yeah. Artificial Intelligence is a field of computer science which is focused on getting computers to do the kinds of things that traditionally require human intelligence. What that is, is a moving target. When computers couldn’t play, say, a very simple game like— well, chess is not simple, but maybe even simpler board games. Maybe that’s the target. And then when you say computers can play chess, and when that’s easy for computers, we no longer think of that as AI.

But really, today, when we think about what is AI, it’s again, getting computers to do the kinds of things that require human intelligence. Like understand language. Like navigate the physical world. Like being able to learn from experiences, from data. So, all of that really is included in AI.

Bradlow: Do you put any separation between what I call— maybe I’m not even using the right words — traditional AI, which again back in my old days, we’ve had AI around, “How do you take an image, and turn it into something?” “How do we take video, how do we take text?” That’s one form of AI versus what’s got everybody excited today, which is ChatGPT, which is a form of large language model. Do you put any differentiation there? Or that’s just a way for us to understand. One is creation of data, and the other one is using it in an application of forecast and language.

Hosanagar: Yeah, I feel there is some distinction. But ultimately, they’re closely related. Because what we think of as the more traditional AI, or predictive AI, it’s all about taking data and understanding the landscape of the data. And to be able to say, “In this region of the data,” let’s say you’re predicting whether an image is about Bob, or is it about Lisa? And so you kind of say, “In the image space, this region, if the shape of the colors are like this, the shape of the eyes are like this, then it’s Bob. In that area, it’s Lisa.” And so on. So, it’s mostly understanding the space of data, and being able to say, with emails, is it fraudulent or not? And saying which portion of the space does it have one value versus the other.

Now, once you started getting really good at predicting that, then you can start to use those predictions to create. And that’s where it’s the next step, where it becomes generative AI. Where now you are predicting, what’s the next word? You might as well use it to start generating text, and start generating sentences, essays and novels, and so on.
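To make the distinction Hosanagar draws concrete, here is a minimal sketch (my illustration, not something from the interview): a toy bigram word-count model. The same next-word counts can be used predictively (return the single most likely next word) or generatively (sample next words repeatedly to produce new text).

```python
from collections import defaultdict
import random

# Toy corpus; a real model learns from billions of words.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count how often each word follows another: the "model" is just these counts.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    """Predictive use: return the single most likely next word."""
    options = counts[word]
    return max(options, key=options.get) if options else None

def generate(start, n=5, seed=0):
    """Generative use: repeatedly sample a next word to produce new text."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        options = counts[out[-1]]
        if not options:
            break
        words, weights = zip(*options.items())
        out.append(rng.choices(words, weights=weights)[0])
    return " ".join(out)

print(predict("sat"))        # the single best guess for the next word
print(generate("sat", n=4))  # a short generated continuation
```

Large language models replace these bigram counts with a neural network conditioned on long contexts, but the relationship described above is the same: a model of "what comes next" can be queried once for a prediction or iterated to create new text.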

Bradlow: Stefano, let me ask you a question. If one went to your web site on the Wharton web site — and by the way. Just for our listeners, Stefano has a lot of deep training in statistics. But most people would say, “You’re not a computer scientist. You’re not a mathematician. What the hell do you have to do with artificial intelligence?” Like, “What role does consumer psychology play in artificial intelligence today? Isn’t it just for us math types?”

Puntoni: If you talk to companies and you ask them why did your analytics program fail, you almost never hear the answer, “Because the models don’t work. Because the techniques didn’t deliver.” It’s never about the technical stuff. It’s always about people. It’s about lack of vision. It’s about the lack of alignment between decision makers and analysts. It’s about the lack of clarity about why we do analytics. So, I think that a behavioral science perspective on analytics can bring a lot of benefits to try to understand how do we connect decisions in companies to the data that we have? That takes both the technical skills and the human insights, the psychology insights. I think bringing those together, I find that has a lot of value and a lot of potential insights. A lot of low-hanging fruits, in fact, in companies, I think.

Bradlow: As a follow-up question, we all read these articles that say 70% of the jobs are going to go away, and robots or automation or AI is going to put me out of business. Should employees be happy with what’s going on in AI? Or the answer is, it depends who you are and what you’re doing? What are your thoughts? And then Kartik, I’d love to get your thoughts on that, including the work you’re doing at Jumpcut. Because we all know one of the biggest issues in the current writer’s strike was actually what’s going to happen with artificial intelligence? I’d love to hear your thoughts from the psychology or the employee motivation perspective, and then, what are you seeing actually out in the real world?

Puntoni: The academic answer to any question would be, “It depends.” But in my research, what I’ve been looking at is the extent to which people perceive automation as a threat. And what we find is that oftentimes the tasks being automated by AI have some kind of meaning to the person: they are essential to the way people see themselves, for example in their professional identity. That can create a lot of threat.

So, you have psychological threats, and then you have these objective threats of jobs maybe being on the line. And maybe you’ll feel happy to know that I tried out the professor job on some of these scoring algorithms, and we are fairly safe from replacement.

Bradlow: Kartik, let me ask you. And let me just preface this with saying, you probably don’t even know about this. Fifteen years ago, I wrote a paper with a former colleague and a doctoral student about how to use— I didn’t call it AI back then. But how to, basically, in large scale, compute features of advertisements and optimally design advertisements based on a massive number of features. And I remember the reaction. I first thought I was going to get rich. I went to every big media agency and said, “You can fire all your creative people. I know how to create these ads using mathematics.” And I was looked at like I had four heads. So, can you bring us up to the year 2023? Can you tell us what you’re doing at Jumpcut, and what role AI machine learning plays in your company, and just what you see going on in the creative world?

Hosanagar: Yeah. And I’ll connect that to what you and Stefano just brought up about AI and jobs and exposure to AI. I just came from a real estate conference, and the panel before I spoke was saying, “Hey, this artificial intelligence, it’s not really intelligence. It just replicates whatever is in some data. True human intelligence is creative, problem-solving, and so on.” And I was sharing over there that there are multiple studies now about what AI can and cannot do. For example, my colleague, Daniel Rock, has a study looking at just LLMs, meaning large language models like ChatGPT (and this was before the advances of the last six months; the data are as of early 2023). They found that 50% of jobs have at least 10% of their tasks exposed to LLMs, and 20% of jobs have more than 50% of their tasks exposed to LLMs. And that’s not all of AI, that’s just large language models. And that’s also 10 months ago.

And people also underestimate the nature of exponential change. I’ve been working with GPT-2, GPT-3, the earlier models of this, and I can say that every year the change is an order of magnitude. So, you know, it’s coming, and it’s going to affect all kinds of jobs. Now, as of today, multiple research studies (and I don’t mean two, three, or four, but several dozen) that have looked at AI’s use in multiple settings, including creative settings like writing poems or problem-solving, find that AI today can already match humans. But human plus AI today beats both human alone and AI alone.

For me, the big opportunity with AI is that we are going to see a productivity boost like we’ve never seen before in the history of humanity. That kind of productivity boost allows us to outsource the grunt work to AI, do the most creative things, and derive joy from our work. Now, does that mean it’s all going to be beautiful for all of us? No. For those of us who don’t reskill, who don’t focus on skills that require creativity, empathy, teamwork, and leadership, a lot of the other jobs are going away, including knowledge work: consulting, software development. It’s coming into all of these.

Bradlow: Stefano, something Kartik mentioned in his last thing was about humans and AI. As a matter of fact, one of the things I heard you say from the beginning is, it’s not humans or AI. It’s humans and AI. How do you really see that interface going forward? Is it up to the individual worker to decide what part of his/her/their tasks to outsource? Is it up to management? How do you see people being even willing to skill themselves up in artificial intelligence? How do you see this?

Puntoni: I think this is the biggest question that any company should be asking right now, and not just about AI. Frankly, I think it is the biggest question of all in business: how do we use these tools? How do we learn how to use them? There is no template. Nobody really knows how, for example, generative AI is going to impact different functions. We’re just learning about these tools, and the tools are still getting better.

What we need to do is engage in deliberate experimentation. We need to build processes for learning, such that we have individuals within the organization tasked with just understanding what this can do. There’s going to be an impact on individuals, on teams, on workflows. How do we bring this in, in a way that doesn’t simply re-engineer a task to get a human out of the picture, but instead re-engineers new ways of working so that we can get the most out of people? The point shouldn’t be human replacement and obsolescence. It should be human flourishing. How do we take this amazing technology and make our work more productive, more meaningful, more impactful, and ultimately make society better?

Bradlow: Kartik, let me take what Stefano said and combine it with something you said earlier about the exponential growth rate. My biggest fear, if I were working at a company today (and please, I’d love your thoughts), is that someone’s using a version of ChatGPT, or some large language model, or even a predictive model, some transformer model. They try it today and say, “See? The model can’t do this.” And then two weeks later, the model can do this. Companies, in some sense, create these absolutes. You just mentioned you were at a real estate conference: “Well, ChatGPT or large language models can’t sell homes. They can’t build massive predictive models using satellite data.” Maybe they can’t today, but maybe they can tomorrow. How do you try to help both researchers and companies move away from absolutes in a time of exponential growth of these methods?

Hosanagar: Yeah. I think our brains fundamentally struggle with exponential change. And probably, there is some basis to this in studies people have done on neuroscience or human evolution and so on. But we struggle with it. And I see this all the time, because I’ve been part of that. My work has been part of that exponential change from the very beginning. When I started my Ph.D., it was about the internet. And I can’t tell you the number of people who looked at the internet at any given point of time and said, “Nobody will buy clothing online. Nobody will buy eyeglasses online. Nobody would do this. Nobody would do that.” And I’m like, “No, no. It’s all happening. Just wait to see what’s coming.”

I think it’s hard for people to fathom. Leadership, as well as regulators, need to realize what’s coming, understand what exponential change is, and start to work. You brought up previously, and I forgot to address it, the Hollywood writers’ strike. Now, it is true that today ChatGPT cannot write a great script. However, when we work with writers, we are already seeing how these tools can increase writers’ productivity. In Hollywood, for example, writing is notoriously driven by inspiration. You’re expecting the draft today, and what’s the excuse? “Oh, I’m just stuck at this point. When I get unstuck, I’ll write again.” You can wait months, and sometimes years, for a writer to get unstuck.

Now, you give them a brainstorming buddy, and they start getting unstuck and it increases productivity. And yes, they’re right in fearing that at some point they’re going to keep interacting with the AI, and keep training the AI, and someday the AI is going to say, “You know what? I’m going to try to write the script myself.” And when I say the AI is going to say that, I mean the AI is going to be good enough, and some executive is going to say, “Why deal with humans?” And do that.

I think we need to both recognize that change is that fast and start experimenting and start learning. And people need to start upping their game and reskilling and get really good at using AI to do what they do. That reskilling is important. Stop viewing this as a threat. Because what’s happening is, you’re standing somewhere and there’s a fast bullet train coming at you. And you’re saying, “That train is going to stop on its own.” No, it’s going to run over you. And the only thing you can do and you have to do is get to the station, board the train, and be part of that train and help shape where it goes. All of us need to help shape where it goes.

Bradlow: Yeah. One example I like to give is that for 25-plus years I’ve been doing statistical analysis in R. And of course, for the last five to seven years, Python’s taken a much larger role. And I always promised myself I was going to learn Python. Well, I’ve learned Python now. I stick my R code into ChatGPT, and I tell it to convert it to Python. And I’m actually a damn good Python programmer now, because ChatGPT has helped me take structured R code and turn it into Python code.
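The kind of translation Bradlow describes can be illustrated in miniature. The R snippet (shown in comments) and its Python rendering below are hypothetical examples of what such a ChatGPT-assisted conversion might produce, not his actual code:

```python
# Hypothetical R original, of the sort one might paste into ChatGPT:
#   scores <- c(88, 92, 79)
#   mean(scores)
#   sd(scores)

# A faithful Python translation using only the standard library:
import statistics

scores = [88, 92, 79]
mean_score = statistics.mean(scores)
# R's sd() computes the *sample* standard deviation, which matches
# statistics.stdev() (not statistics.pstdev(), the population version).
sd_score = statistics.stdev(scores)
print(mean_score, sd_score)
```

Checking details like sample-versus-population standard deviation is exactly where a human reviewer still earns their keep in this workflow.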

Hosanagar: That’s a great example. And I’ll give you two more examples like that. The head of product at my company, Jumpcut Media, had this idea for a script summarization tool. What happens in Hollywood is the vast majority of scripts written are never read because every executive gets so many scripts. And you have no time to read anything. And you end up prioritizing based on gut and relationships. “Eric’s my buddy. I’ll read his script, but not this guy, Stefano, who just sent me a script. I don’t know him.” And that’s how decision-making works in Hollywood.

So, the head of product, who’s not a coder — he’s actually a Wharton alumnus — had this idea for a great script summarization tool that would summarize things using the language and parlance of Hollywood. And he had the idea to build the tool, but he’s not a coder. Our engineers were too busy with other efforts, so he said, “While they’re doing that, let me try it on ChatGPT.” And he built the entire minimal viable product, a demo version of it, on his own, using ChatGPT. And it’s actually on our web site on Jumpcut Media, where our clients can try it. And that’s how it got built. A guy with no development skills.

I actually demonstrated, during this real estate conference, this idea that you post a video on YouTube, you’ve got 30,000 comments, and you want to analyze those comments and figure out what people are saying. You want to summarize it. I went to ChatGPT and laid out the steps: first, go to a YouTube URL I’ll share and download all the comments. Second, do sentiment analysis on them. Third, find the comments that are positive, send them to OpenAI, and give me a summary of all the positive comments. Fourth, do the same for the negative comments. Fifth, tell the marketing manager what to do. And give me the code for all of this. It gave me the code right there at the conference, in front of all these people. I put it in Google Colab, ran it, and we had the summary. And this is me writing not a single line of code, with ChatGPT. It’s not the most complex code, but it’s something that previously would have taken me days and would have required involving RAs. Now I can just get it done.
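The pipeline Hosanagar describes can be sketched as follows. This is a hedged, self-contained outline: a real version would call the YouTube Data API to fetch comments and an LLM API for the summaries, so those calls are replaced here with hypothetical stubs to keep the orchestration logic runnable on its own:

```python
# Minimal sketch of the comment-analysis pipeline described above.
# download_comments, sentiment, and summarize are stand-ins for real
# API calls (YouTube Data API, a sentiment model, an LLM endpoint).

def download_comments(video_url):
    # Stub for step 1: fetching all comments for a video.
    return ["Loved this video", "Terrible audio quality", "Great explanation"]

def sentiment(comment):
    # Stub for step 2: a real pipeline would use a sentiment model.
    negative_words = {"terrible", "bad", "awful"}
    return "negative" if any(w in comment.lower() for w in negative_words) else "positive"

def summarize(comments, label):
    # Stub for steps 3-4: a real pipeline would send the comments to an LLM.
    return f"{len(comments)} {label} comments, e.g. '{comments[0]}'"

def analyze(video_url):
    comments = download_comments(video_url)
    positive = [c for c in comments if sentiment(c) == "positive"]
    negative = [c for c in comments if sentiment(c) == "negative"]
    return {
        "positive_summary": summarize(positive, "positive"),
        "negative_summary": summarize(negative, "negative"),
        # Step 5: an action item for the marketing manager.
        "recommendation": "Address the top negative theme in the next video.",
    }

print(analyze("https://youtube.com/watch?v=example"))
```

In a working version, each stub would be replaced by the corresponding API call; the orchestration itself stays this simple.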

Bradlow: Imagine doing that in real estate, about a property or a developer. And you say it doesn’t affect real estate? Of course it does! Absolutely, it could.

Hosanagar: It does. I also showed them, I uploaded four photographs of my home. Nothing else. Four photographs. And I said, “I’m planning to list this home for sale. Give me a real estate listing to post on Zillow that will make people read it and get excited to come and tour this house.” And it gave a great, beautiful description. There’s no way I could have written that. I challenged them, how many of you could have written this? And everyone at the end was like, “Wow. I was blown away.” And that is something that is doable today. I’m not even talking where this is coming soon.

Bradlow: Stefano, I’m going to ask you and then I’ll ask Kartik as well, what’s at the leading edge of the research you’re doing right now? I want to ask each of you about your own research, and then I’ll spend the last few minutes that we have talking about AI at Wharton and what you guys are doing and hoping to accomplish. Let’s start with our own personal research. What are you doing right now? Another way I like to frame it is, if we’re sitting here five years from now and you have a bunch of published papers and you’ve given a lot of big podium talks, which I know you do, what are you talking about that you had worked on?

Puntoni: I’m working on a lot of projects, all in the area of AI. And there are so many exciting questions, because we never had a machine like this: a machine that can do the things we think are crucial to defining what a human is. This is actually an interesting thing to consider. If you went back a few years and asked, “What makes humans special?” people would say, maybe compared to other animals, “We can think.” And now you ask, “What makes a human special?” and people say, “Oh, we have emotions, or we feel.”

Basically now, what makes us special is what makes us the same as other animals, to some extent. You see how the world is really deeply changing. And I’m interested in, for example, the impact of AI for the pursuit of relational goals, or social goals, or emotionally heavy types of tasks, where previously we never had an option of engaging with a machine, but now we do. What does that mean? What are the benefits that this technology can bring, but also, what might be the dangers? For example, for consumer safety, as people might interact with these tools while experiencing mental health issues or other problems. To me, that’s a very exciting and important area.

I just want to make a point that this technology doesn’t have to be any better than it is today for it to change many, many things. I mean, Kartik was saying, rightly, this is still improving exponentially. And companies are just starting to experiment with it. But the tools are there. This is not a technology around the corner. It’s in front of us.

Bradlow: Kartik, what are the big open issues that you’re thinking about and working on today?

Hosanagar: Eric, there are two aspects to my work. One is slightly more technical, and the other is focused more on humans and societal interactions with AI. On the former side, I’m spending a lot of time thinking about biases in machine-learning models, in particular a few studies related to biases in text-to-image models. For example, you go in and you write a prompt, “Generate an image of a child studying astronomy.” If all 100 images are of a boy studying astronomy, then you know there’s an issue. And these models do have these biases, just because the training data sets have that. But if I get an individual image, how do I know it’s OK or not? We’re doing some work on detecting bias, debiasing, on automated prompt engineering as well. So, you state what you want, and we’ll figure out how to structure the prompt for a machine learning model to get the kind of output you want. That’s a bit on the technical side.
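The kind of distributional check Hosanagar alludes to (all 100 images of the child being a boy) can be sketched simply: sample many generations for one prompt, label an attribute of each, and flag skew against a uniform baseline. Everything here (the labels, the tolerance, the helper names) is a hypothetical illustration, not the actual method from his studies:

```python
# Hedged sketch of a distributional bias check for a text-to-image model:
# generate N images for one prompt, classify an attribute of each image
# (here the labels are given directly), and flag deviation from uniform.
from collections import Counter

def attribute_distribution(labels):
    # Fraction of generations carrying each attribute value.
    counts = Counter(labels)
    total = len(labels)
    return {k: v / total for k, v in counts.items()}

def is_skewed(labels, attribute_values, tolerance=0.2):
    # Flag if any attribute value deviates from a uniform share by more
    # than `tolerance` (an arbitrary threshold for this illustration).
    dist = attribute_distribution(labels)
    uniform = 1 / len(attribute_values)
    return any(abs(dist.get(v, 0.0) - uniform) > tolerance for v in attribute_values)

# 100 hypothetical generations for "a child studying astronomy", all labeled "boy":
labels = ["boy"] * 100
print(is_skewed(labels, ["boy", "girl"]))
```

A single image cannot be judged this way, which is his point: bias here is a property of the distribution of outputs, not of any one sample.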

On the human and AI side, most of my interest is around two themes. One is human-AI collaboration. So, if you look at any workflow in any organization where AI now can touch that workflow, we do not understand today what is ideally done by humans and what is done by AI. In terms of organization design and process design, we understand historically, for example, how to structure teams, how to build team dynamics. But if the team is AI and humans, how do we structure that? What should be done by whom? I have some work going on there.

And the other one is around trust. AI has a huge trust problem today. We were just talking about the writers’ strike. There’s an actors’ strike, and many more issues coming up. So, what does it take to drive human trust and engagement with AI is another theme I’m looking at.

Bradlow: Maybe in the last few minutes or so, Stefano, can you tell us a little bit, and our listeners here on Sirius XM and on our podcast, about AI at Wharton and what you’re hoping to study and accomplish through a center on artificial intelligence here at Wharton? And then we’ll get Kartik’s thoughts as well.

Puntoni: Thank you for organizing this podcast, and Sirius for having us. I think it’s a great opportunity to get the word out. The initiative AI at Wharton is just starting out. We are a bunch of academics working on AI, tackling AI from different angles for the purpose of understanding what it can do for companies, how it can improve decision-making in companies. But also, what are the implications for all of us? As workers, as consumers, and society broadly?

We’re going to try initiatives around education, around research, around dissemination of research findings, and generally, try to create a community of people who are interested in these topics. They’re asking similar questions, maybe in very different ways, and can learn from one another.

Bradlow: And Kartik, what are your thoughts? You’ve been involved with lots of centers over the years. What makes AI at Wharton special, and why are you so excited to be in one of the leadership positions of it?

Hosanagar: Yeah. First of all, to me, AI is maybe not even a once-in-a-generation but a once-in-several-generations kind of technology. And it’s going to open up so many questions that will not be answered unless we create initiatives like ours. For example, today, computer scientists are focused on creating new and better models. But they assess these models somewhat narrowly, in terms of accuracy and so on, and not necessarily human impact, societal impact, and some of these other questions.

At the same time, industry is affected by a lot of this. But they’re trying to put fires out, focused on what they need to get done this week and next week. They’re very interested in the question of where this will take us three or four years later, but they have to focus quarter by quarter.

I think we are uniquely positioned, here at Wharton, in terms of having both the technical chops to understand those computer science models and what they’re doing, as well as people like Stefano and others who understand the psychological and the social science frameworks, who can bring in that perspective and really take a five, 10, 15, 25-year timeline on this and figure out, what does this mean for how organizations need to be redesigned? What does this mean in terms of how people need to be reskilled? How do our own college students need to be reskilled?

What does this mean for regulation? Because, man, regulators are going to struggle with this. And while the technology is moving exponentially, regulators are moving linearly. They will need that thought leadership as well. So, I think we fill that gap uniquely in terms of those kinds of problems. Big, open issues that are going to hit us in five, 10 years, but we are currently too busy putting out the fires to worry about the big avalanche coming our way.

Bradlow: Well, I think anybody who has listened to this episode will agree that artificial intelligence is here — which is the title of this episode. Again, I’m Eric Bradlow, professor of marketing and statistics here at the Wharton School, and vice dean of analytics. I’d like to thank my colleagues, Stefano Puntoni and Kartik Hosanagar. Thank you for joining us on this episode.

Hosanagar: Thank you, Eric.

Puntoni: Thank you.



What is AI and how will it change our lives? NPR Explains.

By Danny Hajek, Bobby Allyn, and Ashley Montgomery

AI is a multi-billion dollar industry. Friends are using apps to morph their photos into realistic avatars. TV scripts, school essays and resumes are written by bots that sound a lot like a human. (Image: Yuichiro Chino)

Artificial intelligence is changing our lives – from education and politics to art and healthcare. The AI industry continues to develop at a rapid pace. But what exactly is it? Should we be optimistic or worried about our future with this ever-evolving technology? Join host and tech reporter Bobby Allyn in NPR Explains: AI, a podcast series exclusively on the NPR App, which is available on the App Store or Google Play.

NPR Explains: AI answers your most pressing questions about artificial intelligence:

  • What is AI? - Artificial intelligence is a multi-billion dollar industry. Tons of AI tools are suddenly available to the public. Friends are using apps to morph their photos into realistic avatars. TV scripts, school essays and resumes are written by bots that sound a lot like a human. AI scientist Gary Marcus says there is no one definition of artificial intelligence. It's about building machines that do smart things. Listen here.
  • Can AI be regulated? - As technology gets better at faking reality, there are big questions about regulation. In the U.S., Congress has never been bold about regulating the tech industry and it's no different with the advancements in AI. Listen here.
  • Can AI replace creativity? - AI tools used to generate artwork can give users the chance to create stunning images. Language tools can generate poetry through algorithms. AI is blurring the lines of what it means to be an artist. Now, some artists are arguing that these AI models breach copyright law. Listen here.
  • Does AI have common sense? - Earlier this year, Microsoft's chatbot went rogue. It professed love to some users. It called people ugly. It spread false information. The chatbot's strange behavior brought up an interesting question: Does AI have common sense? Listen here.
  • How can AI help productivity? - From hiring practices to medical insurance paperwork, many big businesses are using AI to work faster and more efficiently. But that's raising urgent questions about discrimination and equity in the workplace. Listen here.
  • What are the dangers of AI? - Geoffrey Hinton, known as the "godfather of AI," spent decades advancing artificial intelligence. Now he says he believes the AI arms race among tech giants is actually a race towards danger. Listen here.

Learn more about artificial intelligence. Listen to NPR Explains: AI, a podcast series available exclusively in the NPR app. Download it on the App Store or Google Play.

Artificial Intelligence: History, Challenges, and Future Essay

In the editorial “A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence” by Michael Haenlein and Andreas Kaplan, the authors explore the history of artificial intelligence (AI), the current challenges firms face, and the future of AI. The authors classify AI into analytical, human-inspired, humanized AI, and artificial narrow, general, and superintelligent AI. They address the AI effect, which is the phenomenon in which observers disregard AI behavior by claiming that it does not represent true intelligence. The article also uses the analogy of the four seasons (spring, summer, fall, and winter) to describe the history of AI.

The article provides a useful overview of the history of AI and its current state. The authors offer a helpful framework for understanding AI by dividing it into categories based on the type of intelligence it exhibits or its evolutionary stage, and they revisit the AI effect along the way.

The central claim made by Michael Haenlein and Andreas Kaplan is that AI can be classified into different types based on the kind of intelligence it exhibits or its evolutionary stage. The authors argue that AI has evolved significantly since its birth in the 1940s, but that the field has also seen ups and downs (Haenlein and Kaplan). The evidence used to support this claim is the historical overview of AI. The authors also discuss the challenges firms face today and the future of AI. They qualify their claims by acknowledging that only time will tell whether AI will reach Artificial General Intelligence and that early systems, such as expert systems, had limitations. If one takes their claims to be true, it suggests that AI has the potential to transform various industries, but that there may also be ethical and social implications to consider. Overall, the argument is well supported with evidence, and the authors acknowledge the limitations of AI. Whether or not every reader finds it persuasive, it is an informative overview of the history and potential of AI.

The article can be beneficial for research on the ethical and social implications of AI in society. It offers a historical overview of AI, which can help me understand how AI has evolved and what developments have occurred in the field. Additionally, the article highlights the potential of AI and the challenges that firms face today, which can help me understand AI’s practical implications. The authors also classify AI into distinct categories, which can help me understand the types of AI that exist and how they can be used in different contexts.

The article raises several questions that I would like to explore further, such as the impact of AI on the workforce and job displacement. It also provides a new framework for looking at AI, which can help me understand AI’s potential and its implications for society. I do not disagree with the authors’ ideas, and I do not see myself arguing against the ideas presented.

Personally, I find the topic of AI fascinating, and I believe that it has the potential to transform society in numerous ways. However, I also believe that we need to approach AI with caution and be mindful of its potential negative impacts. As the editorial suggests, we need to develop clear AI strategies and ensure that ethical considerations are taken into account. In this way, we can guarantee that the benefits of AI are maximized while minimizing its negative impacts.

Haenlein, Michael, and Andreas Kaplan. “A Brief History of Artificial Intelligence: On the Past, Present, and Future of Artificial Intelligence.” California Management Review, vol. 61, no. 4, 2019, pp. 5–14.



MIT Technology Review


What’s next for AI in 2024

Our writers look at the four hot trends to watch out for this year

By Melissa Heikkilä and Will Douglas Heaven

[Image: a man with pocket watches dangling over his eyes]

MIT Technology Review’s What’s Next series looks across industries, trends, and technologies to give you a first look at the future. You can read the rest of our series here.

This time last year we did something reckless. In an industry where nothing stands still, we had a go at predicting the future. 

How did we do? Our four big bets for 2023 were that the next big thing in chatbots would be multimodal (check: the most powerful large language models out there, OpenAI’s GPT-4 and Google DeepMind’s Gemini, work with text, images, and audio); that policymakers would draw up tough new regulations (check: Biden’s executive order came out in October, and the European Union’s AI Act was finally agreed in December); that Big Tech would feel pressure from open-source startups (half right: the open-source boom continues, but AI companies like OpenAI and Google DeepMind still stole the limelight); and that AI would change big pharma for good (too soon to tell: the AI revolution in drug discovery is in full swing, but the first drugs developed using AI are still some years from market).

Now we’re doing it again.

We decided to ignore the obvious. We know that large language models will continue to dominate. Regulators will grow bolder. AI’s problems—from bias to copyright to doomerism—will shape the agenda for researchers, regulators, and the public, not just in 2024 but for years to come. (Read more about our six big questions for generative AI here.)

Instead, we’ve picked a few more specific trends. Here’s what to watch out for in 2024. (Come back next year and check how we did.)

Customized chatbots

You get a chatbot! And you get a chatbot! In 2024, tech companies that invested heavily in generative AI will be under pressure to prove that they can make money off their products. To do this, AI giants Google and OpenAI are betting big on going small: both are developing user-friendly platforms that allow people to customize powerful language models and make their own mini chatbots that cater to their specific needs—no coding skills required. Both have launched web-based tools that allow anyone to become a generative-AI app developer. 

In 2024, generative AI might actually become useful for the regular, non-tech person, and we are going to see more people tinkering with a million little AI models. State-of-the-art AI models, such as GPT-4 and Gemini, are multimodal, meaning they can process not only text but images and even videos. This new capability could unlock a whole bunch of new apps. For example, a real estate agent can upload text from previous listings, fine-tune a powerful model to generate similar text with just a click of a button, upload videos and photos of new listings, and simply ask the customized AI to generate a description of the property.

But of course, the success of this plan hinges on whether these models work reliably. Language models often make stuff up, and generative models are riddled with biases. They are also easy to hack, especially if they are allowed to browse the web. Tech companies have not solved any of these problems. When the novelty wears off, they’ll have to offer their customers ways to deal with these problems.

—Melissa Heikkilä


Generative AI’s second wave will be video

It’s amazing how fast the fantastic becomes familiar. The first generative models to produce photorealistic images exploded into the mainstream in 2022—and soon became commonplace. Tools like OpenAI’s DALL-E, Stability AI’s Stable Diffusion, and Adobe’s Firefly flooded the internet with jaw-dropping images of everything from the pope in Balenciaga to prize-winning art. But it’s not all good fun: for every pug waving pompoms, there’s another piece of knock-off fantasy art or sexist sexual stereotyping.

The new frontier is text-to-video. Expect it to take everything that was good, bad, or ugly about text-to-image and supersize it.

A year ago we got the first glimpse of what generative models could do when they were trained to stitch together multiple still images into clips a few seconds long. The results were distorted and jerky. But the tech has rapidly improved.

Runway, a startup that makes generative video models (and the company that co-created Stable Diffusion), is dropping new versions of its tools every few months. Its latest model, called Gen-2, still generates video just a few seconds long, but the quality is striking. The best clips aren’t far off what Pixar might put out.

Runway has set up an annual AI film festival that showcases experimental movies made with a range of AI tools. This year’s festival has a $60,000 prize pot, and the 10 best films will be screened in New York and Los Angeles.

It’s no surprise that top studios are taking notice. Movie giants, including Paramount and Disney, are now exploring the use of generative AI throughout their production pipeline. The tech is being used to lip-sync actors’ performances to multiple foreign-language overdubs. And it is reinventing what’s possible with special effects. In 2023, Indiana Jones and the Dial of Destiny starred a de-aged deepfake Harrison Ford. This is just the start.  

Away from the big screen, deepfake tech for marketing or training purposes is taking off too. For example, UK-based Synthesia makes tools that can turn a one-off performance by an actor into an endless stream of deepfake avatars, reciting whatever script you give them at the push of a button. According to the company, its tech is now used by 44% of Fortune 100 companies. 

The ability to do so much with so little raises serious questions for actors. Concerns about studios’ use and misuse of AI were at the heart of the SAG-AFTRA strikes last year. But the true impact of the tech is only just becoming apparent. “The craft of filmmaking is fundamentally changing,” says Souki Mehdaoui, an independent filmmaker and cofounder of Bell & Whistle, a consultancy specializing in creative technologies.

—Will Douglas Heaven

AI-generated election disinformation will be everywhere 

If recent elections are anything to go by, AI-generated election disinformation and deepfakes are going to be a huge problem as a record number of people march to the polls in 2024. We’re already seeing politicians weaponizing these tools. In Argentina, two presidential candidates created AI-generated images and videos of their opponents to attack them. In Slovakia, deepfakes of a liberal pro-European party leader threatening to raise the price of beer and making jokes about child pornography spread like wildfire during the country’s elections. And in the US, Donald Trump has cheered on a group that uses AI to generate memes with racist and sexist tropes.

While it’s hard to say how much these examples have influenced the outcomes of elections, their proliferation is a worrying trend. It will become harder than ever to recognize what is real online. In an already inflamed and polarized political climate, this could have severe consequences.

Just a few years ago creating a deepfake would have required advanced technical skills, but generative AI has made it stupidly easy and accessible, and the outputs are looking increasingly realistic. Even reputable sources might be fooled by AI-generated content. For example, user-submitted AI-generated images purporting to depict the Israel-Gaza crisis have flooded stock image marketplaces like Adobe’s.

The coming year will be pivotal for those fighting against the proliferation of such content. Techniques to track and mitigate it are still in the early days of development. Watermarks, such as Google DeepMind’s SynthID, are still mostly voluntary and not completely foolproof. And social media platforms are notoriously slow in taking down misinformation. Get ready for a massive real-time experiment in busting AI-generated fake news.
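SynthID’s internals are proprietary, but the general idea behind statistical text watermarks can be sketched with a toy “greenlist” scheme in the spirit of published research: at generation time the model is nudged toward a pseudorandom half of the vocabulary, and at detection time you count how often the text lands on that half. Everything below is an illustrative assumption, not SynthID itself.

```python
import hashlib

def is_green(prev_token: str, token: str) -> bool:
    """Deterministically assign ~half of all tokens to a 'green' list,
    keyed on the previous token so the split varies by position."""
    digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    """Fraction of tokens on the green list. Ordinary text should hover
    near 0.5; text from a watermarked generator (which prefers green
    continuations) should sit well above that."""
    hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

sample = "the cat sat on the mat and looked at the door".split()
score = green_fraction(sample)
assert 0.0 <= score <= 1.0
```

The “not completely foolproof” caveat is visible even here: paraphrasing or retokenizing the text scrambles the (prev, token) pairs and washes the signal out.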


Robots that multitask

Inspired by some of the core techniques behind generative AI’s current boom, roboticists are starting to build more general-purpose robots that can do a wider range of tasks.

The last few years in AI have seen a shift away from using multiple small models, each trained to do different tasks—identifying images, drawing them, captioning them—toward single, monolithic models trained to do all these things and more. By showing OpenAI’s GPT-3 a few additional examples (known as fine-tuning), researchers can train it to solve coding problems, write movie scripts, pass high school biology exams, and so on. Multimodal models, like GPT-4 and Google DeepMind’s Gemini, can solve visual tasks as well as linguistic ones.
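The shift described above—steering one general model with a handful of examples instead of training a separate model per task—can be sketched as prompt construction. No real model is invoked here; the task names and examples are invented, and only the few-shot prompt assembly is illustrated.

```python
# Invented per-task examples: (input, desired output) pairs.
FEW_SHOT_EXAMPLES = {
    "caption": [("photo of a dog on a beach",
                 "A happy dog enjoying the sand.")],
    "code": [("add two numbers in Python",
              "def add(a, b): return a + b")],
}

def build_prompt(task: str, query: str) -> str:
    """Prepend task-specific examples so a single generic text model
    can be switched between tasks without retraining."""
    lines = []
    for question, answer in FEW_SHOT_EXAMPLES[task]:
        lines.append(f"Q: {question}\nA: {answer}")
    lines.append(f"Q: {query}\nA:")  # the model completes after "A:"
    return "\n\n".join(lines)

prompt = build_prompt("caption", "photo of a cat on a sofa")
assert "happy dog" in prompt and prompt.endswith("A:")
```

Fine-tuning goes one step further—baking such examples into the weights—but the monolithic-model premise is the same: one model, many tasks.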

The same approach can work for robots, so it wouldn’t be necessary to train one to flip pancakes and another to open doors: a one-size-fits-all model could give robots the ability to multitask. Several examples of work in this area emerged in 2023.

In June, DeepMind released RoboCat (an update on last year’s Gato), which generates its own data from trial and error to learn how to control many different robot arms (instead of one specific arm, which is more typical).

In October, the company put out yet another general-purpose model for robots, called RT-X, and a big new general-purpose training data set, in collaboration with 33 university labs. Other top research teams, such as RAIL (Robotic Artificial Intelligence and Learning) at the University of California, Berkeley, are looking at similar tech.

The problem is a lack of data. Generative AI draws on an internet-size data set of text and images. In comparison, robots have very few good sources of data to help them learn how to do many of the industrial or domestic tasks we want them to.

Lerrel Pinto at New York University leads one team addressing that. He and his colleagues are developing techniques that let robots learn by trial and error, coming up with their own training data as they go. In an even more low-key project, Pinto has recruited volunteers to collect video data from around their homes using an iPhone camera mounted to a trash picker. Big companies have also started to release large data sets for training robots in the last couple of years, such as Meta’s Ego4D.

This approach is already showing promise in driverless cars. Startups such as Wayve, Waabi, and Ghost are pioneering a new wave of self-driving AI that uses a single large model to control a vehicle rather than multiple smaller models to control specific driving tasks. This has let small companies catch up with giants like Cruise and Waymo. Wayve is now testing its driverless cars on the narrow, busy streets of London. Robots everywhere are set to get a similar boost.



Reflections on AI and the future of human flourishing

May 30, 2023 | Eric Horvitz - Chief Scientific Officer, Microsoft


Recent advances in artificial intelligence have sparked both wonder and anxiety as we contemplate its transformative potential. AI holds enormous promise to enrich our lives, but this anticipation comes intertwined with apprehensions about the challenges and risks that may emerge. To nurture a future where AI is leveraged to the benefit of people and society, it is crucial to bring together a wide array of voices and perspectives.

With this goal in mind, I am honored to present the “AI Anthology,” a compilation of 20 inspiring essays authored by distinguished scholars and professionals from various disciplines. The anthology explores the diverse ways in which AI can be channeled to benefit humanity while shedding light on potential challenges. By bringing together these different viewpoints, our aim is to stimulate thought-provoking conversations and encourage collaborative efforts that will guide AI toward a future that harnesses its potential for human flourishing.

I first encountered GPT-4, a remarkable large-scale language model, in the fall of 2022 while serving as the chair of Microsoft’s Aether Committee. The Aether leadership and engineering teams were granted early access to OpenAI’s latest innovation, with a mission to investigate potential challenges and wider societal consequences of its use. Our inquiries were anchored in Microsoft’s AI Principles, which were established by the committee in collaboration with Microsoft’s leadership in 2017. We conducted a comprehensive analysis of GPT-4’s capabilities, focusing on the possible challenges that applications employing this technology could pose in terms of safety, accuracy, privacy and fairness.

GPT-4 left me awestruck. I observed unexpected glimmers of intelligence beyond those seen in prior AI systems. When compared to its predecessor, GPT-3.5 — a model utilized by tens of millions as ChatGPT — I noticed a significant leap in capabilities. Its ability to interpret my intentions and provide sophisticated answers to numerous prompts felt like a “phase transition,” evoking imagery of emergent phenomena that I had encountered in physics. I found that GPT-4 is a polymath, with a remarkable capacity to integrate traditionally disparate concepts and methodologies. It seamlessly weaves together ideas that transcend disciplinary boundaries.

The remarkable capabilities of GPT-4 raised questions about potential disruptions and adverse consequences, as well as opportunities to benefit people and society. While our broader team vigorously explored safety and fairness concerns, I delved into complex challenges within medicine, education and the sciences. It became increasingly evident that the model and its successors — which would likely exhibit further jumps in capabilities — hold tremendous potential to be transformative. This led me to contemplate the wider societal ramifications.

Questions came to mind surrounding artistic creation and attribution, malicious actors, jobs and the economy, and unknown futures that we cannot yet envision. How might people react to no longer being the unparalleled fount of intellectual and artistic thought and creation, as generative AI tools become commonplace? How would these advancements affect our self-identity and individual aspirations? What short- and long-term consequences might be felt in the job market? How might people be credited for their creative contributions that AI systems would be learning from? How might malicious actors exploit these emerging powers to inflict harm? What are important potential unintended consequences of the uses, including those we might not yet foresee?

At the same time, I imagined futures in which people and society could thrive in extraordinary ways by harnessing this technology, just as they have with other revolutionary advances. These transformative influences range from the first tools of cognition — our shared languages, enabling unprecedented cooperation and coordination — to the instruments of science and engineering, the printing press, the steam engine, electricity, and the internet, culminating in today’s recent advances in AI.

Eager to investigate these opportunities in collaboration with others across a wide array of disciplines, we initiated the “AI Anthology” project, with OpenAI’s support. We invited 20 experts to explore GPT-4’s capabilities and contemplate the potential influences of future versions on humanity. Each participant was granted early confidential access to GPT-4, provided case studies in education, scientific exploration and medicine, drawn from my explorations, and asked to focus on two core questions:

  • How might this technology and its successors contribute to human flourishing?
  • How might we as a society best guide the technology to achieve maximal benefits for humanity?

Building upon the ideas presented in my Tanner Lecture at the University of Michigan in November 2022 (Arc of Intelligence: Humanity and its Tools of Reason and Imagination), these questions highlight the importance of long-term thinking and maintaining an optimistic perspective on AI’s potential to enrich human lives. We could unlock immense potential benefits. But to realize this potential, we must create technical innovations and policies to protect against malicious uses and unintended consequences.

This anthology is a testament to the promise of envisioning and collaboration and to the importance of diverse perspectives in shaping the future of AI. The 20 essays offer a wealth of insights, hopes and concerns, illustrating the complexities and possibilities that arise with the rapid evolution of AI.

As you read these essays, I encourage you to remain open to new ideas, engage in thoughtful conversations, and lend your insights to the ongoing discourse on harnessing AI technology to benefit and empower humanity. The future of AI is not a predetermined path, but a journey we must navigate together with wisdom, foresight and a deep sense of responsibility. I hope that the ideas captured in these essays contribute to our collective understanding of the challenges and opportunities we face. They can help guide our efforts to create a future where AI systems complement human intellect and creativity to promote human flourishing.

Welcome to the “AI Anthology.” May it inspire you, challenge you, and ignite meaningful conversations that lead us toward a future where humanity flourishes by harnessing AI in creative and valuable ways.

We will publish four new essays at the beginning of each week starting today. The complete “AI Anthology” will be available on June 26, 2023.

As Microsoft’s Chief Scientific Officer, Eric Horvitz spearheads company-wide initiatives, navigating opportunities and challenges at the confluence of scientific frontiers, technology and society. He is known for his contributions to AI theory and practice, including research on principles and applications of AI amidst the complexities of the open world.

The views, opinions and proposals expressed in these essays are those of the authors and do not necessarily reflect the official policy or position of any other entity or organization, including Microsoft and OpenAI. The authors are solely responsible for the accuracy and originality of the information and arguments presented in their essays. Participation in the “AI Anthology” was voluntary and no incentives or compensation were provided to the authors.


Essay on Future of Artificial Intelligence

Students are often asked to write an essay on Future of Artificial Intelligence in their schools and colleges. And if you’re also looking for the same, we have created 100-word, 250-word, and 500-word essays on the topic.

Let’s take a look…

100 Words Essay on Future of Artificial Intelligence

Introduction

Artificial Intelligence (AI) is the science of making machines think and learn like humans. It’s an exciting field that’s rapidly changing our world.

Future Possibilities

In the future, AI could take over many jobs, making our lives easier. Robots could clean our houses, and AI could help doctors diagnose diseases.

Challenges Ahead

However, there are challenges. We need to make sure AI is used responsibly, and that it doesn’t take away too many jobs.

The future of AI is promising, but we need to navigate it carefully to ensure it benefits everyone.

250 Words Essay on Future of Artificial Intelligence

Artificial Intelligence (AI) has become an integral part of our daily lives, from smartphones to autonomous vehicles. The future of AI is a topic of intense debate and speculation among scientists, technologists, and futurists.

AI in Everyday Life

The future of AI holds promising advancements in everyday life. We can expect more sophisticated personal assistants, smarter home automation, and advanced healthcare systems. AI will continue to streamline our lives, making mundane tasks more efficient.

AI in Business

In business, AI will revolutionize industries by automating processes and creating new business models. Predictive analytics, customer service, and supply chain management will become more efficient and accurate. AI will also enable personalized marketing, enhancing customer experience and retention.

AI in Ethics and Society

However, the future of AI also poses ethical and societal challenges. Issues such as job displacement due to automation, privacy concerns, and the potential misuse of AI technologies need to be addressed. Ensuring fairness, transparency, and accountability in AI systems will be crucial.

In conclusion, the future of AI is a blend of immense potential and challenges. It will transform our lives and businesses, but also necessitates careful consideration of ethical and societal implications. As we move forward, it is essential to foster a global dialogue about the responsible use and governance of AI.

500 Words Essay on Future of Artificial Intelligence

Artificial Intelligence (AI) has transformed from a fringe scientific concept into a commonplace technology, permeating every aspect of our lives. As we stand on the precipice of the future, it becomes crucial to understand AI’s potential trajectory and the profound implications it might have on society.

The Evolution of AI

The future of AI is rooted in its evolution. Initially, AI was about rule-based systems, where machines were programmed to perform specific tasks. However, the advent of Machine Learning (ML) marked a significant shift. ML enabled machines to learn from data and improve their performance over time, leading to more sophisticated AI models.

The current focus is on developing General AI, machines that can perform any intellectual task that a human being can. While we are yet to achieve this, advancements in Deep Learning and Neural Networks are bringing us closer to this reality.

AI in the Future

In the future, AI is expected to become more autonomous and integrated into our daily lives. We will see AI systems that can not only understand and learn from their environment but also make complex decisions, solve problems, and even exhibit creativity.

One of the most promising areas is AI’s role in data analysis. As data continues to grow exponentially, AI will become indispensable in making sense of this information, leading to breakthroughs in fields like healthcare, climate change, and social sciences.

Implications and Challenges

However, the future of AI is not without its challenges. As AI systems become more autonomous, we must grapple with ethical issues. For instance, who is accountable if an AI system makes a mistake? How do we ensure that AI systems are fair and unbiased?

Moreover, as AI continues to automate tasks, there are concerns about job displacement. While AI will undoubtedly create new jobs, it will also render many existing jobs obsolete. Therefore, societies must prepare for this transition by investing in education and training.

The future of AI is a landscape of immense potential and challenges. As we continue to develop more sophisticated AI systems, we must also be mindful of the ethical implications and societal impacts. By doing so, we can harness the power of AI to create a future where technology serves humanity, rather than the other way around.

That’s it! I hope the essay helped you.


Happy studying!



The Future of Artificial Intelligence Technology

  • Categories: Advantages of Technology, Artificial Intelligence, Robots


Published: Feb 8, 2022

Words: 702 | Pages: 2 | 4 min read

Table of Contents

  • Artificial Intelligence Essay Outline
  • Artificial Intelligence Essay Example

Artificial Intelligence Essay Outline

Introduction

  • The growth of artificial intelligence (AI) in recent years
  • The diversity of AI applications
  • The anticipation of further advancements in AI

Historical Background

  • The origins of AI research
  • Milestones in AI development, such as IBM's chess-playing computer

Current Applications of AI

  • Examples of AI use in various fields (medicine, defense, gaming, etc.)
  • AI's impact on precision, data analysis, and decision-making

Future Developments

  • Predictions for AI advancements by 2025 and 2030
  • Expectations for language translation, medical robotics, and prosthetics

Challenges and Controversies

  • Potential drawbacks of AI, including unemployment and misuse
  • The importance of maintaining the human element in technological development
  • Reflection on the unstoppable rise of AI
  • The role of humanity in shaping the future of AI technology




Current Events Conversation

What Students Are Saying About Learning to Write in the Age of A.I.

Does being able to write still matter when chatbots can do it for us? Teenagers weigh in on an essay from Opinion.


By The Learning Network

With artificial intelligence programs like ChatGPT that can generate prose for us, how much should we care about learning to write — and write well?

In “Our Semicolons, Ourselves,” the Opinion contributor Frank Bruni argues that, for a multitude of reasons, communicating effectively is a skill we should still take seriously. “Good writing burnishes your message,” he writes. “It burnishes the messenger, too.”

We asked teenagers what they thought: Does learning to be a good writer still matter in the age of A.I.? Or will the technology someday replace the need for people to learn how to put pen to paper and fingers to keyboard?

Take a look at their conversation below, which explores the benefits of learning to express oneself, the promise and perils of chatbots, and what it means to be a writer.

Thank you to everyone who participated in the conversation on our writing prompts this week, including students from Glenbard North High School in Carol Stream, Ill.; Hinsdale Central High School in Hinsdale, Ill.; and New Rochelle High School in New Rochelle, N.Y.

Please note: Student comments have been lightly edited for length, but otherwise appear as they were originally submitted.

Many students agreed with Mr. Bruni that learning to write is important. Some pointed to the practical reasons.

When you write any sort of persuasive essay or analysis essay, you learn to communicate your ideas to your audience. This skill can then be applied to your daily life. Whether it’s talking to your teachers, writing an email to your boss, or sending a text message to your friends, writing and communication is a fundamental ability that is needed to clearly and concisely express yourself. This is something that A.I. cannot help you with.

— Mara F.R., Hinsdale

In order to write, we must first be able to think on our own which allows us to be self-sufficient. With the frequent use of A.I., our minds become reliant on given information rather than us thinking for ourselves. I absolutely believe that learning to be a good writer still matters even in the age of Artificial Intelligence.

— Jordyne, Ellisville

I firmly believe that learning good writing skills develops communication, creativity, and problem-solving skills. A.I. can also be used as a tool; I have used it to ask practice questions, compare my answers, and find different/better ways to express myself. Sure, having my essay written for me in seconds is great, but come time for an interview or presentation later on in my life I’ll lack the confidence and ability to articulate my thoughts if I never learn how.

— CC, San Luis Obispo County

I, being a senior, have just finished my college applications. Throughout the process, I visited several essay help websites, and each one stressed this fact: essay readers want to hear a student’s voice. ChatGPT can write well-structured essays in two minutes, but these essays have no voice. They are formulaic and insipid — they won’t help a student get into UCLA. To have a chance, her essays must be eloquent and compelling. So, at least until AI writing technology improves, a student must put in the work, writing and rewriting until she has produced an essay that tells readers who she is.

— Cole, Central Coast, CA

Others discussed the joy and satisfaction that comes with being able to express oneself.

While AI has its advantages, it can’t replicate the satisfaction and authenticity that come from writing by yourself. AI uses the existing ideas of others in order to generate a response. However, the response isn’t unique and doesn’t truly represent the idea the way you would. When you write, you think deeply about a topic and come up with an original idea. You uncover ideas you wouldn’t have thought of previously and understand a topic beyond its face value. This creates a sense of clarity, in which you can form your own viewpoint after looking at the different perspectives. Writing something by yourself also generates feelings of pleasure and satisfaction: the process of researching a topic for hours and then coming up with your own opinion, or the feeling of having to use a dictionary to understand a word whose meaning you don’t know. The satisfaction and authenticity of writing by yourself are irreplaceable. Therefore, it is still important to learn to be a good writer.

— Aditya, Hinsdale

You cannot depend on technology to do everything for you. An important part of writing is expressing yourself and showing creativity. While AI can create a grammatically correct essay, it cannot express how you feel about the subject. Creativity, not grammatical correctness, is what attracts an audience. Learning to write well without the assistance of AI is a skill that everyone should have.

— Aidan, Ellisville

A few commenters raised ethical concerns around using generators like ChatGPT.

I feel that even with AI, learning how to be a good writer still matters. For example, if you’re writing a college essay or an essay for a class using an AI generator, that is plagiarism, which can get you in a lot of trouble, because you are taking something that is not yours and trying to pass it off as your own writing. So I believe that learning how to be a good writer still matters a lot, because if you want to get into a good college or get good grades, you need to know how to write at least semi-well and make sure the writing is in your own words, not words already generated for you.

— jeo, new york

There are obvious benefits, and I myself have used this software to better understand Calculus problems in a step-by-step format, or to answer my questions about a piece of literature or a time in history. That being said, ethics should be considered, and credit should be given where credit is due; just as sources are cited in a traditional paper, so should the use of ChatGPT be.

— Ariel, Miami Country Day School

Writing is still an important skill, but maybe not in the same way it has been in the past. In an era of improving AI, topics such as grammar and spelling matter less than ever. Google already corrects small grammar mistakes; how long until it can suggest completely restructuring sentences? However, being a good writer is more than just grammar and vocabulary. It’s about collecting your thoughts into a cohesive and thoughtful presentation … If you want to communicate your own ideas, not just a conglomerate of ones on the internet, you’re better off just writing it yourself. That’s not to mention the plethora of issues like AI just making stuff up from time to time. So for now at least, improving your writing is still the best way to share your thoughts.

— Liam, Glenbard West High School

Several students shared how they use A.I. as a resource to aid, rather than replace, their own effort.

I think AI should be a tool for writers. It can help make outlines for writing pieces, and it can help solve problems students are stuck on and give them an explanation. However, I think the line should be drawn if students use AI to do the entire assignment for them. That’s when it should be considered cheating and not be used.

— Sam, Hinsdale, IL

Sometimes I use A.I. programs such as ChatGPT to help with typing and communication. The results vary, but overall I find it helpful in generating creative ideas, cleaning up language, and speeding up the writing. However, I believe it is important to be careful and filter the results to ensure accuracy and precision. AI tools are valuable aids, but human input and insight are still needed to achieve the desired quality of written communication.

— Zach, New Rochelle High School

As of now, A.I. is not capable of replacing human prose effectively. Just look at the data: the only A.P. tests that ChatGPT did not pass were those for English Language and English Literature. This lays bare a fact that most students refuse to accept: ChatGPT is not yet able to write a quality essay. Now that many schools are loosening restrictions on the use of generative A.I., students have two options: either they get back to work or they get a bad grade for their A.I.-generated essay.

On the other hand, there is another alternative that is likely the best one yet. A good friend once said, “A.I. software like ChatGPT solves the issue of having a clean sheet of paper.” By nature, humans are terrible at getting anything started. This is the issue that ChatGPT solves. As Bruni asserts, “Writing is thinking, but it’s thinking slowed down — stilled — to a point where dimensions and nuances otherwise invisible to you appear.” This is true, but ChatGPT can help students by creating a rough draft of what those ideas might look like on paper. The endpoint is this: while students are likely to keep needing to become good writers to excel at school, A.I. technology such as ChatGPT and Grammarly will become additional tools that help students reach even higher levels of literary excellence.

— Francisco, Miami Country Day School

But some thought we might not be far from a future where A.I. can write for us.

I think that AI will eventually replace the need for the average person to write at the level that they do. AI is no different than every other tech advancement we’ve made, which have made tasks like writing easier. Similar concerns could have been raised with the introduction of computers in the classroom, and the loss of people having great handwriting. I don’t think the prospect should be worrying. AI is a tool. Having it write for us will allow us to focus on more important things that AI is not yet capable of.

— zack, Hinsdale Central

AI is becoming widely accessible and increasingly competent. The growth of this sector could mean more students find their way to an AI site to look for an answer. I agree that this could spell trouble for student intelligence if passable answers are so readily available. But you might want to consider the students themselves. The majority are hardworking and smart, not just about subjects in school, but about how using only AI for their work could end badly. Students will probably not use the newborn tech firsthand until it is basically errorless, and that will take some time.

— Beau, Glen Ellyn, IL

Even so, there were students who doubted that technology could ever replace “what it means to be a writer.”

I don’t think AI will ever fully replace humans, no matter how much time we as a society take to implement it into everyday life. AI systems are still just a bunch of numbers and code, while the complexity of a human, the intricacies of our emotions, thoughts, and feelings, along with what makes each of us an individual who matters, shows that humans will never be fully replicated by AI. The most emotion-centric jobs, such as writing and most fields in art, will forever be, or should forever be, dominated by the experiences and emotional complexity of humans.

— Liam, Hinsdale

AI gathers data from the internet and puts together a paragraph or two. While it may be able to do this faster than any human, it does not have any authenticity. If it is pulling its information from the web, where someone has said something similar, the data it finds may be biased, and the AI would not care. Yet some people still insist it’s the future of writing, when in reality AI will probably never come up with an original idea; it will only hand possibly biased data to someone so they can copy it, move on, and undermine what it means to be a writer.

— John, Glenbard North HS

I have never personally used ChatGPT as I believe no robot can recreate the creativity or authenticity humans achieve in writing … Even with growing advances in technology, AI can only create with the information it already knows, which takes away the greatest quality writers have: creativity.

— Stella, Glenbard West

In my opinion, learning to be a good writer absolutely still matters in the age of AI. While artificial intelligence can assist with certain aspects of writing, such as grammar and syntax checking, it cannot replace the creativity, critical thinking, and emotional intelligence that we human writers bring to the table. Another reason is that storytelling, persuasion, and the art of crafting a compelling narrative are skills deeply rooted in human intuition and empathy. A good writer can connect with readers on a personal level, inspiring thoughts, feelings, and actions. AI may enhance efficiency, but it cannot replicate the authentic voice and unique perspective that a human writer brings to their work.

— McKenzie, Warrington, PA

Learn more about Current Events Conversation here and find all of our posts in this column.

Learning Loss, AI and the Future of Education: Our 24 Most-Read Essays of 2023

From rethinking the American high school to the fiscal cliff, tutoring and special ed: what our most incisive opinion contributors had to say.

Some of America’s biggest names in education tackled some of the thorniest issues facing the country’s schools on the op-ed pages of The 74 this year, expressing their concerns about continuing COVID-driven deficits among students and the future of education overall. There were some grim predictions, but also reasons for hope. Here are some of the most read, most incisive and most controversial essays we published in 2023.

David Steiner

America’s Education System Is a Mess, and Students Are Paying the Price

COVID-19, the legacy of race-based redlining, the lack of support for health care, child care and parental leave, and other social and economic policies have taken a terrible toll on student learning. But the fundamental cause of poor outcomes, writes contributor David Steiner of the Johns Hopkins Institute for Education Policy , is that policy leaders have eroded the instructional core and designed our education system for failure. As we have sown, so shall we reap. The challenges and rewards of learning are being washed away, and students are desperately the worse for the mess we have made. Read More

Margaret Raymond

The Terrible Truth — Current Solutions to COVID Learning Loss Are Doomed to Fail

Despite well-intended and rapid responses to COVID learning loss, solutions such as tutoring or summer school are doomed to fail, says contributor Margaret (Macke) Raymond of the Center for Research on Education Outcomes at Stanford University. How do we know? CREDO researchers looked at learning patterns for students at three levels of achievement in 16 states and found that even with five extra years of education, only about 75% will be at grade level by high school graduation. No school can offer that much. It is time to decide whether to make necessary changes or continue to support a system that will almost certainly fail.  Read More

Mark Schneider

The Future Is STEM — But Without Enough Students, the U.S. Will Be Left Behind

America no longer produces the most science and engineering research publications, patents or natural-science Ph.D.s, and these trends are unlikely to change anytime soon. The problem isn’t a lack of universities to train future scientists or an economy incapable of encouraging innovation. Rather, says contributor Mark Schneider of the Institute of Education Sciences, it originates much earlier in the supply chain, in elementary school. Congress has a chance to help turn this around, by passing the New Essential Education Discoveries (NEED) Act.  Read More

John Bailey

The Promise of Personalized Learning Never Delivered. Today’s AI Is Different

Educators often encounter lofty promises of technology revolutionizing learning, only to find reality fails to meet expectations. But based on his experiences with the new generation of artificial intelligence tools, contributor John Bailey believes society may be in the early stages of a transformative moment. This may very well usher in an era of individualized learning, empowering all students to realize their full potential and fostering a more equitable and effective educational experience. Read his four reasons why this generation of AI tools is likely to succeed where other technologies have failed. Read More

Chad Aldeman

Interactive — With More Teachers & Fewer Students, Districts Are Set up for Financial Trouble

To understand the teacher labor market, you have to hold two competing narratives in your head. On one hand, teacher turnover hit new highs, morale is low and schools are facing shortages. At the same time, public schools employ more teachers than before COVID, while serving 1.9 million fewer students. Student-teacher ratios are near all-time lows. Contributor Chad Aldeman and Eamonn Fitzmaurice, The 74’s art and technology director, plotted these changes on an exclusive, interactive map — and explain how they’re putting districts in financial peril. View the Map

Fascinating, right? But these are only the tip of the iceberg. Here’s a roundup of some of the hottest topics our op-ed contributors tackled, and what they had to say:

Credit Hours Are a Relic of the Past. How States Must Disrupt High School — Now

Russlynn Ali & Timothy Knowles

A tryptic of three XQ students, Ella Correia, Najid Smith and Lydia Nichols

Back to School — 6 Tips from Students on How to Make High School Relevant

Beth Fertig

I Changed My Shoes, and It Revolutionized How I Was Able to Rethink High School

William Blake

Fiscal Cliff & School Funding

The 50 Very Different States of American Public Education

It’s Time to Start Preparing Now for School Closures that Are Coming

Timothy Daly

Educators, Beware: As Budget Cuts Loom, Now Is NOT the Time to Quit Your Job

Katherine Silberstein & Marguerite Roza

Schools Could Lose 136,000 Teaching Jobs When Federal COVID Funds Run Out

Artificial Intelligence Will Not Transform K-12 Education Without Changes to ‘the Grammar of School’

Schools Must Embrace the Looming Disruption of ChatGPT

Sarah Dillard

Personalized Education Is Not a Panacea. Neither Is Artificial Intelligence

Natalia Kucirkova

Done Right, Tutoring Can Greatly Boost Student Learning. How Do We Get There?

Kevin Huffman

A photo of Virginia Gov. Glenn Youngkin

As Virginia Rolls Out Ambitious Statewide High-Dosage Tutoring Effort This Week, 3 Keys to Success

Maureen Kelleher

Why This Tutoring ‘Moment’ Could Die If We Don’t Tighten Up the Models

Mike Goldstein

Learning Loss

New NAEP Scores Reveal the Failure of Pandemic Academic Recovery Efforts

Vladimir Kogan

Quarantines, Not School Closures, Led to Devastating Losses in Math and Reading

6 Teachers Tell Their Secrets for Getting Middle Schoolers up to Speed in Math

Alexandra Frost

Special Ed and Gifted & Talented

Bracing for a Tidal Wave of Unnecessary Special Education Referrals

Lauren Morando Rhim, Candace Cortiella, Lindsay Kubatzky & Laurie VanderPloeg

Why Are Schools Comfortable Accepting Failure for Students with Disabilities?

David Flink & Lauren Morando Rhim

NYC’s New Gifted & Talented Admissions Brings Chaos — and Disregards Research

Alina Adams

Bev Weintraub is an Executive Editor at The 74

By Bev Weintraub

This story first appeared at The 74, a nonprofit news site covering education.

Feb 13, 2023

200-500 Word Example Essays about Technology

Got an essay assignment about technology? Check out these examples to inspire you.

Technology is a rapidly evolving field that has completely changed the way we live, work, and interact with one another. Technology has profoundly impacted our daily lives, from how we communicate with friends and family to how we access information and complete tasks. As a result, it's no surprise that technology is a popular topic for students writing essays.

But writing a technology essay can be challenging, especially for students who are short on time or struggling with writer's block. This is where Jenni.ai comes in. Jenni.ai is an AI tool designed specifically for students who need help writing essays. With Jenni.ai, students can quickly and easily generate essays on various topics, including technology.

This blog post aims to provide readers with various example essays on technology, all generated by Jenni.ai. These essays will be a valuable resource for students looking for inspiration or guidance as they work on their essays. By reading through these example essays, students can better understand how technology can be approached and discussed in an essay.

Moreover, by signing up for a free trial with Jenni.ai, students can take advantage of this innovative tool and receive even more support as they work on their essays. Jenni.ai is designed to help students write essays faster and more efficiently, so they can focus on what truly matters – learning and growing as a student. Whether you're a student who is struggling with writer's block or simply looking for a convenient way to generate essays on a wide range of topics, Jenni.ai is the perfect solution.

The Impact of Technology on Society and Culture

Introduction:

Technology has become an integral part of our daily lives and has dramatically impacted how we interact, communicate, and carry out various activities. Technological advancements have brought positive and negative changes to society and culture. In this article, we will explore the impact of technology on society and culture and how it has influenced different aspects of our lives.

Positive impact on communication:

Technology has dramatically improved communication and made it easier for people to connect from anywhere in the world. Social media platforms, instant messaging, and video conferencing have brought people closer, bridging geographical distances and cultural differences. This has made it easier for people to share information, exchange ideas, and collaborate on projects.

Positive impact on education:

Students and instructors now have access to a wealth of knowledge and resources because of the effect of technology on education. Thanks to online learning platforms, educational applications, and digital textbooks, students may now study at their own pace and from any location.

Negative impact on critical thinking and creativity:

Technological advancements have resulted in a reduction in critical thinking and creativity. With so much information at our fingertips, individuals have become more passive in their learning, relying on the internet for solutions rather than logic and inventiveness. As a result, independent thinking and problem-solving abilities have declined.

Positive impact on entertainment:

Technology has transformed how we access and consume entertainment. People may now access a wide range of entertainment alternatives from the comfort of their own homes thanks to streaming services, gaming platforms, and online content makers. The entertainment business has entered a new age of creativity and invention as a result of this.

Negative impact on attention span:

However, the constant bombardment of information and technological stimulation has also reduced attention spans and the capacity to focus. People are easily distracted and struggle to focus on a single activity for long. This has hampered productivity and the ability to complete tasks.

The Ethics of Artificial Intelligence And Machine Learning

The development of artificial intelligence (AI) and machine learning (ML) technologies has been one of the most significant technological developments of the past several decades. These cutting-edge technologies have the potential to alter several sectors of society, including commerce, industry, healthcare, and entertainment. 

As with any new and quickly advancing technology, the ethics of AI and ML must be carefully studied. The use of these technologies raises significant concerns around privacy, accountability, and control. As AI and ML grow more ubiquitous, we must assess their possible influence on society and investigate the ethical issues that must be considered as these technologies continue to develop.

What are Artificial Intelligence and Machine Learning?

Artificial Intelligence is the simulation of human intelligence in machines designed to think and act like humans. Machine learning is a subfield of AI that enables computers to learn from data and improve their performance over time without being explicitly programmed.
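The definition above can be made concrete with a small, hypothetical Python sketch (not part of the original essay): the program is never told the rule behind its data; it estimates that rule from examples, which is the essence of "learning from data."

```python
# A minimal illustration of "learning from data": fitting y = w*x + b
# by ordinary least squares instead of hard-coding the relationship.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # The slope and intercept come from the data, not from the programmer.
    w = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - w * mean_x
    return w, b

# Data generated by a rule the model is never shown: y = 2x + 1
xs = [0, 1, 2, 3, 4]
ys = [1, 3, 5, 7, 9]
w, b = fit_line(xs, ys)
print(w, b)  # prints 2.0 1.0: the rule is recovered from examples
```

Real machine learning systems apply this same principle at a vastly larger scale, with far more parameters and far more data, but the shift is identical: behavior is estimated from examples rather than programmed step by step.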

The impact of AI and ML on Society

The use of AI and ML in various industries, such as healthcare, finance, and retail, has brought many benefits. For example, AI-powered medical diagnosis systems can identify diseases faster and more accurately than human doctors. However, there are also concerns about job displacement and the potential for AI to perpetuate societal biases.

The Ethical Considerations of AI and ML

A. Bias in AI algorithms

One of the critical ethical concerns about AI and ML is the potential for algorithms to perpetuate existing biases. This can occur when the data used to train these algorithms reflects the biases of the people and processes that produced it. As a result, AI systems can reproduce those biases and discriminate against certain groups of people.
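A toy, hypothetical sketch makes the mechanism visible (the data and decision rule below are invented for illustration): a model trained on skewed historical decisions faithfully reproduces the skew.

```python
from collections import Counter

# Invented training data: historical decisions that reflect past bias
# (group "A" was mostly approved, group "B" mostly rejected).
history = [("A", "approve")] * 90 + [("A", "reject")] * 10 + \
          [("B", "approve")] * 20 + [("B", "reject")] * 80

# A naive "model": predict the most common past outcome for each group.
def train(data):
    outcomes = {}
    for group, decision in data:
        outcomes.setdefault(group, Counter())[decision] += 1
    return {g: c.most_common(1)[0][0] for g, c in outcomes.items()}

model = train(history)
print(model)  # {'A': 'approve', 'B': 'reject'}: the historical bias survives training
```

Nothing in the code is malicious; the discrimination enters entirely through the training data, which is why auditing datasets matters as much as auditing algorithms.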

B. Responsibility for AI-generated decisions

Another ethical concern is the responsibility for decisions made by AI systems. For example, who is responsible for the damage if a self-driving car causes an accident? The manufacturer of the vehicle, the software developer, or the AI algorithm itself?

C. The potential for misuse of AI and ML

AI and ML can also be used for malicious purposes, such as cyberattacks and misinformation campaigns. The lack of regulation and oversight in the development and use of these technologies makes it difficult to prevent misuse.

The developments in AI and ML have given numerous benefits to humanity, but they also present significant ethical concerns that must be addressed. We must assess the repercussions of new technologies on society, implement methods to limit the associated dangers, and guarantee that they are utilized for the greater good. As AI and ML continue to play an ever-increasing role in our daily lives, we must engage in an open and frank discussion regarding their ethics.

The Future of Work And Automation

Rapid technological breakthroughs in recent years have brought about considerable changes in our way of life and work. Concerns regarding the influence of artificial intelligence and machine learning on the future of work and employment have increased alongside the development of these technologies. This article will examine the possible advantages and disadvantages of automation and its influence on the labor market, employees, and the economy.

The Advantages of Automation

Automation in the workplace offers various benefits, including higher efficiency and productivity, fewer mistakes, and enhanced precision. Automated processes can accomplish repetitive jobs quickly and precisely, allowing employees to concentrate on more complex and creative activities. Additionally, automation can save organizations money, since it reduces labor costs and minimizes the danger of workplace accidents.

The Potential Disadvantages of Automation

However, automation also has significant disadvantages, including job loss and income stagnation. As robots and computers replace human labor in particular industries, there is a danger that many workers will lose their jobs, resulting in higher unemployment and greater economic disparity. Moreover, if automation is not adequately regulated and managed, it might lead to stagnant wages and a deterioration in employees’ standard of living.

The Future of Work and Automation

Despite these difficulties, automation will likely influence how labor is done. As a result, firms, employees, and governments must take early measures to solve possible issues and reap the rewards of automation. This might entail funding worker retraining programs, enhancing education and skill development, and implementing regulations that support equality and justice at work.

The Need for Ethical Considerations

We must consider the ethical ramifications of automation and its effects on society as technology develops. The impact on employees and their rights, possible hazards to privacy and security, and the duty of corporations and governments to ensure that automation is utilized responsibly and ethically are all factors to be taken into account.

Conclusion:

To summarize, the future of work and automation will most certainly be defined by a complex interaction of technological advances, economic trends, and cultural values. All stakeholders must work together to handle the problems and possibilities presented by automation and to ensure that technology is employed to benefit society as a whole.

The Role of Technology in Education

Introduction

Nearly every part of our lives has been transformed by technology, and education is no different. Today's students have greater access to knowledge, opportunities, and resources than ever before, and technology is becoming a more significant part of their educational experience. Technology is transforming how we think about education and creating new opportunities for learners of all ages, from online courses and virtual classrooms to instructional applications and augmented reality.

Technology's Benefits for Education

The capacity to tailor learning is one of technology's most significant benefits in education. Students may customize their education to meet their unique needs and interests since they can access online information and tools. 

For instance, people can enroll in online classes on topics they are interested in, get tailored feedback on their work, and engage in virtual discussions with peers and subject matter experts worldwide. As a result, pupils are better able to acquire and develop the abilities and information necessary for success.

Challenges and Concerns

Despite the numerous advantages of technology in education, there are also challenges to consider. One issue is the growing reliance on technology and the possibility that students may become overly dependent on it. This might result in a lack of critical thinking and problem-solving abilities, as students may become passive learners who simply follow instructions and rely on technology to complete their assignments.

Another obstacle is the digital divide between those who have access to technology and those who do not. This divide can exacerbate the achievement gap between students and produce unequal opportunities for educational and professional growth. To reduce these effects, all students must have access to the technology and resources necessary for success.

In conclusion, technology is rapidly becoming an integral part of the classroom experience and has the potential to alter the way we learn radically. 

Technology can help students flourish and realize their full potential by giving them access to individualized instruction, tools, and opportunities. While the benefits of technology in the classroom are undeniable, it is crucial to be mindful of the risks and take precautions to guarantee that all students have access to the tools they need to thrive.

The Influence of Technology On Personal Relationships And Communication 

Technological advancements have profoundly altered how individuals connect and exchange information. It has changed the world in many ways in only a few decades. Because of the rise of the internet and various social media sites, maintaining relationships with people from all walks of life is now simpler than ever. 

However, concerns about how these developments may affect interpersonal connections and dialogue are inevitable in an era of rapid technological growth. In this piece, we'll discuss how the prevalence of digital media has altered our interpersonal connections and the language we use to express ourselves.

The Effect on Face-to-Face Interaction:

The disruption of face-to-face communication is a particularly stark example of how technology has impacted human connections. The quality of interpersonal connections has suffered due to people's growing preference for digital over human communication. Technology has been demonstrated to reduce the usage of nonverbal signs such as facial expressions, tone of voice, and other indicators of emotional investment in the connection.

Positive Impact on Long-Distance Relationships:

Yet there are positives to be found as well. Long-distance relationships have also benefited from technological advancements. The development of technologies such as video conferencing, instant messaging, and social media has made it possible for individuals to keep in touch with distant loved ones. It has become simpler for individuals to stay in touch and feel connected despite geographical distance.

The Effects of Social Media on Personal Connections:

The widespread use of social media has had far-reaching consequences, especially for the quality of interpersonal interactions. Social media has both positive and harmful effects on relationships: on the positive side, it allows people to keep in touch and share life's milestones.

Unfortunately, social media has made it all too easy to compare oneself to others, which may lead to emotions of jealousy and a general decline in confidence. Furthermore, social media might cause people to have inflated expectations of themselves and their relationships.

A Personal Perspective on the Intersection of Technology and Romance

Technological advancements have also altered physical touch and closeness. Virtual reality and other technologies have allowed people to feel physical contact and familiarity in a digital setting. This might be a promising breakthrough, but it has some potential downsides. 

Experts are concerned that people's growing dependence on technology for intimacy may lead to less time spent communicating face-to-face and less emphasis on physical contact, both of which are important for maintaining good relationships.

In conclusion, technological advancements have significantly affected the quality of interpersonal connections and the exchange of information. Even though technology has made it simpler to maintain personal relationships, it has also chilled face-to-face interactions.

Keeping tabs on how technology is changing our lives, and making adjustments as necessary, is essential as we move forward. Setting boundaries and prioritizing in-person conversation and physical touch in close relationships may help reduce the harm it causes.

The Security and Privacy Implications of Increased Technology Use and Data Collection

The fast development of technology over the past few decades has made its way into every aspect of our lives. Technology has improved many facets of our lives, from communication to commerce. However, significant privacy and security problems have emerged due to the broad adoption of technology. In this essay, we'll look at how the widespread use of technological solutions, and the subsequent explosion in collected data, affects our right to privacy and security.

Data Mining and Privacy Concerns

Risk of Cyber Attacks and Data Loss

The Widespread Use of Encryption and Other Safety Mechanisms

The Privacy and Security of the Future in a Globalized Information Age

Obtaining and Using Individual Information

The acquisition and use of private information is a significant cause for privacy alarm in the digital age. Data about their customers' online habits, interests, and personal information is a valuable commodity for many internet firms. Besides tailored advertising, this information may be used for other, less desirable things like identity theft or cyber assaults.

Moreover, because of the lack of transparency around gathering personal information, many individuals are unaware of what data is being collected from them or how it is being used. Privacy and data security have become increasingly contentious as a result.

Data breaches and other forms of cyber-attack pose a severe risk.

The risk of cyber attacks and data breaches is another major cause for concern. More people are using more devices, which means more opportunities for cybercriminals to steal private information like credit card numbers and other identifying data. This can cause monetary damages and harm one's reputation or identity.

Many high-profile data breaches have occurred in recent years, exposing the personal information of millions of individuals and raising serious concerns about the safety of this information. Companies and governments have responded to this problem by adopting new security methods like encryption and multi-factor authentication.

Many businesses now use encryption and other security measures to protect themselves from cybercriminals and data thieves. Encryption keeps sensitive information hidden by encoding it so that only those possessing the corresponding key can decipher it. This prevents private information like bank account numbers or social security numbers from falling into the wrong hands.
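The key-based property described above can be illustrated with a toy sketch (XOR with a repeating shared key; this is purely illustrative and not a secure cipher; real systems use vetted algorithms such as AES):

```python
# Toy illustration of symmetric encryption: only holders of the key can
# recover the plaintext. NOT a real cipher; production systems use vetted
# algorithms such as AES.
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """XOR each byte with the repeating key. Applying the same key twice
    restores the original bytes, which is the decryption step."""
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"shared-secret"
ciphertext = xor_cipher(b"account: 1234-5678", key)

assert ciphertext != b"account: 1234-5678"                    # unreadable as stored
assert xor_cipher(ciphertext, key) == b"account: 1234-5678"   # key recovers it
```

The same encode-with-key, decode-with-key shape underlies real symmetric schemes; the difference is that vetted ciphers resist key recovery even when attackers can observe many ciphertexts.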

Firewalls, virus scanners, and two-factor authentication are all additional security precautions that may be used alongside encryption. While these safeguards do much to stave off cyber attacks, they are not entirely impregnable, and data breaches are still possible.

The Future of Privacy and Security in a Technologically Advanced World

There's little doubt that concerns about privacy and security will persist even as technology improves. There must be strict safeguards to secure people's private information as more and more of it is transferred and kept digitally. To achieve this goal, it may be necessary to implement novel technologies and heightened levels of protection, and to revise the rules and regulations governing the collection and storage of private information.

Individuals and businesses are understandably concerned about the security and privacy consequences of widespread technological use and data collecting. There are numerous obstacles to overcome in a society where technology plays an increasingly important role, from acquiring and using personal data to the risk of cyber-attacks and data breaches. Companies and governments must keep spending money on security measures and working to educate people about the significance of privacy and security if personal data is to remain safe.

In conclusion, technology has profoundly impacted virtually every aspect of our lives, including society and culture, ethics, work, education, personal relationships, and security and privacy. The rise of artificial intelligence and machine learning has presented new ethical considerations, while automation is transforming the future of work. 

In education, technology has revolutionized the way we learn and access information. At the same time, our dependence on technology has brought new challenges in terms of personal relationships, communication, security, and privacy.

Jenni.ai is an AI tool that can help students write essays easily and quickly. Whether you're looking for example essays on any of these topics or are seeking assistance in writing your essay, Jenni.ai offers a convenient solution. Sign up for a free trial today and experience the benefits of AI-powered writing assistance for yourself.


AI Has Lost Its Magic

That’s how you know it’s taking over.


I frequently ask ChatGPT to write poems in the style of the American modernist poet Hart Crane. It does an admirable job of delivering. But the other day, when I instructed the software to give the Crane treatment to a plate of ice-cream sandwiches, I felt bored before I even saw the answer. “The oozing cream, like time, escapes our grasp, / Each moment slipping with a silent gasp.” This was fine. It was competent. I read the poem, Slacked part of it to a colleague, and closed the window. Whatever.

A year and a half has passed since generative AI captured the public imagination and my own. For many months, the fees I paid to ChatGPT and Midjourney felt like money better spent than the cost of my Netflix subscription, even just for entertainment. I’d sit on the couch and generate cheeseburger kaiju while Bridgerton played, unwatched, before me. But now that time is over. The torpor that I felt in asking for Hart Crane’s ode to an ice-cream sandwich seemed to mark the end point of a brief, glorious phase in the history of technology. Generative AI appeared as if from nowhere, bringing magic, both light and dark. If the curtain on that show has now been drawn, it’s not because AI turned out to be a flop. Just the opposite: The tools that it enables have only slipped into the background, from where they will exert their greatest influence.

Looking back at my ChatGPT history, I used to ask for Hart Crane–ice-cream stuff all the time. An Emily Dickinson poem about Sizzler (“In Sizzler’s embrace, we find our space / Where simple joys and flavors interlace”). Edna St. Vincent Millay on Beverly Hills, 90210 (“In sun-kissed land where palm trees sway / Jeans of stone-wash in a bygone day”). Biz Markie and then Eazy-E verses about the (real!) Snoop Dogg cereal Frosted Drizzlerz. A blurb about Rainbow Brite in the style of the philosopher Jacques Derrida. I asked for these things, at first, just to see what each model was capable of doing, to explore how it worked. I found that AI had the uncanny ability to blend concepts both precisely and creatively.


Last autumn, I wrote in The Atlantic that, at its best, generative AI could be used as a tool to supercharge your imagination. I’d been using DALL-E to give a real-ish form to almost any notion that popped into my head. One weekend, I spent most of a family outing stealing moments to build out the fictional, 120-year history of a pear-flavored French soft drink called P’Poire. Then there was Trotter, a cigarette made by and for pigs. I’ve spent so many hours on these sideline pranks that the products now feel real to me. They are real, at least in the way that any fiction—Popeye, Harry Potter—can be real.

But slowly, invisibly, the work of really using AI took over. While researching a story about lemon-lime flavor, I asked ChatGPT to give me an overview of the U.S. market for beverages with this ingredient, but had to do my own research to confirm the facts. In the course of working out new programs of study for my university department, I had the software assess and devise possible names. Neither task produced a fraction of the delight that I’d once derived from just a single AI-generated phrase, “jeans of stone-wash.” But at least the latter gave me what I needed at the time: a workable mediocrity.

I still found some opportunities to supercharge my imagination, but those became less frequent over time. In their place, I assigned AI the mule-worthy burden of mere tasks. Faced with the question of which wait-listed students to admit into an overenrolled computer-science class, I used ChatGPT to apply the relevant and complicated criteria. (If a parent or my provost is reading this, I did not send any student’s actual name or personal data to OpenAI.) In need of a project website on short order, I had the service create one far more quickly than I could have by hand. When I wanted to analyze the full corpus of Wordle solutions for a recent story on the New York Times games library, I asked for help from OpenAI’s Data Analyst. Nobody had promised me any of this, so having something that kind of worked felt like a gift.

The more imaginative uses of AI were always bound to buckle under this actual utility. A year ago, university professors like me were already fretting over the technology’s practical consequences, and we spent many weeks debating whether and how universities could control the use of large language models in assignments. Indeed, for students, generative AI seemed obviously and immediately productive: Right away, it could help them write college essays and do homework. (Teachers found lots of ways to use it, too.) The applications seemed to grow and grow. In November, OpenAI CEO Sam Altman said the ChatGPT service had 100 million weekly users. In January, the job-ratings website Glassdoor put out a survey finding that 62 percent of professionals, including 77 percent of those in marketing, were using ChatGPT at work. And last month, Pew Research Center reported that almost half of American adults believe they interact with AI, in one form or another, several times a week at least.


The rapid adoption was in part a function of AI’s novelty—without initial interest, nothing can catch on. But that user growth could be sustained only by the technology’s transition into something unexciting. Inventions become important not when they offer a glimpse of some possible future—as, say, the Apple Vision Pro does right now—but when they’re able to recede into the background, to become mundane. Of course you have a smartphone. Of course you have a refrigerator, a television, a microwave, an automobile. These technologies are not—which is to say, they are no longer—delightful.

Not all inventions lose their shimmer right away, but the ones that change the world won’t take long to seem humdrum. I already miss the feeling of enchantment that came from making new Hart Crane poems or pear-soft-drink ad campaigns. I miss the joy of seeing any imaginable idea brought instantly to life. But whatever nostalgia one might have for the early days of ChatGPT and DALL-E will be no less fleeting in the end. First the magic fades, then the nostalgia. This is what happens to a technology that’s taking over. This is a measure of its power.


Amanda Hoover

Students Are Likely Writing Millions of Papers With AI


Students have submitted more than 22 million papers that may have used generative AI in the past year, new data released by plagiarism detection company Turnitin shows.

A year ago, Turnitin rolled out an AI writing detection tool that was trained on its trove of papers written by students as well as other AI-generated texts. Since then, more than 200 million papers have been reviewed by the detector, predominantly written by high school and college students. Turnitin found that 11 percent of those papers may contain AI-written language in at least 20 percent of their content, and 3 percent of the total papers reviewed were flagged for having 80 percent or more AI writing. (Turnitin is owned by Advance, which also owns Condé Nast, publisher of WIRED.) Turnitin says its detector has a false positive rate of less than 1 percent when analyzing full documents.
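As a quick back-of-the-envelope check, the percentages line up with the headline number (taking the round 200 million total at face value):

```python
# Back-of-the-envelope check of the Turnitin figures cited above.
# The 200 million total is the approximate figure from the article.
total_reviewed = 200_000_000

flagged_some_ai = total_reviewed * 0.11    # papers with >= 20% AI-written content
flagged_mostly_ai = total_reviewed * 0.03  # papers with >= 80% AI-written content

print(f"{flagged_some_ai:,.0f}")    # ~22,000,000, matching the "22 million" headline
print(f"{flagged_mostly_ai:,.0f}")  # ~6,000,000
```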

ChatGPT’s launch was met with knee-jerk fears that the English class essay would die. The chatbot can synthesize information and distill it near-instantly—but that doesn’t mean it always gets it right. Generative AI has been known to hallucinate, creating its own facts and citing academic references that don’t actually exist. Generative AI chatbots have also been caught spitting out biased text on gender and race. Despite those flaws, students have used chatbots for research, organizing ideas, and as a ghostwriter. Traces of chatbots have even been found in peer-reviewed, published academic writing.

Teachers understandably want to hold students accountable for using generative AI without permission or disclosure. But that requires a reliable way to prove AI was used in a given assignment. Instructors have at times tried to find their own solutions to detecting AI in writing, using messy, untested methods to enforce rules and distressing students in the process. Further complicating the issue, some teachers are even using generative AI in their grading processes.

Detecting the use of gen AI is tricky. It’s not as easy as flagging plagiarism, because generated text is still original text. Plus, there’s nuance to how students use gen AI; some may ask chatbots to write their papers for them in large chunks or in full, while others may use the tools as an aid or a brainstorm partner.

ChatGPT and similar large language models aren't the only temptation for students. So-called word spinners are another type of AI software that rewrites text, and may make it less obvious to a teacher that work was plagiarized or generated by AI. Turnitin’s AI detector has also been updated to detect word spinners, says Annie Chechitelli, the company’s chief product officer. It can also flag work that was rewritten by services like spell checker Grammarly, which now has its own generative AI tool. As familiar software increasingly adds generative AI components, what students can and can’t use becomes more muddled.

Detection tools themselves have a risk of bias. English language learners may be more likely to set them off; a 2023 study found a 61.3 percent false positive rate when evaluating Test of English as a Foreign Language (TOEFL) exams with seven different AI detectors. The study did not examine Turnitin’s version. The company says it has trained its detector on writing from English language learners as well as native English speakers. A study published in October found that Turnitin was among the most accurate of 16 AI language detectors in a test that had the tool examine undergraduate papers and AI-generated papers.


Schools that use Turnitin had access to the AI detection software for a free pilot period, which ended at the start of this year. Chechitelli says a majority of the service’s clients have opted to purchase the AI detection. But the risks of false positives and bias against English learners have led some universities to ditch the tools for now. Montclair State University in New Jersey announced in November that it would pause use of Turnitin’s AI detector. Vanderbilt University and Northwestern University did the same last summer.

“This is hard. I understand why people want a tool,” says Emily Isaacs, executive director of the Office of Faculty Excellence at Montclair State. But Isaacs says the university is concerned about potentially biased results from AI detectors, as well as the fact that the tools can’t provide confirmation the way they can with plagiarism. Plus, Montclair State doesn’t want to put a blanket ban on AI, which will have some place in academia. With time and more trust in the tools, the policies could change. “It’s not a forever decision, it’s a now decision,” Isaacs says.

Chechitelli says the Turnitin tool shouldn’t be the only consideration in passing or failing a student. Instead, it’s a chance for teachers to start conversations with students that touch on all of the nuance in using generative AI. “People don’t really know where that line should be,” she says.

To fully appreciate AI expectations, look to the trillions being invested

The technology's potentially far-reaching impacts have spurred a race to shape its future development. Image: REUTERS/Aly Song

John Letzing


  • The public sector is increasingly getting involved in fostering and financing AI’s future.
  • Saudi Arabia, for instance, may establish a $40 billion AI initiative.
  • Such ‘industrial policy’ is generally applied to a country’s most vital interests.

In the last week of March, these two things happened in the world of artificial intelligence: news dropped of a planned AI supercomputer that would cost more than the annual GDP of Bulgaria, and an inveterate tech CEO publicly poked fun at an AI-powered toothbrush that sells for about $140.

But the AI “hype cycle” alluded to by the CEO doesn’t seem anywhere near its peak yet. And in any case, viewing this as just a typical financial bubble might not be an accurate way to frame things.

If, like another seasoned tech CEO, you think AI may be a more meaningful invention than fire, you’re likely in favor of keeping the funding floodgates open.

Seems you’re in luck.

As of last week, Amazon and Microsoft have reportedly committed at least a combined $15 billion to competing generative AI startups. The CEO of one of those startups may yet try to raise as much as $7 trillion more (that’s not a typo), to make the precious chips needed to train models for AI systems more abundant.

Venture capital investors have lavishly funded a pipeline of additional upstarts; in the process, eight of the most prominent were recently valued at an average of 83 times their projected annual revenue.

The public sector is also getting in on the action. Saudi Arabia was recently reported to be forming a $40 billion AI initiative, to invest in everything from chipmaking to data centers. It would be a singular vote of confidence in the technology from one of the world's biggest sovereign wealth funds.

The insatiable appetite for investing in AI is widely shared.

The overall amount of money funneled into private investments like venture capital deals may have slipped by 2022, but AI remains a high priority for governments around the world. When that gets translated into targeted legislation and funding, it tends to be dubbed “industrial policy.”

This public money can be spent in the form of a fund like Saudi Arabia’s, or via the $30 billion in subsidies the US government is using to attract makers of AI chips (the EU has a similar €43 billion chip program at least partly focused on AI).

Spending can also come in the shape of a €540 million supercomputer for training AI models, like the one being financed by the EU, France, and the Netherlands, or the £900 million UK version intended to help that country build its own “BritGPT.” India’s government has a multi-pronged “AI mission” funded with the equivalent of $1.2 billion. China’s spending on AI is projected to surpass $38 billion by 2027.

In the same way that predictions can shape reality , so can mountains of money.

Now, it’s a question of who will do the shaping.

Trying to (mostly) pick winners

Aggressive industrial policy in China is woven into the fabric of its economy. In other parts of the world, it’s only recently become less of a dirty word.

That’s true even in the US, much to the chagrin of some people. It’s a place where the Horatio Alger myth persists, but it’s also one where directing policy and taxpayer money at specific sectors might now be one of the few things the country’s two main political parties can agree on.

The US has actually relied on industrial policy throughout its history a lot more than one might assume. Its public investment in the essential elements of what became the internet, for example, has probably paid for itself a few times over by now. And people still make daily use of infrastructure built as part of the government-funded New Deal established nearly a century ago. But it hasn't all been smooth sailing.

“Economists in general don’t like industrial policy because they say, well, markets will figure it out,” the economist Laura D’Andrea Tyson said during an “Industrial Policy 2.0” panel discussion at Davos earlier this year. But, she added, “markets don’t pay attention to national security issues.”

And there’s the rub.

Because the potential impacts of AI are so far-reaching, no one wants to be faced with the grave implications of failing to master it and actively participate in molding its future development.

Some places have more homegrown AI investors and startups than others.

Others on the Davos panel were less upbeat. The most generous thing the economist Adam Posen had to offer about industrial policy: “Sometimes it’s coincided with success, and sometimes not.” Putting up money is fine. But people get uncomfortable with the idea of governments propping up “winners” plugged into domestic politics, and shunning better-qualified “losers” without those connections.

Still, industrial policy likely helped spark the Industrial Revolution – so it might be logical for it to play a bigger role in the Fourth Industrial Revolution. For one thing, it’s a means for countries lacking abundant homegrown venture investors and startups to level the playing field.

In a development that risks inducing symptoms of AI fatigue, those venture investors are now not just heavily backing AI startups, they’re also using AI to decide which startups to back. Another potentially off-putting trend: AI’s insatiable appetite for energy. Not to mention, we’re also literally running out of original content to feed AI systems.

It's natural to want to poke holes in something suddenly so overwhelming. But it’s also true that the collective hive mind isn’t always great at gauging future value. (As a cub reporter I was sent into the streets of New York to ask people if they’d buy then-brand-new shares of Google at their IPO price, which would’ve turned each $1 into $30 over the next 15 years, and nearly everyone said “no.”)
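For what it's worth, the anecdote's numbers imply a hefty compound rate; a small sketch using only the figures stated above ($1 to $30 over 15 years):

```python
# Compound annual growth rate implied by turning each $1 into $30
# over 15 years (the figures from the Google IPO anecdote above).
multiple = 30
years = 15

cagr = multiple ** (1 / years) - 1
print(f"{cagr:.1%} per year")  # roughly 25% annualized
```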

It might all boil down to the nature of expectations. Is AI really key to revolutionizing research and improving general well-being, or merely a means to more efficiently perform menial tasks and run content mills while pocketing a lot of money along the way? If we truly believe it’s the former, clinging to orthodoxy about investment strategy may not be the best way forward.

The Saudi Minister of Industry and Mineral Resources also participated in that Davos panel on industrial policy. He had a succinct summary: it’s simply a way to stimulate “some of the things we want to happen faster.”

More reading on AI and industrial policy

For more context, here are links to further reading from the World Economic Forum's Strategic Intelligence platform:

  • Industrial policy in Europe may be entering a “second golden age,” as governments’ confidence in the ability of private enterprise to spur new markets wavers, according to this piece – but progress should not come at a steep social cost. ( Social Europe )
  • A thing that made us unique as humans, according to this study, is the ability to perform a new task after receiving verbal instructions just once. That is, it used to make us unique – AI-powered robots seem to be capable of that now, too. ( Science Daily )
  • “We also didn’t get into this industry without the fundamental belief that the future can be made better.” Two venture investors describe ways to improve the lot of healthcare workers with AI (those workers just have to be okay with the idea of being more “fungible”). ( Stanford Social Innovation Review )
  • Industrial policy hasn’t just re-emerged as a viable policy option, it’s also become the subject of a growing amount of research, according to this analysis – which may help improve its success rate in the future. ( CEPR )
  • About 95% of the solar panels used in the EU come from China, according to this piece, which argues for a smarter industrial policy strategy to reduce that dependence. ( Bruegel )
  • Governments are investing in AI, and they’re also using it. This survey digs into public perception of the use of face-recognition technology in particular. ( RAND Corporation )
  • When ChatGPT took the world by storm last year it caught many government officials by surprise, according to this piece, which details how the US government will now require notification whenever a company starts training high-powered AI algorithms. ( Wired )

On the Strategic Intelligence platform, you can find feeds of expert analysis related to Artificial Intelligence , Capital Markets , the Fourth Industrial Revolution and hundreds of additional topics. You’ll need to register to view.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.


AI is pervasive. Here’s when we’ll see its real economic benefits materialize

Many are betting on artificial intelligence to solve the decades-old problem of productivity stagnation.

This year marks a turning point for artificial intelligence (AI). The EU parliament has voted to approve the EU AI Act after three years of negotiations, moving the conversation around responsible AI from theory to reality and setting a new global standard for AI policy.

IBM welcomed this legislation and its balanced, risk-based approach to regulating AI. Why? Because history has shown us time and again that with every new disruptive technology, we must balance that disruption with responsibility.

We’ve known for years that AI will touch every aspect of our lives and work, and there’s been much attention paid to the incredible potential of this technology to solve our most pressing problems. But not all of AI’s impact will be flashy and newsworthy–its success will also lie in the day-to-day ways that it will help humans be more productive.  

The productivity and growth conundrum

Right now, technology is advancing faster than ever, but productivity is not. A recent McKinsey report shows labor productivity in the U.S. has grown at a lackluster 1.4%. The findings show that “regaining historical rates of productivity growth would add $10 trillion to U.S. GDP–a boost needed to confront workforce shortages, debt, inflation, and the energy transition.” A similar productivity slowdown is happening globally, despite the technology boom of the past 15 years.

Anthropologist Jason Hickel said, “nearly every government in the world, rich and poor alike, is focused single-mindedly on GDP (Gross Domestic Product) growth. This is no longer a matter of choice.”

The formula for GDP growth has historically been population growth + productivity growth + debt growth. Two-thirds of this formula, population and debt growth, are unlikely to move in the near future. Aging populations and a shrinking workforce could lead to significant talent gaps, especially for highly skilled and educated workers, even as skills-first training and hiring continue to ramp up. Debt access is tightening as 15 years of the lowest interest rates in modern history come to an end.
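The additive decomposition above can be illustrated with a toy calculation. The rates below are hypothetical, chosen only to show that when the population and debt contributions are flat, productivity is the only remaining lever:

```python
# Hedged sketch of the additive GDP-growth decomposition described above.
# All rates are hypothetical illustrations, not forecasts.

def gdp_growth(population_growth: float, productivity_growth: float, debt_growth: float) -> float:
    """Approximate GDP growth as the sum of its three historical drivers."""
    return population_growth + productivity_growth + debt_growth

# Population and debt contributions held flat; only productivity changes.
baseline = gdp_growth(0.004, 0.014, 0.002)  # ~1.4% productivity growth
boosted = gdp_growth(0.004, 0.022, 0.002)   # productivity regains historical rates

print(f"baseline GDP growth: {baseline:.1%}")
print(f"boosted GDP growth:  {boosted:.1%}")
```

With the other two terms fixed, every point of productivity growth passes straight through to the GDP growth figure.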

That leaves productivity gains as our main driver of GDP growth. The world needs increased productivity to drive financial success for companies, as well as economic growth for countries.

AI is the answer to the productivity problem–but only if it can be developed and deployed responsibly and with clear purpose.

Reaping the benefits of responsible AI

Gartner estimates $5 trillion in technology spending in 2024, growing to $6.5 trillion by 2026. This will be the ultimate catalyst for the next stage of growth in the global economy.

However, one in five companies surveyed for the 2023 IBM Global AI Adoption Index say they don’t yet plan to use AI across their business. Cited among their concerns: limited AI skills and expertise, too much data complexity, and ethical concerns. This is the status quo component in our current paradox. But responsibility and disruption can–and must–co-exist.

As governments focus on smart AI regulation, business leaders must focus on accelerating responsible AI adoption. I meet with clients daily–and I’ve seen four priorities emerge in the path to adoption: Model choice, governance, skills, and open AI.

Providing model choice is critical to accelerating AI adoption. Different models will be better at some tasks than at others. The best model depends on the industry, domain, use case, and model size, meaning most companies will use many smaller models rather than one large model.

And with the right governance, companies can be assured that their workflows comply with existing and upcoming government regulations and are free of bias.

In today’s economy, jobs require skills, not just degrees. Technology is evolving faster than many can follow, creating a gap between demand and skills. Leaders must now prioritize skills-first hiring and training and upskilling the existing workforce to thrive in the AI era.

Finally, leveraging open-source models and proprietary models, with well-documented data sources, is the best way to achieve the transparency needed to advance responsible AI. Open is good for diversity because it makes it much easier to identify bias, for sovereignty because all the data sources are easily identifiable, and for education because it naturally lends itself to collaboration across the community.

AI can drive a level of GDP growth that none of us have ever seen in our lifetimes. It may mean the evolution of jobs in the near term. But just as with any other technological revolution, as upskilling occurs, there will eventually be new jobs, markets, and industries.

For business and government, 2024 must be the year of adoption, where we move from the experimentation phase to the deployment phase. With the right vision and approach to responsible AI adoption, we will begin to see widespread economic benefits of this technology in the next three years, with many more years of sustained growth and prosperity to come.

Rob Thomas is SVP of Software and Chief Commercial Officer at IBM.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of  Fortune .


April 2024 | Volume 109, I...

Artificial Intelligence: The Future Is What We Make It

Vivek Singh, Divya Kewalramani, MD, Juan C. Paramo, MD, FACS, Tyler J. Loftus, MD, FACS, Daniel A. Hashimoto, MD, MSTR

April 10, 2024

Vivek Singh, Dr. Daniel Hashimoto

As members of the ACS Health Information Technology (HIT) Committee, we appreciate Dr. Elsey's insights and share his concerns regarding the potential risks associated with artificial intelligence (AI) in healthcare. The proper adoption and risk mitigation of these technologies will indeed require a concerted effort from multiple stakeholders, including governmental agencies, academic institutions, and professional societies like the ACS.

Partial engagement or poorly executed approaches could result in missed opportunities and potential harm to both the surgical profession and society. Trustworthiness of AI technologies, which relies on factors such as digital literacy and AI literacy among healthcare professionals and patients alike, is a key area of concern. The fear surrounding AI is reasonable, and this unease can be mitigated by educating surgeons on AI principles and fostering a deeper understanding of these technologies within our community.

Today, AI is often viewed as possessing seemingly limitless potential. AI advocates and skeptics highlight this in their discussions of the risks and benefits that may result from widespread AI adoption. However, expectations surrounding these risks and benefits may be tempered when one considers the significant limitations of current AI technologies.

“Enchanted determinism,” a cognitive bias that arises when a lack of understanding of technical principles leads one to view the technology as magical, has certainly impacted perceptions of AI applications for healthcare.1 Painting AI as limitless stimulates creativity around potential use cases but also risks distracting from immediate problems and issues that plague healthcare AI, such as lack of quality data, inequalities in access to healthcare that bias data, unequal access to datasets, and inappropriate or misleading use of metrics to measure algorithmic performance.2 The “last mile” problem in healthcare AI will be difficult to overcome and will limit meaningful applications of AI unless clinicians, supported by our representative societies such as the ACS, become AI literate.3

Effective and safe implementation of AI technologies into clinical workflows will require tremendous effort among all stakeholders. The most successful translational advances in healthcare AI have combined the expertise of clinicians and computer scientists.4 For these types of collaborations to occur, clinicians must possess the ability to engage in a meaningful dialogue with the developers of AI tools.

Like the wave of digital health technologies that came before it, AI demands its own unique set of competencies, termed AI literacy. Basic skills in areas such as statistics, data science, and computer science are foundational to the ways AI tools function; however, clinicians have traditionally demonstrated low performance in these domains.5-7 Moreover, AI literacy has never been objectively measured in clinicians.

Thus, there is an urgent need for educational efforts aimed at closing this AI literacy gap in clinicians. As Dr. Elsey alludes, there are risks associated with AI use in high-stakes settings. To mitigate these risks, surgeons must know, understand, evaluate, and contribute to the development of AI tools.

The ACS is uniquely suited to support the development of AI literacy initiatives among surgeons. The Journal of the American College of Surgeons has already published a great deal of scientific research on applications of AI in surgical settings, and the ACS has released an online course for surgeons to learn more about AI and data science.8 In addition, the ACS has spent considerable effort gathering surgeon-scientists with expertise in AI to lead initiatives like the HIT Committee, which includes an AI subcommittee.

As AI begins to integrate itself into clinical workflows, the College can help develop AI literacy among surgeons in several ways. First, the ACS can function as an educational body and house materials related to foundational principles in AI, data science, statistics, and other domains. While AI itself is an expansive (and still rapidly expanding) field, applications in surgery are relatively nascent. As surgical AI methodology becomes more common in surgical research, it will become necessary for surgeons to understand the methods used in these papers to offer substantial critique.2,9,10 Moreover, the current limitations of AI models are still widely misunderstood by the general public, perpetuating “enchanted determinism” and perceptions of AI’s applications as limitless. Mitigating this bias will be crucial in the appraisal of AI research and technologies directed toward surgeons.

The ACS also can serve as a forum to engage in a meaningful dialogue about the future development of surgical AI, including concerns surrounding trust, privacy, and equity. As research and commercial interest grow, AI will become more of a part of surgeons’ daily lives. Likewise, there should be ample opportunities for surgeons to share their experiences with AI technologies, whether positive or negative.

In the scientific literature, AI models are often evaluated by certain performance metrics that may or may not reflect the stated goals of their creators.2 However, in clinical settings, models also will be evaluated by way of user experience. Whether surgeons find AI technologies acceptable for their stated uses will be a crucial component of their translational success. As such, user experience considerations should be integrated into the process of model development using insights gained from surgeons.

Finally, the ACS can establish professional standards for the appropriate use of AI applications in surgical settings and communicate this to the public. The landscape surrounding AI is in a state of flux, but the College can continue its role in advocating for our patients by promoting laws and regulations that protect surgeons and patients while enabling the research and development necessary to drive surgical innovation.

The thoughts and opinions expressed in this viewpoint article are solely those of the authors and do not necessarily reflect those of the ACS.

Vivek Singh is a medical student at Boston University Chobanian & Avedisian School of Medicine and a research fellow at the Penn Computer Assisted Surgery and Outcomes (PCASO) Laboratory at the Perelman School of Medicine at the University of Pennsylvania in Philadelphia.

Dr. Daniel Hashimoto is an assistant professor of surgery and computer and information science at the University of Pennsylvania and director of the PCASO Laboratory. He also is Vice-Chair of Education and Research for the ACS Health Information Technology Committee.

  • Cadario R, Longoni C, Morewedge CK. Understanding, explaining, and utilizing medical artificial intelligence. Nat Hum Behav. 2021;5:1636–1642.
  • Reinke A, Tizabi MD, Baumgartner M, et al. Understanding metric-related pitfalls in image analysis validation. Nat Methods. 2024;21:182–194.
  • Cabitza F, Campagner A, Balsano C. Bridging the “last mile” gap between AI implementation and operation: “Data awareness” that matters. Ann Transl Med. 2020;8:501.
  • Maier-Hein L, Eisenmann M, Sarikaya D, et al. Surgical data science—from concepts toward clinical translation. Med Image Anal. 2022;76:102306.
  • Windish DM, Huot SJ, Green ML. Medicine residents’ understanding of the biostatistics and results in the medical literature. JAMA. 2007;298:1010–1022.
  • Susarla SM, Redett RJ. Plastic surgery residents’ attitudes and understanding of biostatistics: A pilot study. J Surg Educ 2014;71:574–579.
  • Anderson BL, Williams S, Schulkin J. Statistical literacy of obstetrics-gynecology residents. J Grad Med Educ 2013;5:272–275.
  • Artificial Intelligence and Machine Learning: Transforming Surgical Practice and Education. ACS. Available at: https://www.facs.org/for-medical-professionals/education/programs/artificial-intelligence-and-machine-learning-transforming-surgical-practice-and-education/. Accessed March 22, 2024.
  • Hashimoto DA, Varas J, Schwartz TA. Practical guide to machine learning and artificial intelligence in surgical education research. JAMA Surg. Published online January 3, 2024. doi:10.1001/jamasurg.2023.6687.
  • Maier-Hein L, Reinke A, Godau P, et al. Metrics reloaded: Recommendations for image analysis validation. Nat Methods. 2024;21:195–212.


  • Research article
  • Open access
  • Published: 12 April 2024

Feedback sources in essay writing: peer-generated or AI-generated feedback?

  • Seyyed Kazem Banihashem,
  • Nafiseh Taghizadeh Kerman,
  • Omid Noroozi,
  • Jewoong Moon &
  • Hendrik Drachsler

International Journal of Educational Technology in Higher Education, volume 21, Article number: 23 (2024)


Peer feedback is an effective learning strategy, especially in large classes where teachers face high workloads. However, for complex tasks such as writing an argumentative essay, peers may not provide high-quality feedback without support, since doing so requires a high level of cognitive processing, critical thinking skills, and a deep understanding of the subject. With the promising developments in Artificial Intelligence (AI), particularly since the emergence of ChatGPT, there is a global debate about whether AI tools can serve as a new source of feedback for complex tasks. The answer to this question is not yet clear, as studies are limited and our understanding remains constrained. In this study, we used ChatGPT as a source of feedback on students’ argumentative essay writing tasks and compared the quality of ChatGPT-generated feedback with that of peer feedback. The participant pool consisted of 74 graduate students from a Dutch university. The study unfolded in two phases: first, essay data were collected as students composed essays on one of the given topics; subsequently, peer feedback and ChatGPT-generated feedback data were collected by engaging peers in a feedback process and using ChatGPT as a feedback source. Two coding schemes, one for essay analysis and one for feedback analysis, were used to measure the quality of the essays and the feedback. A MANOVA was then employed to determine any distinctions between the feedback generated by peers and by ChatGPT. Additionally, Spearman’s correlation was used to explore potential links between essay quality and the feedback generated by peers and ChatGPT. The results showed a significant difference between feedback generated by ChatGPT and by peers: while ChatGPT provided more descriptive feedback, including information about how the essay is written, peers provided feedback that identified problems in the essay. The overarching pattern of results suggests a potentially complementary role for ChatGPT and students in the feedback process. Regarding the relationship between essay quality and the quality of the feedback provided by ChatGPT and peers, we found no overall significant relationship, implying that essay quality affects neither ChatGPT nor peer feedback quality. These findings shed light on the prospective use of ChatGPT as a feedback source, particularly for complex tasks such as argumentative essay writing. We discuss the findings and delve into the implications for future research and practical applications in educational contexts.
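As a concrete illustration of the correlation analysis mentioned above, here is a minimal, dependency-free sketch of Spearman's rank correlation (Pearson correlation computed on ranks). The scores are hypothetical, not the study's data; in practice one would use a statistics package such as SciPy's `spearmanr`:

```python
# Minimal sketch of Spearman's rho: rank both score lists (average
# ranks for ties), then compute the Pearson correlation of the ranks.

def ranks(xs):
    """Return 1-based average ranks of xs, handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # average position of the tied block
        for k in range(i, j + 1):
            r[order[k]] = avg_rank
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho between two equal-length score lists."""
    rx, ry = ranks(x), ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical rubric scores for six essays and the feedback they received.
essay_quality = [3.2, 4.1, 2.8, 3.9, 4.5, 3.0]
feedback_quality = [2.9, 3.8, 3.1, 3.5, 4.2, 2.7]
print(f"Spearman rho = {spearman(essay_quality, feedback_quality):.2f}")
```

Because the statistic operates on ranks rather than raw scores, it captures monotonic association, which suits ordinal rubric scores like those produced by coding schemes.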

Introduction

Feedback is acknowledged as one of the most crucial tools for enhancing learning (Banihashem et al., 2022). The general and well-accepted definition of feedback conceptualizes it as information provided by an agent (e.g., teacher, peer, self, AI, technology) regarding aspects of one’s performance or understanding (e.g., Hattie & Timperley, 2007). Feedback serves to heighten students’ self-awareness concerning their strengths and the areas warranting improvement, by providing the actionable steps required to enhance performance (Ramson, 2003). The literature abounds with studies that illuminate the positive impact of feedback on diverse dimensions of students’ learning journey, including increasing motivation (Amiryousefi & Geld, 2021), fostering active engagement (Zhang & Hyland, 2022), promoting self-regulation and metacognitive skills (Callender et al., 2016; Labuhn et al., 2010), and enriching the depth of learning outcomes (Gan et al., 2021).

Normally, teachers have primarily assumed the role of delivering feedback, providing insights into students’ performance on specific tasks or their grasp of particular subjects (Konold et al., 2004 ). This responsibility has naturally fallen upon teachers owing to their expertise in the subject matter and their competence to offer constructive input (Diezmann & Watters, 2015 ; Holt-Reynolds, 1999 ; Valero Haro et al., 2023 ). However, teachers’ role as feedback providers has been challenged in recent years as we have witnessed a growth in class sizes due to the rapid advances in technology and the widespread use of digital technologies that resulted in flexible and accessible education (Shi et al., 2019 ). The growth in class sizes has translated into an increased workload for teachers, leading to a pertinent predicament. This situation has directly impacted their capacity to provide personalized and timely feedback to each student, a capability that has encountered limitations (Er et al., 2021 ).

In response to this challenge, various solutions have emerged, among which peer feedback has arisen as a promising alternative instructional approach (Er et al., 2021 ; Gao et al., 2024 ; Noroozi et al., 2023 ; Kerman et al., 2024 ). Peer feedback entails a process wherein students assume the role of feedback providers instead of teachers (Liu & Carless, 2006 ). Involving students in feedback can add value to education in several ways. First and foremost, research indicates that students delve into deeper and more effective learning when they take on the role of assessors, critically evaluating and analyzing their peers’ assignments (Gielen & De Wever, 2015 ; Li et al., 2010 ). Moreover, involving students in the feedback process can augment their self-regulatory awareness, active engagement, and motivation for learning (e.g., Arguedas et al., 2016 ). Lastly, the incorporation of peer feedback not only holds the potential to significantly alleviate teachers’ workload by shifting their responsibilities from feedback provision to the facilitation of peer feedback processes but also nurtures a dynamic learning environment wherein students are actively immersed in the learning journey (e.g., Valero Haro et al., 2023 ).

Despite the advantages of peer feedback, furnishing high-quality feedback to peers remains a challenge. Several factors contribute to this challenge. Primarily, generating effective feedback necessitates a solid understanding of feedback principles, an element that peers often lack (Latifi et al., 2023 ; Noroozi et al., 2016 ). Moreover, offering high-quality feedback is inherently a complex task, demanding substantial cognitive processing to meticulously evaluate peers’ assignments, identify issues, and propose constructive remedies (King, 2002 ; Noroozi et al., 2022 ). Furthermore, the provision of valuable feedback calls for a significant level of domain-specific expertise, which is not consistently possessed by students (Alqassab et al., 2018 ; Kerman et al., 2022 ).

In recent times, advancements in technology, coupled with the emergence of fields like Learning Analytics (LA), have presented promising avenues to elevate feedback practices through the facilitation of scalable, timely, and personalized feedback (Banihashem et al., 2023 ; Deeva et al., 2021 ; Drachsler, 2023 ; Drachsler & Kalz, 2016 ; Pardo et al., 2019 ; Zawacki-Richter et al., 2019 ; Rüdian et al., 2020 ). Yet, a striking stride forward in the field of educational technology has been the advent of a novel Artificial Intelligence (AI) tool known as “ChatGPT,” which has sparked a global discourse on its potential to significantly impact the current education system (Ray, 2023 ). This tool’s introduction has initiated discussions on the considerable ways AI can support educational endeavors (Bond et al., 2024 ; Darvishi et al., 2024 ).

In the context of feedback, AI-powered ChatGPT introduces what is referred to as AI-generated feedback (Farrokhnia et al., 2023). While the literature suggests that ChatGPT has the potential to facilitate feedback practices (Dai et al., 2023; Katz et al., 2023), this literature is limited and mostly not empirical, so our comprehension of ChatGPT’s capabilities in this regard remains restricted. We therefore lack a comprehensive understanding of how ChatGPT can effectively support feedback practices and of the degree to which it can improve the timeliness, impact, and personalization of feedback.

More importantly, considering the challenges we raised for peer feedback, the question is whether AI-generated feedback, and more specifically feedback provided by ChatGPT, has the potential to be of high quality. There is a scarcity of knowledge and a research gap regarding the extent to which AI tools, specifically ChatGPT, can enhance feedback quality compared with traditional peer feedback. Hence, our research aims to investigate the quality of feedback generated by ChatGPT within the context of essay writing and to juxtapose it with the quality of feedback generated by students.

This study carries the potential to make a substantial contribution to the existing body of recent literature on the potential of AI and in particular ChatGPT in education. It can cast a spotlight on the quality of AI-generated feedback in contrast to peer-generated feedback, while also showcasing the viability of AI tools like ChatGPT as effective automated feedback mechanisms. Furthermore, the outcomes of this study could offer insights into mitigating the feedback-related workload experienced by teachers through the intelligent utilization of AI tools (e.g., Banihashem et al., 2022 ; Er et al., 2021 ; Pardo et al., 2019 ).

However, there might be an argument regarding the rationale for conducting this study within the specific context of essay writing. Addressing this potential query, it is crucial to highlight that essay writing stands as one of the most prevalent yet complex tasks for students (Liunokas, 2020). The task is not without its challenges, as evidenced by the extensive body of literature indicating that students often struggle to meet desired standards in their essay composition (e.g., Bulqiyah et al., 2021; Noroozi et al., 2016, 2022; Latifi et al., 2023).

Furthermore, teachers frequently express dissatisfaction with the depth and overall quality of students’ essay writing (Latifi et al., 2023). Often, these teachers lament that their feedback on essays remains superficial due to the substantial time and effort required for critical assessment and individualized feedback provision (Noroozi et al., 2016, 2022). Regrettably, these constraints prevent them from delving deeper into the evaluation process (Kerman et al., 2022).

Hence, directing attention towards the comparison of peer-generated feedback quality and AI-generated feedback quality within the realm of essay writing bestows substantial value upon both research and practical application. This study enriches the academic discourse and informs practical approaches by delivering insights into the adequacy of feedback quality offered by both peers and AI for the domain of essay writing. This investigation serves as a critical step in determining whether the feedback imparted by peers and AI holds the necessary caliber to enhance the craft of essay writing.

The ramifications of addressing this query are noteworthy. First, it stands to significantly alleviate the workload carried by teachers in the process of essay evaluation. By ascertaining the viability of feedback from peers and AI, teachers can potentially reduce the time and effort expended in reviewing essays. Furthermore, this study has the potential to advance the quality of essay compositions. The collaboration between students providing feedback to peers and the integration of AI-powered feedback tools can foster an environment where essays are not only better evaluated but also refined in their content and structure. With this in mind, we aim to tackle the following key questions within the scope of this study:

RQ1. To what extent does the quality of peer-generated and ChatGPT-generated feedback differ in the context of essay writing?

RQ2. Does a relationship exist between the quality of essay writing performance and the quality of feedback generated by peers and ChatGPT?

Context and participants

This study was conducted in the 2022–2023 academic year at a Dutch university specializing in life sciences. In total, 74 graduate students from food sciences participated, of whom 77% were female (N = 57) and 23% were male (N = 17).

Study design and procedure

This empirical study was exploratory in nature and was conducted in two phases over two weeks. An online module called “Argumentative Essay Writing” (AEW) was designed for students to follow within the Brightspace platform. The purpose of the AEW module was to improve students’ essay writing skills by engaging them in a peer learning process in which students were invited to provide feedback on each other’s essays.

In week one (phase one), students were asked to write an essay on given topics. The essay topics were controversial and included “Scientists with affiliations to the food industry should abstain from participating in risk assessment processes”, “Powdered infant formula must adhere to strict sterility standards”, and “Safe food consumption is the responsibility of the consumer”. These controversial topics were directly related to the course content and students’ area of study. Students had one week to write their essays individually and submit them to the Brightspace platform.

In week two (phase two), students were randomly invited to provide two sets of written/asynchronous feedback on their peers’ submitted essays. We gave students a prompt to use when giving feedback (“Please provide feedback to your peer and explain the extent to which she/he has presented/elaborated/justified various elements of an argumentative essay. What are the problems and what are your suggestions to improve each element of the essay? Your feedback must be between 250 and 350 words”). To engage students in the online peer feedback activity, we used the FeedbackFruits app embedded in the Brightspace platform. FeedbackFruits is an external educational technology tool seamlessly integrated into Brightspace, aimed at enhancing student engagement via diverse peer collaboration approaches. Among its features are peer feedback, assignment evaluation, skill assessment, automated feedback, interactive videos, dynamic documents, discussion tasks, and engaging presentations (Noroozi et al., 2022). In this research, our focus was on the peer feedback feature of FeedbackFruits, which enables teachers to design tasks in which students offer feedback to their peers.

In addition, we used ChatGPT as another feedback source on peers’ essays. To be consistent with the criteria for peer feedback, we gave ChatGPT the same feedback prompt with a minor modification and asked it to give feedback on the peers’ essays (“Please read and provide feedback on the following essay and explain the extent to which she/he has presented/elaborated/justified various elements of an argumentative essay. What are the problems and what are your suggestions to improve each element of the essay? Your feedback must be between 250 and 350 words”).
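Because the prompt wording was held constant across all essays, the request given to ChatGPT for each essay amounts to a fixed template plus the essay text. The sketch below illustrates this in Python; the function name and separator are illustrative, not part of the study’s materials, and the actual interaction with ChatGPT’s interface is omitted.

```python
# Fixed study prompt, reused verbatim for every essay (from the text above).
FEEDBACK_PROMPT = (
    "Please read and provide feedback on the following essay and explain "
    "the extent to which she/he has presented/elaborated/justified various "
    "elements of an argumentative essay. What are the problems and what are "
    "your suggestions to improve each element of the essay? "
    "Your feedback must be between 250 and 350 words."
)

def build_request(essay_text: str) -> str:
    """Combine the fixed prompt with one student's essay.

    Illustrative helper: the separator and name are our own choices."""
    return f"{FEEDBACK_PROMPT}\n\n---\n{essay_text}"
```

Keeping the template in one constant is what guarantees every essay is judged against an identical instruction, mirroring the consistency requirement stated above.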

Following this design, we were able to collect students’ essay data, peer feedback data, and feedback data generated by ChatGPT. In the next step, we used two coding schemes to analyze the quality of the essays and feedback generated by peers and ChatGPT.

Measurements

Coding scheme to assess the quality of essay writing

In this study, a coding scheme proposed by Noroozi et al. (2016) was employed to assess students’ essay quality. This coding scheme was constructed based on the key components of high-quality essay composition, encompassing eight elements: introduction pertaining to the subject, taking a clear stance on the subject, presenting arguments in favor of the chosen position, providing justifications for the arguments supporting the position, counter-arguments, justifications for counter-arguments, responses to counter-arguments, and concluding with implications. Each element is assigned a score ranging from zero (indicating the lowest quality level) to three (representing the highest quality level). The scores across all elements were aggregated to determine the overall quality score of students’ written essays. Two experienced coders in the field of education collaborated to assess the quality of the written essays, and their agreement level was measured at 75% (Cohen’s Kappa = 0.75 [95% confidence interval: 0.70–0.81]; z = 25.05; p < 0.001), signifying a significant level of consensus between the coders.
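The reported inter-rater agreement (Cohen’s Kappa = 0.75) corrects raw percentage agreement for the agreement expected by chance. As a minimal illustration of the statistic itself (not of the authors’ analysis pipeline), Cohen’s kappa for two raters can be computed in pure Python:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical scores.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement
    and p_e is the agreement expected if the raters were independent."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items scored identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement from each rater's marginal category frequencies.
    ca, cb = Counter(rater_a), Counter(rater_b)
    p_e = sum(ca[c] * cb[c] for c in set(rater_a) | set(rater_b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)
```

For example, two raters who agree on three of four items, with balanced categories, obtain kappa = 0.5 even though raw agreement is 75%, which is why kappa is the more conservative reliability figure.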

Coding scheme to assess the quality of feedback generated by peers and ChatGPT

To assess the quality of feedback provided by both peers and ChatGPT, we employed a coding scheme developed by Noroozi et al. ( 2022 ). This coding framework dissects the characteristics of feedback, encompassing three key elements: the affective component, which considers the inclusion of emotional elements such as positive sentiments like praise or compliments, as well as negative emotions such as anger or disappointment; the cognitive component, which includes description (a concise summary of the essay), identification (pinpointing and specifying issues within the essay), and justification (providing explanations and justifications for the identified issues); and the constructive component, which involves offering recommendations, albeit not detailed action plans for further enhancements. Ratings within this coding framework range from zero, indicating poor quality, to two, signifying good quality. The cumulative scores were tallied to determine the overall quality of the feedback provided to the students. In this research, as each essay received feedback from both peers and ChatGPT, we calculated the average score from the two sets of feedback to establish the overall quality score for the feedback received, whether from peers or ChatGPT. The same two evaluators were involved in the assessment. The inter-rater reliability between the evaluators was determined to be 75% (Cohen’s Kappa = 0.75 [95% confidence interval: 0.66–0.84]; z = 17.52; p  < 0.001), showing a significant level of agreement between them.
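The scoring logic described above, with each feedback feature rated from 0 (poor) to 2 (good), totals summed, and the two feedback sets per essay averaged, can be sketched as follows. The five sub-element names are our reading of the coding scheme (affective, the three cognitive sub-elements, and constructive) and are illustrative:

```python
# Sub-elements of the feedback coding scheme (our labels, for illustration).
COMPONENTS = ("affective", "description", "identification",
              "justification", "constructive")

def feedback_total(ratings):
    """Total quality score for one piece of feedback.

    `ratings` maps each component to 0 (poor), 1, or 2 (good)."""
    assert set(ratings) == set(COMPONENTS)
    assert all(r in (0, 1, 2) for r in ratings.values())
    return sum(ratings.values())

def essay_feedback_quality(feedback_sets):
    """Average the totals of the feedback sets one essay received
    (in the study: two sets, from peers or from ChatGPT)."""
    totals = [feedback_total(r) for r in feedback_sets]
    return sum(totals) / len(totals)
```

So an essay whose two feedback sets score, say, 10 and 5 would receive an overall feedback-quality score of 7.5 for that source.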

The logic behind choosing these coding schemes was as follows: Firstly, from a theoretical standpoint, both coding schemes were developed based on robust and well-established theories. The coding scheme for evaluating essay quality draws on Toulmin’s argumentation model ( 1958 ), a respected framework for essay writing. It encompasses all elements essential for high-quality essay composition and aligns well with the structure of essays assigned in the chosen course for this study. Similarly, the feedback coding scheme is grounded in prominent works on identifying feedback features (e.g., Nelson & Schunn, 2009 ; Patchan et al., 2016 ; Wu & Schunn, 2020 ), enabling the identification of key features of high-quality feedback (Noroozi et al., 2022 ). Secondly, from a methodological perspective, both coding schemes feature a transparent scoring method, mitigating coder bias and bolstering the tool’s credibility.

To ensure the data’s validity and reliability for statistical analysis, two tests were implemented. Initially, the Levene test assessed group homogeneity, followed by the Kolmogorov-Smirnov test to evaluate data normality. The results confirmed both group homogeneity and data normality. For the first research question, gender was considered as a control variable, and the MANCOVA test was employed to compare the variations in feedback quality between peer feedback and ChatGPT-generated feedback. Addressing the second research question involved using Spearman’s correlation to examine the relationships among original argumentative essays, peer feedback, and ChatGPT-generated feedback.
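Of the tests listed, the Spearman correlation used for RQ2 is the simplest to make concrete: it is the Pearson correlation computed on the ranks of the data, which is why it captures monotonic rather than strictly linear association. A minimal pure-Python sketch, with average ranks for ties and synthetic data (not the study’s):

```python
def _rank(values):
    """Average ranks (1-based), assigning tied values their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation on the ranks."""
    rx, ry = _rank(x), _rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```

Any strictly increasing relationship, linear or not, yields rho = 1, which suits ordinal quality scores such as the essay and feedback ratings in this study.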

The results showed a significant difference in feedback quality between peer feedback and ChatGPT-generated feedback. Peers provided feedback of higher quality than ChatGPT. This difference was mainly due to the descriptive and problem-identification features of the feedback. ChatGPT tended to produce more extensive descriptive feedback, including summary statements such as a description of the essay or of the action taken, while students performed better at pinpointing and identifying issues in the feedback they provided (see Table 1).

A comprehensive list featuring selected examples of feedback generated by peers and ChatGPT is presented in Fig. 1. The figure additionally outlines examples of how the generated feedback was coded based on the coding scheme used to assess feedback quality.

Figure 1

A comparative list of selected examples of peer-generated and ChatGPT-generated feedback

Overall, the results indicated that there was no significant relationship between the quality of essay writing and the feedback generated by peers and ChatGPT. However, a positive correlation was observed between the quality of the essay and the affective feature of feedback generated by ChatGPT, while a negative relationship was observed between the quality of the essay and the affective feature of feedback generated by peers. This finding means that as the quality of the essay improves, ChatGPT tends to provide more affective feedback, while peers tend to provide less affective feedback (see Table  2 ).

This study was an initial effort to explore the potential of ChatGPT as a feedback source in the context of essay writing and to compare the extent to which the quality of feedback generated by ChatGPT differs from the feedback provided by peers. Below we discuss our findings for each research question.

Discussion on the results of RQ1

For the first research question, the results revealed a disparity in feedback quality when comparing peer-generated feedback to feedback generated by ChatGPT. Peer feedback demonstrated higher quality compared to ChatGPT-generated feedback. This discrepancy is attributed primarily to variations in the descriptive and problem-identification features of the feedback.

ChatGPT tended to provide more descriptive feedback, often including elements such as summarizing the content of the essay. This inclination towards descriptive feedback could be related to ChatGPT’s capacity to analyze and synthesize textual information effectively. Research on ChatGPT further supports this notion, demonstrating the AI tool’s capacity to offer a comprehensive overview of the provided content, therefore potentially providing insights and a holistic perspective on the content (Farrokhnia et al., 2023 ; Ray, 2023 ).

ChatGPT’s proficiency in providing extensive descriptive feedback could be seen as a strength. It might be particularly valuable for summarizing complex arguments or providing comprehensive overviews, which could aid students in understanding the overall structure and coherence of their essays.

In contrast, students’ feedback was of higher quality with respect to identifying specific issues and areas for improvement. Peers’ outperformance of ChatGPT in identifying problems within the essays could be related to humans’ cognitive skills, critical thinking abilities, and contextual understanding (e.g., Korteling et al., 2021; Lamb et al., 2019). This means that students, with their contextual knowledge and critical thinking skills, may be better equipped to identify issues within the essays that ChatGPT may overlook.

Furthermore, a detailed look at the findings of the first research question discloses that the feedback generated by ChatGPT comprehensively encompassed all essential components characterizing high-quality feedback, including affective, cognitive, and constructive dimensions (Kerman et al., 2022 ; Patchan et al., 2016 ). This comprehensive observation could be an indication of the fact that ChatGPT-generated feedback could potentially serve as a viable source of feedback. This observation is supported by previous studies where a positive role for AI-generated feedback and automated feedback in enhancing educational outcomes has been recognized (e.g., Bellhäuser et al., 2023 ; Gombert et al., 2024 ; Huang et al., 2023 ; Xia et al., 2022 ).

Finally, an overarching look at the results of the first research question suggests a potential complementary role for ChatGPT and students in the feedback process. This means that using these two feedback sources together creates a synergistic relationship that could result in better feedback outcomes.

Discussion on the results of RQ2

Results for the second research question revealed no significant correlation between the quality of the essays and the quality of the feedback generated by either peers or ChatGPT. These findings carry a consequential implication, suggesting that the inherent quality of the essays under scrutiny exerts negligible influence on the quality of feedback furnished by both students and ChatGPT.

In essence, these results point to a notable degree of independence between the writing prowess exhibited in the essays and the efficacy of the feedback received from either source. This disassociation implies that the ability to produce high-quality essays does not inherently translate into a corresponding ability to provide equally insightful feedback, neither for peers nor for ChatGPT. This decoupling of essay quality from feedback quality highlights the multifaceted nature of these evaluative processes, where proficiency in constructing a coherent essay does not necessarily guarantee an equally adept capacity for evaluating and articulating constructive commentary on peers’ work.

The implications of these findings are both intriguing and defy conventional expectations, as they deviate somewhat from the prevailing literature’s stance. The existing body of scholarly work generally posits a direct relationship between the quality of an essay and the subsequent quality of generated feedback (Noroozi et al., 2016, 2022; Kerman et al., 2022; Valero Haro et al., 2023). This line of thought contends that essays of inferior quality might serve as a catalyst for more pronounced error detection among students, encompassing grammatical intricacies, depth of content, clarity, and coherence, as well as the application of evidence and support. Conversely, when essays are skillfully crafted, the act of pinpointing areas for enhancement becomes a more complex task, potentially necessitating a heightened level of subject comprehension and nuanced evaluation.

However, the present study’s findings challenge this conventional wisdom. The observed decoupling of essay quality from feedback quality suggests a more nuanced interplay between the two facets of assessment. Rather than adhering to the anticipated pattern, wherein weaker essays prompt clearer identification of deficiencies, and superior essays potentially render the feedback process more challenging, the study suggests that the process might be more complex than previously thought. It hints at a dynamic in which the act of evaluating essays and providing constructive feedback transcends a simple linear connection with essay quality.

These findings, while potentially unexpected, point to the complex nature of essay assignments and feedback provision. They highlight the complexity of the cognitive processes that underlie both tasks and suggest that the relationship between essay quality and feedback quality is not purely linear but is influenced by a multitude of factors, including the evaluator’s cognitive framework, familiarity with the subject matter, and critical analysis skills.

Despite this general observation, a closer examination of the affective features within the feedback reveals a different pattern. The positive correlation between essay quality and the affective features present in ChatGPT-generated feedback could be related to ChatGPT’s capacity to recognize and appreciate students’ good work. As the quality of the essay increases, ChatGPT might be programmed to offer more positive and motivational feedback to acknowledge students’ progress (e.g., Farrokhnia et al., 2023 ; Ray, 2023 ). In contrast, the negative relationship between essay quality and the affective features in peer feedback may be attributed to the evolving nature of feedback from peers (e.g., Patchan et al., 2016 ). This suggests that as students witness improvements in their peers’ essay-writing skills and knowledge, their feedback priorities may naturally evolve. For instance, students may transition from emphasizing emotional and affective comments to focusing on cognitive and constructive feedback, with the goal of further enhancing the overall quality of the essays.

Limitations and implications for future research and practice

We acknowledge the limitations of this study. Primarily, the data underpinning this investigation was drawn exclusively from a singular institution and a solitary course, featuring a relatively modest participant pool. This confined scope inevitably introduces certain constraints that need to be taken into consideration when interpreting the study’s outcomes and generalizing them to broader educational contexts. Under this constrained sampling, the findings might exhibit a degree of contextual specificity, potentially limiting their applicability to diverse institutional settings and courses with distinct curricular foci. The diverse array of academic environments, student demographics, and subject matter variations existing across educational institutions could potentially yield divergent patterns of results. Therefore, while the current study’s outcomes provide insights within the confines of the studied institution and course, they should be interpreted and generalized with prudence. Recognizing these limitations, for future studies, we recommend considering a large-scale participant pool with a diverse range of variables, including individuals from various programs and demographics. This approach would enrich the depth and breadth of understanding in this domain, fostering a more comprehensive comprehension of the complex dynamics at play.

In addition, this study omitted an exploration into the degree to which students utilize feedback provided by peers and ChatGPT. That is to say that we did not investigate the effects of such feedback on essay enhancements in the revision phase. This omission inherently introduces a dimension of uncertainty and places a constraint on the study’s holistic understanding of the feedback loop. By not addressing these aspects, the study’s insights are somewhat partial, limiting the comprehensive grasp of the potential influences that these varied feedback sources wield on students’ writing enhancement processes. An analysis of the feedback assimilation patterns and their subsequent effects on essay refinement would have unveiled insights into the practical utility and impact of the feedback generated by peers and ChatGPT.

To address this limitation, future investigations could be structured to encompass a more thorough examination of students’ feedback utilization strategies and the resulting implications for the essay revision process. By shedding light on the complex interconnection between feedback reception, its integration into the revision process, and the ultimate outcomes in terms of essay improvement, a more comprehensive understanding of the dynamics involved could be attained.

Furthermore, in this study, we employed identical question prompts for both peers and ChatGPT. However, there is evidence indicating that ChatGPT is sensitive to how prompts are presented to it (e.g., Cao et al., 2023 ; White et al., 2023 ; Zuccon & Koopman, 2023 ). This suggests that variations in the wording, structure, or context of prompts might influence the responses generated by ChatGPT, potentially impacting the comparability of its outputs with those of peers. Therefore, it is essential to carefully consider and control for prompt-related factors in future research when assessing ChatGPT’s performance and capabilities in various tasks and contexts.

In addition, we acknowledge that ChatGPT can potentially generate inaccurate results. Nevertheless, in the context of this study, our examination of the results generated by ChatGPT did not reveal significant inaccuracies that would warrant inclusion in our findings.

From a methodological perspective, we reported the interrater reliability between the coders to be 75%. While this level of agreement was statistically significant, signifying the reliability of our coders’ analyses, it did not reach the desired level of precision. We acknowledge this as a limitation of the study and suggest enhancing interrater reliability through additional coder training.

In addition, it is worth noting that the advancement of Generative AI like ChatGPT, opens new avenues in educational feedback mechanisms. Beyond just generating feedback, these AI models have the potential to redefine how feedback is presented and assimilated. In the realm of research on adaptive learning systems, the findings of this study also echo the importance of adaptive learning support empowered by AI and ChatGPT (Rummel et al., 2016 ). It can pave the way for tailored educational experiences that respond dynamically to individual student needs. This is not just about the feedback’s content but its delivery, timing, and adaptability. Further exploratory data analyses, such as sequential analysis and data mining, may offer insights into the nuanced ways different adaptive learning supports can foster student discussions (Papamitsiou & Economides, 2014 ). This involves dissecting the feedback dynamics, understanding how varied feedback types stimulate discourse, and identifying patterns that lead to enhanced student engagement.

Ensuring the reliability and validity of AI-empowered feedback is also crucial. The goal is to ascertain that technology-empowered learning support genuinely enhances students’ learning process in a consistent and unbiased manner. Given ChatGPT’s complex nature of generating varied responses based on myriad prompts, the call for enhancing methodological rigor through future validation studies becomes both timely and essential. For example, in-depth prompt validation and blind feedback assessment studies could be employed to meticulously probe the consistency and quality of ChatGPT’s responses. Also, comparative analysis with different AI models can be useful.

From an educational standpoint, our research findings advocate for integrating ChatGPT as a feedback resource alongside peer feedback within higher education environments for essay writing tasks, given the potential complementary roles of peer-generated and ChatGPT-generated feedback. This approach holds the potential to alleviate the workload burden on teachers, particularly in the context of online courses with a significant number of students.

This study contributes to the young but rapidly growing literature in two distinct ways. From a research perspective, this study addresses a significant void in the current literature by responding to the lack of research on AI-generated feedback for complex tasks like essay writing in higher education. The research bridges this gap by analyzing the effectiveness of ChatGPT-generated feedback compared to peer-generated feedback, thereby establishing a foundation for further exploration in this field. From a practical perspective of higher education, the study’s findings offer insights into the potential integration of ChatGPT as a feedback source within higher education contexts. The discovery that ChatGPT’s feedback quality could potentially complement peer feedback highlights its applicability for enhancing feedback practices in higher education. This holds particular promise for courses with substantial enrolments and essay-writing components, providing teachers with a feasible alternative for delivering constructive feedback to a larger number of students.

Data availability

The data is available upon a reasonable request.

Alqassab, M., Strijbos, J. W., & Ufer, S. (2018). Training peer-feedback skills on geometric construction tasks: Role of domain knowledge and peer-feedback levels. European Journal of Psychology of Education , 33 (1), 11–30. https://doi.org/10.1007/s10212-017-0342-0 .

Amiryousefi, M., & Geld, R. (2021). The role of redressing teachers’ instructional feedback interventions in EFL learners’ motivation and achievement in distance education. Innovation in Language Learning and Teaching , 15 (1), 13–25. https://doi.org/10.1080/17501229.2019.1654482 .

Arguedas, M., Daradoumis, A., & Xhafa Xhafa, F. (2016). Analyzing how emotion awareness influences students’ motivation, engagement, self-regulation and learning outcome. Educational Technology and Society , 19 (2), 87–103. https://www.jstor.org/stable/jeductechsoci.19.2.87 .

Banihashem, S. K., Noroozi, O., van Ginkel, S., Macfadyen, L. P., & Biemans, H. J. (2022). A systematic review of the role of learning analytics in enhancing feedback practices in higher education. Educational Research Review , 100489. https://doi.org/10.1016/j.edurev.2022.100489 .

Banihashem, S. K., Dehghanzadeh, H., Clark, D., Noroozi, O., & Biemans, H. J. (2023). Learning analytics for online game-based learning: A systematic literature review. Behaviour & Information Technology , 1–28. https://doi.org/10.1080/0144929X.2023.2255301 .

Bellhäuser, H., Dignath, C., & Theobald, M. (2023). Daily automated feedback enhances self-regulated learning: A longitudinal randomized field experiment. Frontiers in Psychology , 14 , 1125873. https://doi.org/10.3389/fpsyg.2023.1125873 .

Bond, M., Khosravi, H., De Laat, M., Bergdahl, N., Negrea, V., Oxley, E., & Siemens, G. (2024). A meta systematic review of artificial intelligence in higher education: A call for increased ethics, collaboration, and rigour. International Journal of Educational Technology in Higher Education , 21 (4), 1–41. https://doi.org/10.1186/s41239-023-00436-z .

Bulqiyah, S., Mahbub, M., & Nugraheni, D. A. (2021). Investigating writing difficulties in Essay writing: Tertiary Students’ perspectives. English Language Teaching Educational Journal , 4 (1), 61–73. https://doi.org/10.12928/eltej.v4i1.2371 .

Callender, A. A., Franco-Watkins, A. M., & Roberts, A. S. (2016). Improving metacognition in the classroom through instruction, training, and feedback. Metacognition and Learning , 11 (2), 215–235. https://doi.org/10.1007/s11409-015-9142-6 .

Cao, J., Li, M., Wen, M., & Cheung, S. C. (2023). A study on prompt design, advantages and limitations of ChatGPT for deep learning program repair. arXiv preprint arXiv:2304.08191. https://doi.org/10.48550/arXiv.2304.08191 .

Dai, W., Lin, J., Jin, F., Li, T., Tsai, Y. S., Gasevic, D., & Chen, G. (2023). Can large language models provide feedback to students? A case study on ChatGPT. https://doi.org/10.35542/osf.io/hcgzj .

Darvishi, A., Khosravi, H., Sadiq, S., Gašević, D., & Siemens, G. (2024). Impact of AI assistance on student agency. Computers & Education , 210 , 104967. https://doi.org/10.1016/j.compedu.2023.104967 .

Deeva, G., Bogdanova, D., Serral, E., Snoeck, M., & De Weerdt, J. (2021). A review of automated feedback systems for learners: Classification framework, challenges and opportunities. Computers & Education , 162 , 104094. https://doi.org/10.1016/j.compedu.2020.104094 .

Diezmann, C. M., & Watters, J. J. (2015). The knowledge base of subject matter experts in teaching: A case study of a professional scientist as a beginning teacher. International Journal of Science and Mathematics Education , 13 , 1517–1537. https://doi.org/10.1007/s10763-014-9561-x .

Drachsler, H. (2023). Towards highly informative learning analytics . Open Universiteit. https://doi.org/10.25656/01:26787 .

Drachsler, H., & Kalz, M. (2016). The MOOC and learning analytics innovation cycle (MOLAC): A reflective summary of ongoing research and its challenges. Journal of Computer Assisted Learning , 32 (3), 281–290. https://doi.org/10.1111/jcal.12135 .

Er, E., Dimitriadis, Y., & Gašević, D. (2021). Collaborative peer feedback and learning analytics: Theory-oriented design for supporting class-wide interventions. Assessment & Evaluation in Higher Education , 46 (2), 169–190. https://doi.org/10.1080/02602938.2020.1764490 .

Farrokhnia, M., Banihashem, S. K., Noroozi, O., & Wals, A. (2023). A SWOT analysis of ChatGPT: Implications for educational practice and research. Innovations in Education and Teaching International , 1–15. https://doi.org/10.1080/14703297.2023.2195846 .

Gan, Z., An, Z., & Liu, F. (2021). Teacher feedback practices, student feedback motivation, and feedback behavior: How are they associated with learning outcomes? Frontiers in Psychology , 12 , 697045. https://doi.org/10.3389/fpsyg.2021.697045 .

Gao, X., Noroozi, O., Gulikers, J. T. M., Biemans, H. J., & Banihashem, S. K. (2024). A systematic review of the key components of online peer feedback practices in higher education. Educational Research Review , 100588. https://doi.org/10.1016/j.edurev.2023.100588 .

Gielen, M., & De Wever, B. (2015). Scripting the role of assessor and assessee in peer assessment in a wiki environment: Impact on peer feedback quality and product improvement. Computers & Education , 88 , 370–386. https://doi.org/10.1016/j.compedu.2015.07.012 .

Gombert, S., Fink, A., Giorgashvili, T., Jivet, I., Di Mitri, D., Yau, J., & Drachsler, H. (2024). From the Automated Assessment of Student Essay Content to highly informative feedback: A case study. International Journal of Artificial Intelligence in Education , 1–39. https://doi.org/10.1007/s40593-023-00387-6 .

Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research , 77 (1), 81–112. https://doi.org/10.3102/003465430298487 .

Holt-Reynolds, D. (1999). Good readers, good teachers? Subject matter expertise as a challenge in learning to teach. Harvard Educational Review , 69 (1), 29–51. https://doi.org/10.17763/haer.69.1.pl5m5083286l77t2 .

Huang, A. Y., Lu, O. H., & Yang, S. J. (2023). Effects of artificial intelligence–enabled personalized recommendations on learners’ learning engagement, motivation, and outcomes in a flipped classroom. Computers & Education , 194 , 104684. https://doi.org/10.1016/j.compedu.2022.104684 .

Katz, A., Wei, S., Nanda, G., Brinton, C., & Ohland, M. (2023). Exploring the efficacy of ChatGPT in analyzing student teamwork feedback with an existing taxonomy. arXiv preprint arXiv:2305.11882. https://doi.org/10.48550/arXiv.2305.11882 .

Kerman, N. T., Noroozi, O., Banihashem, S. K., Karami, M., & Biemans, H. J. (2022). Online peer feedback patterns of success and failure in argumentative essay writing. Interactive Learning Environments , 1–13. https://doi.org/10.1080/10494820.2022.2093914 .

Kerman, N. T., Banihashem, S. K., Karami, M., Er, E., Van Ginkel, S., & Noroozi, O. (2024). Online peer feedback in higher education: A synthesis of the literature. Education and Information Technologies , 29 (1), 763–813. https://doi.org/10.1007/s10639-023-12273-8 .

King, A. (2002). Structuring peer interaction to promote high-level cognitive processing. Theory into Practice , 41 (1), 33–39. https://doi.org/10.1207/s15430421tip4101_6 .

Konold, K. E., Miller, S. P., & Konold, K. B. (2004). Using teacher feedback to enhance student learning. Teaching Exceptional Children , 36 (6), 64–69. https://doi.org/10.1177/004005990403600608 .

Korteling, J. H., van de Boer-Visschedijk, G. C., Blankendaal, R. A., Boonekamp, R. C., & Eikelboom, A. R. (2021). Human-versus artificial intelligence. Frontiers in Artificial Intelligence , 4 , 622364. https://doi.org/10.3389/frai.2021.622364 .

Labuhn, A. S., Zimmerman, B. J., & Hasselhorn, M. (2010). Enhancing students’ self-regulation and mathematics performance: The influence of feedback and self-evaluative standards. Metacognition and Learning , 5 , 173–194. https://doi.org/10.1007/s11409-010-9056-2 .

Lamb, R., Firestone, J., Schmitter-Edgecombe, M., & Hand, B. (2019). A computational model of student cognitive processes while solving a critical thinking problem in science. The Journal of Educational Research , 112 (2), 243–254. https://doi.org/10.1080/00220671.2018.1514357 .

Latifi, S., Noroozi, O., & Talaee, E. (2023). Worked example or scripting? Fostering students’ online argumentative peer feedback, essay writing and learning. Interactive Learning Environments , 31 (2), 655–669. https://doi.org/10.1080/10494820.2020.1799032 .

Li, L., & Liu, X. (2010). Steckelberg. Assessor or assessee: How student learning improves by giving and receiving peer feedback. British Journal of Educational Technology , 41 (3), 525–536. https://doi.org/10.1111/j.1467-8535.2009.00968.x .

Liu, N. F., & Carless, D. (2006). Peer feedback: The learning element of peer assessment. Teaching in Higher Education , 11 (3), 279–290. https://doi.org/10.1080/13562510600680582 .

Liunokas, Y. (2020). Assessing students’ ability in writing argumentative essay at an Indonesian senior high school. IDEAS: Journal on English language teaching and learning. Linguistics and Literature , 8 (1), 184–196. https://doi.org/10.24256/ideas.v8i1.1344 .

Nelson, M. M., & Schunn, C. D. (2009). The nature of feedback: How different types of peer feedback affect writing performance. Instructional Science , 37 , 375–401. https://doi.org/10.1007/s11251-008-9053-x .

Noroozi, O., Banihashem, S. K., Taghizadeh Kerman, N., Parvaneh Akhteh Khaneh, M., Babayi, M., Ashrafi, H., & Biemans, H. J. (2022). Gender differences in students’ argumentative essay writing, peer review performance and uptake in online learning environments. Interactive Learning Environments , 1–15. https://doi.org/10.1080/10494820.2022.2034887 .

Noroozi, O., Biemans, H., & Mulder, M. (2016). Relations between scripted online peer feedback processes and quality of written argumentative essay. The Internet and Higher Education , 31, 20-31. https://doi.org/10.1016/j.iheduc.2016.05.002

Noroozi, O., Banihashem, S. K., Biemans, H. J., Smits, M., Vervoort, M. T., & Verbaan, C. L. (2023). Design, implementation, and evaluation of an online supported peer feedback module to enhance students’ argumentative essay quality. Education and Information Technologies , 1–28. https://doi.org/10.1007/s10639-023-11683-y .

Papamitsiou, Z., & Economides, A. A. (2014). Learning analytics and educational data mining in practice: A systematic literature review of empirical evidence. Journal of Educational Technology & Society , 17 (4), 49–64. https://doi.org/10.2307/jeductechsoci.17.4.49 . https://www.jstor.org/stable/ .

Pardo, A., Jovanovic, J., Dawson, S., Gašević, D., & Mirriahi, N. (2019). Using learning analytics to scale the provision of personalised feedback. British Journal of Educational Technology , 50 (1), 128–138. https://doi.org/10.1111/bjet.12592 .

Patchan, M. M., Schunn, C. D., & Correnti, R. J. (2016). The nature of feedback: How peer feedback features affect students’ implementation rate and quality of revisions. Journal of Educational Psychology , 108 (8), 1098. https://doi.org/10.1037/edu0000103 .

Ramsden, P. (2003). Learning to teach in higher education . Routledge.

Ray, P. P. (2023). ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet of Things and Cyber-Physical Systems , 3 , 121–154. https://doi.org/10.1016/j.iotcps.2023.04.003 .

Rüdian, S., Heuts, A., & Pinkwart, N. (2020). Educational Text Summarizer: Which sentences are worth asking for? In DELFI 2020 - The 18th Conference on Educational Technologies of the German Informatics Society (pp. 277–288). Bonn, Germany.

Rummel, N., Walker, E., & Aleven, V. (2016). Different futures of adaptive collaborative learning support. International Journal of Artificial Intelligence in Education , 26 , 784–795. https://doi.org/10.1007/s40593-016-0102-3 .

Shi, M. (2019). The effects of class size and instructional technology on student learning performance. The International Journal of Management Education , 17 (1), 130–138. https://doi.org/10.1016/j.ijme.2019.01.004 .

Article   MathSciNet   Google Scholar  

Toulmin, S. (1958). The uses of argument . Cambridge University Press.

Valero Haro, A., Noroozi, O., Biemans, H. J., Mulder, M., & Banihashem, S. K. (2023). How does the type of online peer feedback influence feedback quality, argumentative essay writing quality, and domain-specific learning? Interactive Learning Environments , 1–20. https://doi.org/10.1080/10494820.2023.2215822 .

White, J., Fu, Q., Hays, S., Sandborn, M., Olea, C., Gilbert, H., & Schmidt, D. C. (2023). A prompt pattern catalog to enhance prompt engineering with chatgpt. arXiv preprint arXiv:2302.11382 . https://doi.org/10.48550/arXiv.2302.11382 .

Wu, Y., & Schunn, C. D. (2020). From feedback to revisions: Effects of feedback features and perceptions. Contemporary Educational Psychology , 60 , 101826. https://doi.org/10.1016/j.cedpsych.2019.101826 .

Xia, Q., Chiu, T. K., Zhou, X., Chai, C. S., & Cheng, M. (2022). Systematic literature review on opportunities, challenges, and future research recommendations of artificial intelligence in education. Computers and Education: Artificial Intelligence , 100118. https://doi.org/10.1016/j.caeai.2022.100118 .

Zawacki-Richter, O., Marín, V. I., Bond, M., & Gouverneur, F. (2019). Systematic review of research on artificial intelligence applications in higher education–where are the educators? International Journal of Educational Technology in Higher Education , 16 (1), 1–27. https://doi.org/10.1186/s41239-019-0171-0 .

Zhang, Z. V., & Hyland, K. (2022). Fostering student engagement with feedback: An integrated approach. Assessing Writing , 51 , 100586. https://doi.org/10.1016/j.asw.2021.100586 .

Zuccon, G., & Koopman, B. (2023). Dr ChatGPT, tell me what I want to hear: How prompt knowledge impacts health answer correctness. arXiv preprint arXiv:2302 .13793. https://doi.org/10.48550/arXiv.2302.13793 .

Funding

No funding was received for this research.

Author information

Authors and Affiliations

Open Universiteit, Heerlen, The Netherlands

Seyyed Kazem Banihashem & Hendrik Drachsler

Wageningen University and Research, Wageningen, The Netherlands

Seyyed Kazem Banihashem & Omid Noroozi

Ferdowsi University of Mashhad, Mashhad, Iran

Nafiseh Taghizadeh Kerman

The University of Alabama, Tuscaloosa, USA

Jewoong Moon

DIPF Leibniz Institute, Goethe University, Frankfurt, Germany

Hendrik Drachsler

Contributions

S. K. Banihashem led this research experiment. N. T. Kerman contributed to the data analysis and writing. O. Noroozi contributed to designing, writing, and reviewing the manuscript. J. Moon and H. Drachsler contributed to writing and revising the manuscript.

Corresponding author

Correspondence to Seyyed Kazem Banihashem .

Ethics declarations

Declaration of AI-assisted technologies in the writing process

The authors used generative AI for language editing and took full responsibility.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article

Banihashem, S. K., Kerman, N. T., Noroozi, O., et al. (2024). Feedback sources in essay writing: Peer-generated or AI-generated feedback? International Journal of Educational Technology in Higher Education, 21, 23. https://doi.org/10.1186/s41239-024-00455-4

Received : 20 November 2023

Accepted : 18 March 2024

Published : 12 April 2024

DOI : https://doi.org/10.1186/s41239-024-00455-4

Keywords

  • AI-generated feedback
  • Essay writing
  • Feedback sources
  • Higher education
  • Peer feedback
