The brief history of artificial intelligence: the world has changed fast — what might be next?

Despite their brief history, computers and AI have fundamentally changed what we see, what we know, and what we do. Little is as important for the world’s future and our own lives as how this history continues.

To see what the future might look like, it is often helpful to study our history. This is what I will do in this article. I retrace the brief history of computers and artificial intelligence to see what we can expect for the future.

How did we get here?

How rapidly the world has changed becomes clear by how even quite recent computer technology feels ancient today. Mobile phones in the ‘90s were big bricks with tiny green displays. Two decades before that, the main storage for computers was punch cards.

In a short period, computers evolved so quickly and became such an integral part of our daily lives that it is easy to forget how recent this technology is. The first digital computers were only invented about eight decades ago, as the timeline shows.

Since the early days of this history, some computer scientists have strived to make machines as intelligent as humans. The next timeline shows some of the notable artificial intelligence (AI) systems and describes what they were capable of.

The first system I mention is the Theseus. It was built by Claude Shannon in 1950 and was a remote-controlled mouse that was able to find its way out of a labyrinth and could remember its course. 1 In seven decades, the abilities of artificial intelligence have come a long way.

The language and image recognition capabilities of AI systems have developed very rapidly

The chart shows how we got here by zooming into the last two decades of AI development. The plotted data stems from a number of tests in which human and AI performance were evaluated in different domains, from handwriting recognition to language understanding.

Within each domain, the initial performance of the AI system is set to –100, and human performance in these tests is used as a baseline set to zero. This means that when a model’s performance crosses the zero line, the AI system scored more points in the relevant test than the humans who did the same test. 2
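To make this rescaling concrete, here is a minimal sketch of the transformation, using made-up numbers rather than the exact procedure behind the chart:

```python
def rescale_performance(raw_score, human_baseline, initial_score):
    """Map a raw benchmark score onto the chart's scale:
    the system's initial score maps to -100, the human baseline to 0,
    and anything above 0 means the AI outperformed the human testers."""
    return 100 * (raw_score - human_baseline) / (human_baseline - initial_score)

# Hypothetical numbers for one benchmark: humans score 90, the first AI system scored 40.
print(rescale_performance(40, 90, 40))  # -100.0 (starting point)
print(rescale_performance(90, 90, 40))  #    0.0 (human level)
print(rescale_performance(95, 90, 40))  #   10.0 (better than the human baseline)
```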

Just 10 years ago, no machine could reliably provide language or image recognition at a human level. But, as the chart shows, AI systems have become steadily more capable and are now beating humans in tests in all these domains. 3

Outside of these standardized tests, the performance of these AIs is mixed. In some real-world cases, these systems are still performing much worse than humans. On the other hand, some implementations of such AI systems are already so cheap that they are available on the phone in your pocket: image recognition categorizes your photos and speech recognition transcribes what you dictate.

From image recognition to image generation

The previous chart showed the rapid advances in the perceptive abilities of artificial intelligence. AI systems have also become much more capable of generating images.

This series of nine images shows the development over the last nine years. None of the people in these images exist; all were generated by an AI system.

The series begins with an image from 2014 in the top left, a primitive image of a pixelated face in black and white. As the first image in the second row shows, just three years later, AI systems were already able to generate images that were hard to differentiate from a photograph.

In recent years, the capability of AI systems has become much more impressive still. While the early systems focused on generating images of faces, these newer models broadened their capabilities to text-to-image generation based on almost any prompt. The image in the bottom right shows that even the most challenging prompts — such as “A Pomeranian is sitting on the King’s throne wearing a crown. Two tiger soldiers are standing next to the throne” — are turned into photorealistic images within seconds. 5

Timeline of images generated by artificial intelligence 4

Language recognition and production is developing fast

Just as striking as the advances of image-generating AIs is the rapid development of systems that parse and respond to human language.

The image shows examples of an AI system developed by Google called PaLM. In these six examples, the system was asked to explain six different jokes. I find the explanation in the bottom right particularly remarkable: the AI explains an anti-joke specifically meant to confuse the listener.

AIs that produce language have entered our world in many ways over the last few years. Emails get auto-completed, massive amounts of online texts get translated, videos get automatically transcribed, school children use language models to do their homework, reports get auto-generated, and media outlets publish AI-generated journalism.

AI systems are not yet able to produce long, coherent texts. In the future, we will see whether the recent developments will slow down — or even end — or whether we will one day read a bestselling novel written by an AI.

Output of the AI system PaLM after being asked to interpret six different jokes 6

Where we are now: AI is here

These rapid advances in AI capabilities have made it possible to use machines in a wide range of new domains:

When you book a flight, it is often an artificial intelligence, no longer a human, that decides what you pay. When you get to the airport, it is an AI system that monitors what you do there. And once you are on the plane, an AI system assists the pilot in flying you to your destination.

AI systems also increasingly determine whether you get a loan, are eligible for welfare, or get hired for a particular job. Increasingly, they help determine who is released from jail.

Several governments have purchased autonomous weapons systems for warfare, and some use AI systems for surveillance and oppression.

AI systems help to program the software you use and translate the texts you read. Virtual assistants, operated by speech recognition, have entered many households over the last decade. Now self-driving cars are becoming a reality.

In the last few years, AI systems have helped to make progress on some of the hardest problems in science.

Large AIs called recommender systems determine what you see on social media, which products are shown to you in online shops, and what gets recommended to you on YouTube. Increasingly, they are not just recommending the media we consume; based on their capacity to generate images and texts, they are also creating the media we consume.

Artificial intelligence is no longer a technology of the future; AI is here, and much of what is reality now would have looked like sci-fi just recently. It is a technology that already impacts all of us, and the list above includes just a few of its many applications.

The wide range of listed applications makes clear that this is a very general technology that can be used by people for some extremely good goals — and some extraordinarily bad ones, too. For such “dual-use technologies”, it is important that all of us develop an understanding of what is happening and how we want the technology to be used.

Just two decades ago, the world was very different. What might AI technology be capable of in the future?

What is next?

The AI systems that we just considered are the result of decades of steady advances in AI technology.

The big chart below brings this history over the last eight decades into perspective. It is based on the dataset produced by Jaime Sevilla and colleagues. 7

The rise of artificial intelligence over the last 8 decades: As training computation has increased, AI systems have become more powerful 8

Each small circle in this chart represents one AI system. The circle’s position on the horizontal axis indicates when the AI system was built, and its position on the vertical axis shows the amount of computation used to train the particular AI system.

Training computation is measured in floating point operations, or FLOP for short. One FLOP is equivalent to one addition, subtraction, multiplication, or division of two decimal numbers.

All AI systems that rely on machine learning need to be trained, and in these systems, training computation is one of the three fundamental factors that are driving the capabilities of the system. The other two factors are the algorithms and the input data used for the training. The visualization shows that as training computation has increased, AI systems have become more and more powerful.

The timeline goes back to the 1940s when electronic computers were first invented. The first shown AI system is ‘Theseus’, Claude Shannon’s robotic mouse from 1950 that I mentioned at the beginning. Towards the other end of the timeline, you find AI systems like DALL-E and PaLM; we just discussed their abilities to produce photorealistic images and interpret and generate language. They are among the AI systems that used the largest amount of training computation to date.

The training computation is plotted on a logarithmic scale so that from each grid line to the next, it shows a 100-fold increase. This long-run perspective shows a continuous increase. For the first six decades, training computation increased in line with Moore’s Law, doubling roughly every 20 months. Since about 2010, this exponential growth has sped up further, to a doubling time of just about 6 months. That is an astonishingly fast rate of growth. 9

These fast doubling times have compounded into large increases. PaLM’s training computation was 2.5 billion petaFLOP, more than 5 million times larger than that of AlexNet, the AI with the largest training computation just 10 years earlier. 10
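As a back-of-the-envelope check of these figures, the short sketch below compounds the two doubling times quoted above and reproduces the PaLM-to-AlexNet ratio from the footnote (illustrative arithmetic only):

```python
def growth_factor(months, doubling_time_months):
    """How much a quantity grows if it doubles every `doubling_time_months`."""
    return 2 ** (months / doubling_time_months)

decade = 120  # months
print(f"Moore's Law pace (doubling every ~20 months): {growth_factor(decade, 20):,.0f}x per decade")
print(f"Post-2010 pace (doubling every ~6 months): {growth_factor(decade, 6):,.0f}x per decade")

# PaLM vs. AlexNet, ten years apart (values from the footnote, in petaFLOP):
palm, alexnet = 2_500_000_000, 470
print(f"PaLM used roughly {palm / alexnet:,.0f} times as much training compute as AlexNet")
```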

Scale-up was already exponential and has sped up substantially over the past decade. What can we learn from this historical development for the future of AI?

Studying the long-run trends to predict the future of AI

AI researchers study these long-term trends to see what is possible in the future. 11

Perhaps the most widely discussed study of this kind was published by AI researcher Ajeya Cotra. She studied the increase in training computation to ask at what point the computation to train an AI system could match that of the human brain. The idea is that, at this point, the AI system would match the capabilities of a human brain. In her latest update, Cotra estimated a 50% probability that such “transformative AI” will be developed by the year 2040, less than two decades from now. 12

In a related article, I discuss what transformative AI would mean for the world. In short, the idea is that such an AI system would be powerful enough to bring the world into a ‘qualitatively different future’. It could lead to a change at the scale of the two earlier major transformations in human history, the agricultural and industrial revolutions. It would certainly represent the most important global change in our lifetimes.

Cotra’s work is particularly relevant in this context as she based her forecast on the kind of historical long-run trend of training computation that we just studied. But it is worth noting that other forecasters who rely on different considerations arrive at broadly similar conclusions. As I show in my article on AI timelines, many AI experts believe that there is a real chance that human-level artificial intelligence will be developed within the next decades, and some believe that it will exist much sooner.

Building a public resource to enable the necessary public conversation

Computers and artificial intelligence have changed our world immensely, but we are still in the early stages of this history. Because this technology feels so familiar, it is easy to forget that all of these technologies we interact with are very recent innovations and that the most profound changes are yet to come.

Artificial intelligence has already changed what we see, what we know, and what we do. This is despite the fact that this technology has had only a brief history.

There are no signs that these trends are hitting any limits anytime soon. On the contrary, particularly over the course of the last decade, the fundamental trends have accelerated: investments in AI technology have rapidly increased, and the doubling time of training computation has shortened to just six months.

All major technological innovations lead to a range of positive and negative consequences. This is already true of artificial intelligence. As this technology becomes more and more powerful, we should expect its impact to grow further.

Because of the importance of AI, we should all be able to form an opinion on where this technology is heading and understand how this development is changing our world. For this purpose, we are building a repository of AI-related metrics, which you can find on OurWorldinData.org/artificial-intelligence.

We are still in the early stages of this history, and much of what will become possible is yet to come. A technological development as powerful as this should be at the center of our attention. Little might be as important for how the future of our world — and the future of our lives — will play out.

Acknowledgments: I would like to thank my colleagues Natasha Ahuja, Daniel Bachler, Julia Broden, Charlie Giattino, Bastian Herre, Edouard Mathieu, and Ike Saunders for their helpful comments on drafts of this essay and their contributions in preparing the visualizations.

On Theseus, see Daniel Klein (2019) — Mighty mouse, published in MIT Technology Review, and this video on YouTube of a presentation by its inventor, Claude Shannon.

The chart shows that the speed at which these AI technologies developed increased over time. Systems for which development was started early — handwriting and speech recognition — took more than a decade to approach human-level performance, while more recent AI developments led to systems that overtook humans in only a few years. However, one should not overstate this point. To some extent, this is dependent on when the researchers started to compare machine and human performance. One could have started evaluating the system for language understanding much earlier, and its development would appear much slower in this presentation of the data.

It is important to remember that while these are remarkable achievements — and show very rapid gains — these are the results from specific benchmarking tests. Outside of tests, AI models can fail in surprising ways and do not reliably achieve performance that is comparable with human capabilities.

The relevant publications are the following:

2014: Goodfellow et al.: Generative Adversarial Networks

2015: Radford, Metz, and Chintala: Unsupervised Representation Learning with Deep Convolutional Generative Adversarial Networks

2016: Liu and Tuzel: Coupled Generative Adversarial Networks

2017: Karras et al.: Progressive Growing of GANs for Improved Quality, Stability, and Variation

2018: Karras, Laine, and Aila: A Style-Based Generator Architecture for Generative Adversarial Networks (StyleGAN from NVIDIA)

2019: Karras et al.: Analyzing and Improving the Image Quality of StyleGAN

AI-generated faces generated by this technology can be found on thispersondoesnotexist.com .

2020: Ho, Jain, and Abbeel: Denoising Diffusion Probabilistic Models

2021: Ramesh et al: Zero-Shot Text-to-Image Generation (first DALL-E from OpenAI; blog post ). See also Ramesh et al. (2022) — Hierarchical Text-Conditional Image Generation with CLIP Latents (DALL-E 2 from OpenAI; blog post ).

2022: Saharia et al: Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding (Google’s Imagen; blog post )

Because these systems have become so powerful, the latest AI systems often don’t allow the user to generate images of human faces to prevent abuse.

From Chowdhery et al. (2022) —  PaLM: Scaling Language Modeling with Pathways . Published on arXiv on 7 Apr 2022.

See the footnote on the chart's title for the references and additional information.

The data is taken from Jaime Sevilla, Lennart Heim, Anson Ho, Tamay Besiroglu, Marius Hobbhahn, and Pablo Villalobos (2022) — Compute Trends Across Three Eras of Machine Learning. Published on arXiv on March 9, 2022. See also their post on the Alignment Forum.

The authors regularly update and extend their dataset, a helpful service to the AI research community. At Our World in Data, my colleague Charlie Giattino regularly updates the interactive version of this chart with the latest data made available by Sevilla and coauthors.

See also these two related charts:

Number of parameters in notable artificial intelligence systems

Number of datapoints used to train notable artificial intelligence systems

At some point in the future, training computation is expected to slow down to the exponential growth rate of Moore's Law. Tamay Besiroglu, Lennart Heim, and Jaime Sevilla of the Epoch team estimate in their report that the highest probability for this reversion occurring is in the early 2030s.

The training computation of PaLM, developed in 2022, was 2,500,000,000 petaFLOP. The training computation of AlexNet, the AI with the largest training computation up to 2012, was 470 petaFLOP. 2,500,000,000 petaFLOP / 470 petaFLOP = 5,319,148.9. At the same time, the amount of training computation required to achieve a given performance has been falling exponentially.

The costs have also increased quickly. The cost to train PaLM is estimated to be $9–$23 million, according to Lennart Heim, a researcher in the Epoch team. See Lennart Heim (2022) — Estimating PaLM's training cost .

Scaling up the size of neural networks — in terms of the number of parameters and the amount of training data and computation — has led to surprising increases in the capabilities of AI systems. This realization motivated the “scaling hypothesis.” See Gwern Branwen (2020) — The Scaling Hypothesis ⁠.

Her research was announced in various places, including in the AI Alignment Forum here: Ajeya Cotra (2020) —  Draft report on AI timelines . As far as I know, the report always remained a “draft report” and was published here on Google Docs .

The cited estimate stems from Cotra’s Two-year update on my personal AI timelines , in which she shortened her median timeline by 10 years.

Cotra emphasizes that there are substantial uncertainties around her estimates and therefore communicates her findings in a range of scenarios. She published her big study in 2020, and her median estimate at the time was that the computation required to train such a model would become affordable around the year 2050. In her “most conservative plausible” scenario, this point in time is pushed back to around 2090, and in her “most aggressive plausible” scenario, this point is reached in 2040.

The same is true for most other forecasters: all emphasize the large uncertainty associated with their forecasts.

It is worth emphasizing that the computation of the human brain is highly uncertain. See Joseph Carlsmith's New Report on How Much Computational Power It Takes to Match the Human Brain from 2020.

How artificial intelligence is transforming the world

Darrell M. West, Senior Fellow, Center for Technology Innovation, Douglas Dillon Chair in Governmental Studies, and John R. Allen

April 24, 2018

Artificial intelligence (AI) is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decision making—and already it is transforming every walk of life. In this report, Darrell West and John Allen discuss AI’s application across a variety of sectors, address issues in its development, and offer recommendations for getting the most out of AI while still protecting important human values.

Table of Contents
I. Qualities of artificial intelligence
II. Applications in diverse sectors
III. Policy, regulatory, and ethical issues
IV. Recommendations
V. Conclusion

Most people are not very familiar with the concept of artificial intelligence (AI). As an illustration, when 1,500 senior business leaders in the United States in 2017 were asked about AI, only 17 percent said they were familiar with it. 1 A number of them were not sure what it was or how it would affect their particular companies. They understood there was considerable potential for altering business processes, but were not clear how AI could be deployed within their own organizations.

Despite this widespread lack of familiarity, AI is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decisionmaking. Our hope through this comprehensive overview is to explain AI to an audience of policymakers, opinion leaders, and interested observers, and demonstrate how AI already is altering the world and raising important questions for society, the economy, and governance.

In this paper, we discuss novel applications in finance, national security, health care, criminal justice, transportation, and smart cities, and address issues such as data access problems, algorithmic bias, AI ethics and transparency, and legal liability for AI decisions. We contrast the regulatory approaches of the U.S. and European Union, and close by making a number of recommendations for getting the most out of AI while still protecting important human values. 2

In order to maximize AI benefits, we recommend nine steps for going forward:

  • Encourage greater data access for researchers without compromising users’ personal privacy,
  • invest more government funding in unclassified AI research,
  • promote new models of digital education and AI workforce development so employees have the skills needed in the 21st-century economy,
  • create a federal AI advisory committee to make policy recommendations,
  • engage with state and local officials so they enact effective policies,
  • regulate broad AI principles rather than specific algorithms,
  • take bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms,
  • maintain mechanisms for human oversight and control, and
  • penalize malicious AI behavior and promote cybersecurity.

Qualities of artificial intelligence

Although there is no uniformly agreed upon definition, AI generally is thought to refer to “machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment and intention.” 3  According to researchers Shubhendu and Vijay, these software systems “make decisions which normally require [a] human level of expertise” and help people anticipate problems or deal with issues as they come up. 4 As such, they operate in an intentional, intelligent, and adaptive manner.

Intentionality

Artificial intelligence algorithms are designed to make decisions, often using real-time data. They are unlike passive machines that are capable only of mechanical or predetermined responses. Using sensors, digital data, or remote inputs, they combine information from a variety of different sources, analyze the material instantly, and act on the insights derived from those data. With massive improvements in storage systems, processing speeds, and analytic techniques, they are capable of tremendous sophistication in analysis and decisionmaking.

Intelligence

AI generally is undertaken in conjunction with machine learning and data analytics. 5 Machine learning takes data and looks for underlying trends. If it spots something that is relevant for a practical problem, software designers can take that knowledge and use it to analyze specific issues. All that is required are data that are sufficiently robust that algorithms can discern useful patterns. Data can come in the form of digital information, satellite imagery, visual information, text, or unstructured data.

Adaptability

AI systems have the ability to learn and adapt as they make decisions. In the transportation area, for example, semi-autonomous vehicles have tools that let drivers and vehicles know about upcoming congestion, potholes, highway construction, or other possible traffic impediments. Vehicles can take advantage of the experience of other vehicles on the road, without human involvement, and the entire corpus of their achieved “experience” is immediately and fully transferable to other similarly configured vehicles. Their advanced algorithms, sensors, and cameras incorporate experience in current operations, and use dashboards and visual displays to present information in real time so human drivers are able to make sense of ongoing traffic and vehicular conditions. And in the case of fully autonomous vehicles, advanced systems can completely control the car or truck, and make all the navigational decisions.

Applications in diverse sectors

AI is not a futuristic vision, but rather something that is here today and being integrated with and deployed into a variety of sectors. This includes fields such as finance, national security, health care, criminal justice, transportation, and smart cities. There are numerous examples where AI already is making an impact on the world and augmenting human capabilities in significant ways. 6

One of the reasons for the growing role of AI is the tremendous opportunities for economic development that it presents. A project undertaken by PriceWaterhouseCoopers estimated that “artificial intelligence technologies could increase global GDP by $15.7 trillion, a full 14%, by 2030.” 7 That includes advances of $7 trillion in China, $3.7 trillion in North America, $1.8 trillion in Northern Europe, $1.2 trillion for Africa and Oceania, $0.9 trillion in the rest of Asia outside of China, $0.7 trillion in Southern Europe, and $0.5 trillion in Latin America. China is making rapid strides because it has set a national goal of investing $150 billion in AI and becoming the global leader in this area by 2030.

Meanwhile, a McKinsey Global Institute study of China found that “AI-led automation can give the Chinese economy a productivity injection that would add 0.8 to 1.4 percentage points to GDP growth annually, depending on the speed of adoption.” 8 Although its authors found that China currently lags the United States and the United Kingdom in AI deployment, the sheer size of its AI market gives that country tremendous opportunities for pilot testing and future development.

Finance

Investments in financial AI in the United States tripled between 2013 and 2014 to a total of $12.2 billion. 9 According to observers in that sector, “Decisions about loans are now being made by software that can take into account a variety of finely parsed data about a borrower, rather than just a credit score and a background check.” 10 In addition, there are so-called robo-advisers that “create personalized investment portfolios, obviating the need for stockbrokers and financial advisers.” 11 These advances are designed to take the emotion out of investing and undertake decisions based on analytical considerations, and make these choices in a matter of minutes.

A prominent example of this is taking place in stock exchanges, where high-frequency trading by machines has replaced much of human decisionmaking. People submit buy and sell orders, and computers match them in the blink of an eye without human intervention. Machines can spot trading inefficiencies or market differentials on a very small scale and execute trades that make money according to investor instructions. 12 Powered in some places by advanced computing, these tools have much greater capacities for storing information because of their emphasis not on a zero or a one, but on “quantum bits” that can store multiple values in each location. 13 That dramatically increases storage capacity and decreases processing times.

Fraud detection represents another way AI is helpful in financial systems. It sometimes is difficult to discern fraudulent activities in large organizations, but AI can identify abnormalities, outliers, or deviant cases requiring additional investigation. That helps managers find problems early in the cycle, before they reach dangerous levels. 14

National security

AI plays a substantial role in national defense. Through its Project Maven, the American military is deploying AI “to sift through the massive troves of data and video captured by surveillance and then alert human analysts of patterns or when there is abnormal or suspicious activity.” 15 According to Deputy Secretary of Defense Patrick Shanahan, the goal of emerging technologies in this area is “to meet our warfighters’ needs and to increase [the] speed and agility [of] technology development and procurement.” 16

The big data analytics associated with AI will profoundly affect intelligence analysis, as massive amounts of data are sifted in near real time—if not eventually in real time—thereby providing commanders and their staffs a level of intelligence analysis and productivity heretofore unseen. Command and control will similarly be affected as human commanders delegate certain routine, and in special circumstances, key decisions to AI platforms, reducing dramatically the time associated with the decision and subsequent action. In the end, warfare is a time competitive process, where the side able to decide the fastest and move most quickly to execution will generally prevail. Indeed, artificially intelligent intelligence systems, tied to AI-assisted command and control systems, can move decision support and decisionmaking to a speed vastly superior to the speeds of the traditional means of waging war. So fast will be this process, especially if coupled to automatic decisions to launch artificially intelligent autonomous weapons systems capable of lethal outcomes, that a new term has been coined specifically to embrace the speed at which war will be waged: hyperwar.

While the ethical and legal debate is raging over whether America will ever wage war with artificially intelligent autonomous lethal systems, the Chinese and Russians are not nearly so mired in this debate, and we should anticipate our need to defend against these systems operating at hyperwar speeds. The challenge in the West of where to position “humans in the loop” in a hyperwar scenario will ultimately dictate the West’s capacity to be competitive in this new form of conflict. 17

Just as AI will profoundly affect the speed of warfare, the proliferation of zero day or zero second cyber threats as well as polymorphic malware will challenge even the most sophisticated signature-based cyber protection. This forces significant improvement to existing cyber defenses. Increasingly, vulnerable systems are migrating, and will need to shift to a layered approach to cybersecurity with cloud-based, cognitive AI platforms. This approach moves the community toward a “thinking” defensive capability that can defend networks through constant training on known threats. This capability includes DNA-level analysis of heretofore unknown code, with the possibility of recognizing and stopping inbound malicious code by recognizing a string component of the file. This is how certain key U.S.-based systems stopped the debilitating “WannaCry” and “Petya” viruses.

Preparing for hyperwar and defending critical cyber networks must become a high priority because China, Russia, North Korea, and other countries are putting substantial resources into AI. In 2017, China’s State Council issued a plan for the country to “build a domestic industry worth almost $150 billion” by 2030. 18 As an example of the possibilities, the Chinese search firm Baidu has pioneered a facial recognition application that finds missing people. In addition, cities such as Shenzhen are providing up to $1 million to support AI labs. That country hopes AI will provide security, combat terrorism, and improve speech recognition programs. 19 The dual-use nature of many AI algorithms will mean AI research focused on one sector of society can be rapidly modified for use in the security sector as well. 20

Health care

AI tools are helping designers improve computational sophistication in health care. For example, Merantix is a German company that applies deep learning to medical issues. It has an application in medical imaging that “detects lymph nodes in the human body in Computer Tomography (CT) images.” 21 According to its developers, the key is labeling the nodes and identifying small lesions or growths that could be problematic. Humans can do this, but radiologists charge $100 per hour and may be able to carefully read only four images an hour. If there were 10,000 images, the cost of this process would be $250,000, which is prohibitively expensive if done by humans.

What deep learning can do in this situation is train computers on data sets to learn what a normal-looking versus an irregular-appearing lymph node is. After doing that through imaging exercises and honing the accuracy of the labeling, radiological imaging specialists can apply this knowledge to actual patients and determine the extent to which someone is at risk of cancerous lymph nodes. Since only a few are likely to test positive, it is a matter of identifying the unhealthy versus healthy node.
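As a rough illustration of this workflow, the sketch below trains a simple classifier on synthetic stand-in features rather than real CT images; it is not Merantix's pipeline, just the generic pattern of learning labels from examples:

```python
# A minimal sketch of the supervised-learning workflow described above, using
# synthetic stand-in data instead of real CT images (illustration only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each lymph node is summarized by 20 image-derived features;
# label 1 = irregular-appearing, 0 = normal-looking.
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy on synthetic data: {model.score(X_test, y_test):.2f}")

# In practice, a radiologist's labels play the role of `y`, and the trained
# model flags the small minority of suspicious nodes for expert review.
```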

AI has been applied to congestive heart failure as well, an illness that afflicts 10 percent of senior citizens and costs $35 billion each year in the United States. AI tools are helpful because they “predict in advance potential challenges ahead and allocate resources to patient education, sensing, and proactive interventions that keep patients out of the hospital.” 22

Criminal justice

AI is being deployed in the criminal justice area. The city of Chicago has developed an AI-driven “Strategic Subject List” that analyzes people who have been arrested for their risk of becoming future perpetrators. It ranks 400,000 people on a scale of 0 to 500, using items such as age, criminal activity, victimization, drug arrest records, and gang affiliation. In looking at the data, analysts found that youth is a strong predictor of violence, being a shooting victim is associated with becoming a future perpetrator, gang affiliation has little predictive value, and drug arrests are not significantly associated with future criminal activity. 23

Judicial experts claim AI programs reduce human bias in law enforcement and lead to a fairer sentencing system. R Street Institute Associate Caleb Watney writes:

Empirically grounded questions of predictive risk analysis play to the strengths of machine learning, automated reasoning and other forms of AI. One machine-learning policy simulation concluded that such programs could be used to cut crime up to 24.8 percent with no change in jailing rates, or reduce jail populations by up to 42 percent with no increase in crime rates. 24

However, critics worry that AI algorithms represent “a secret system to punish citizens for crimes they haven’t yet committed. The risk scores have been used numerous times to guide large-scale roundups.” 25 The fear is that such tools target people of color unfairly and have not helped Chicago reduce the murder wave that has plagued it in recent years.

Despite these concerns, other countries are moving ahead with rapid deployment in this area. In China, for example, companies already have “considerable resources and access to voices, faces and other biometric data in vast quantities, which would help them develop their technologies.” 26 New technologies make it possible to match images and voices with other types of information, and to use AI on these combined data sets to improve law enforcement and national security. Through its “Sharp Eyes” program, Chinese law enforcement is matching video images, social media activity, online purchases, travel records, and personal identity into a “police cloud.” This integrated database enables authorities to keep track of criminals, potential law-breakers, and terrorists. 27 Put differently, China has become the world’s leading AI-powered surveillance state.

Transportation

Transportation represents an area where AI and machine learning are producing major innovations. Research by Cameron Kerry and Jack Karsten of the Brookings Institution has found that over $80 billion was invested in autonomous vehicle technology between August 2014 and June 2017. Those investments include applications both for autonomous driving and the core technologies vital to that sector. 28

Autonomous vehicles—cars, trucks, buses, and drone delivery systems—use advanced technological capabilities. Those features include automated vehicle guidance and braking, lane-changing systems, the use of cameras and sensors for collision avoidance, the use of AI to analyze information in real time, and the use of high-performance computing and deep learning systems to adapt to new circumstances through detailed maps. 29

Light detection and ranging systems (LIDARs) and AI are key to navigation and collision avoidance. LIDAR systems combine light and radar instruments. Mounted on the top of vehicles, they use imaging in a 360-degree environment from radar and light beams to measure the speed and distance of surrounding objects. Along with sensors placed on the front, sides, and back of the vehicle, these instruments provide information that keeps fast-moving cars and trucks in their own lane, helps them avoid other vehicles, applies brakes and steering when needed, and does so instantly so as to avoid accidents.
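For readers curious about the underlying measurement, the sketch below shows simplified time-of-flight ranging, the basic principle by which a reflected light pulse yields a distance estimate (an illustration, not any vendor's implementation):

```python
# Back-of-the-envelope time-of-flight ranging, the basic idea behind LIDAR
# distance measurement (a simplified sketch).
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_echo(round_trip_seconds: float) -> float:
    """A pulse travels to the object and back, so halve the total path."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A return pulse arriving 200 nanoseconds after emission implies an object ~30 m away.
print(f"{distance_from_echo(200e-9):.1f} m")
```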

Since these cameras and sensors compile a huge amount of information and need to process it instantly to avoid the car in the next lane, autonomous vehicles require high-performance computing, advanced algorithms, and deep learning systems to adapt to new scenarios. This means that software is the key, not the physical car or truck itself. 30 Advanced software enables cars to learn from the experiences of other vehicles on the road and adjust their guidance systems as weather, driving, or road conditions change. 31

Ride-sharing companies are very interested in autonomous vehicles. They see advantages in terms of customer service and labor productivity. All of the major ride-sharing companies are exploring driverless cars. The surge of car-sharing and taxi services—such as Uber and Lyft in the United States, Daimler’s Mytaxi and Hailo service in Great Britain, and Didi Chuxing in China—demonstrate the opportunities of this transportation option. Uber recently signed an agreement to purchase 24,000 autonomous cars from Volvo for its ride-sharing service. 32

However, the ride-sharing firm suffered a setback in March 2018 when one of its autonomous vehicles in Arizona hit and killed a pedestrian. Uber and several auto manufacturers immediately suspended testing and launched investigations into what went wrong and how the fatality could have occurred. 33 Both industry and consumers want reassurance that the technology is safe and able to deliver on its stated promises. Unless there are persuasive answers, this accident could slow AI advancements in the transportation sector.

Smart cities

Metropolitan governments are using AI to improve urban service delivery. For example, according to Kevin Desouza, Rashmi Krishnamurthy, and Gregory Dawson:

The Cincinnati Fire Department is using data analytics to optimize medical emergency responses. The new analytics system recommends to the dispatcher an appropriate response to a medical emergency call—whether a patient can be treated on-site or needs to be taken to the hospital—by taking into account several factors, such as the type of call, location, weather, and similar calls. 34

Since it fields 80,000 requests each year, Cincinnati officials are deploying this technology to prioritize responses and determine the best ways to handle emergencies. They see AI as a way to deal with large volumes of data and figure out efficient ways of responding to public requests. Rather than address service issues in an ad hoc manner, authorities are trying to be proactive in how they provide urban services.

Cincinnati is not alone. A number of metropolitan areas are adopting smart city applications that use AI to improve service delivery, environmental planning, resource management, energy utilization, and crime prevention, among other things. For its smart cities index, the magazine Fast Company ranked American locales and found Seattle, Boston, San Francisco, Washington, D.C., and New York City as the top adopters. Seattle, for example, has embraced sustainability and is using AI to manage energy usage and resource management. Boston has launched a “City Hall To Go” that makes sure underserved communities receive needed public services. It also has deployed “cameras and inductive loops to manage traffic and acoustic sensors to identify gun shots.” San Francisco has certified 203 buildings as meeting LEED sustainability standards. 35

Through these and other means, metropolitan areas are leading the country in the deployment of AI solutions. Indeed, according to a National League of Cities report, 66 percent of American cities are investing in smart city technology. Among the top applications noted in the report are “smart meters for utilities, intelligent traffic signals, e-governance applications, Wi-Fi kiosks, and radio frequency identification sensors in pavement.” 36

Policy, regulatory, and ethical issues

These examples from a variety of sectors demonstrate how AI is transforming many walks of human existence. The increasing penetration of AI and autonomous devices into many aspects of life is altering basic operations and decisionmaking within organizations, and improving efficiency and response times.

At the same time, though, these developments raise important policy, regulatory, and ethical issues. For example, how should we promote data access? How do we guard against biased or unfair data used in algorithms? What types of ethical principles are introduced through software programming, and how transparent should designers be about their choices? What about questions of legal liability in cases where algorithms cause harm? 37

Data access problems

The key to getting the most out of AI is having a “data-friendly ecosystem with unified standards and cross-platform sharing.” AI depends on data that can be analyzed in real time and brought to bear on concrete problems. Having data that are “accessible for exploration” in the research community is a prerequisite for successful AI development. 38

According to a McKinsey Global Institute study, nations that promote open data sources and data sharing are the ones most likely to see AI advances. In this regard, the United States has a substantial advantage over China. Global ratings on data openness show that the United States ranks eighth overall in the world, compared to 93rd for China. 39

But right now, the United States does not have a coherent national data strategy. There are few protocols for promoting research access or platforms that make it possible to gain new insights from proprietary data. It is not always clear who owns data or how much belongs in the public sphere. These uncertainties limit the innovation economy and act as a drag on academic research. In the following section, we outline ways to improve data access for researchers.

Biases in data and algorithms

In some instances, certain AI systems are thought to have enabled discriminatory or biased practices. 40 For example, Airbnb has been accused of having homeowners on its platform who discriminate against racial minorities. A research project undertaken by the Harvard Business School found that “Airbnb users with distinctly African American names were roughly 16 percent less likely to be accepted as guests than those with distinctly white names.” 41

Racial issues also come up with facial recognition software. Most such systems operate by comparing a person’s face to a range of faces in a large database. As pointed out by Joy Buolamwini of the Algorithmic Justice League, “If your facial recognition data contains mostly Caucasian faces, that’s what your program will learn to recognize.” 42 Unless the databases have access to diverse data, these programs perform poorly when attempting to recognize African-American or Asian-American features.

Many historical data sets reflect traditional values, which may or may not represent the preferences wanted in a current system. As Buolamwini notes, such an approach risks repeating inequities of the past:

The rise of automation and the increased reliance on algorithms for high-stakes decisions such as whether someone get insurance or not, your likelihood to default on a loan or somebody’s risk of recidivism means this is something that needs to be addressed. Even admissions decisions are increasingly automated—what school our children go to and what opportunities they have. We don’t have to bring the structural inequalities of the past into the future we create. 43

AI ethics and transparency

Algorithms embed ethical considerations and value choices into program decisions. As such, these systems raise questions concerning the criteria used in automated decisionmaking. Some people want to have a better understanding of how algorithms function and what choices are being made. 44

In the United States, many urban schools use algorithms for enrollment decisions based on a variety of considerations, such as parent preferences, neighborhood qualities, income level, and demographic background. According to Brookings researcher Jon Valant, the New Orleans–based Bricolage Academy “gives priority to economically disadvantaged applicants for up to 33 percent of available seats. In practice, though, most cities have opted for categories that prioritize siblings of current students, children of school employees, and families that live in school’s broad geographic area.” 45 Enrollment choices can be expected to be very different when considerations of this sort come into play.

Depending on how AI systems are set up, they can facilitate the redlining of mortgage applications, help people discriminate against individuals they don’t like, or help screen or build rosters of individuals based on unfair criteria. The types of considerations that go into programming decisions matter a lot in terms of how the systems operate and how they affect customers. 46

For these reasons, the EU is implementing the General Data Protection Regulation (GDPR) in May 2018. The rules specify that people have “the right to opt out of personally tailored ads” and “can contest ‘legal or similarly significant’ decisions made by algorithms and appeal for human intervention” in the form of an explanation of how the algorithm generated a particular outcome. Each guideline is designed to ensure the protection of personal data and provide individuals with information on how the “black box” operates. 47

Legal liability

There are questions concerning the legal liability of AI systems. If there are harms or infractions (or fatalities in the case of driverless cars), the operators of the algorithm likely will fall under product liability rules. A body of case law has shown that the situation’s facts and circumstances determine liability and influence the kind of penalties that are imposed. Those can range from civil fines to imprisonment for major harms. 48 The Uber-related fatality in Arizona will be an important test case for legal liability. The state actively recruited Uber to test its autonomous vehicles and gave the company considerable latitude in terms of road testing. It remains to be seen if there will be lawsuits in this case and who is sued: the human backup driver, the state of Arizona, the Phoenix suburb where the accident took place, Uber, software developers, or the auto manufacturer. Given the multiple people and organizations involved in the road testing, there are many legal questions to be resolved.

In non-transportation areas, digital platforms often have limited liability for what happens on their sites. For example, in the case of Airbnb, the firm “requires that people agree to waive their right to sue, or to join in any class-action lawsuit or class-action arbitration, to use the service.” By demanding that its users sacrifice basic rights, the company limits consumer protections and therefore curtails the ability of people to fight discrimination arising from unfair algorithms. 49 But whether the principle of neutral networks holds up in many sectors is yet to be determined on a widespread basis.

Recommendations

In order to balance innovation with basic human values, we propose a number of recommendations for moving forward with AI. This includes improving data access, increasing government investment in AI, promoting AI workforce development, creating a federal advisory committee, engaging with state and local officials to ensure they enact effective policies, regulating broad objectives as opposed to specific algorithms, taking bias seriously as an AI issue, maintaining mechanisms for human control and oversight, and penalizing malicious behavior and promoting cybersecurity.

Improving data access

The United States should develop a data strategy that promotes innovation and consumer protection. Right now, there are no uniform standards in terms of data access, data sharing, or data protection. Almost all the data are proprietary in nature and not shared very broadly with the research community, and this limits innovation and system design. AI requires data to test and improve its learning capacity. 50 Without structured and unstructured data sets, it will be nearly impossible to gain the full benefits of artificial intelligence.

In general, the research community needs better access to government and business data, although with appropriate safeguards to make sure researchers do not misuse data in the way Cambridge Analytica did with Facebook information. There is a variety of ways researchers could gain data access. One is through voluntary agreements with companies holding proprietary data. Facebook, for example, recently announced a partnership with Stanford economist Raj Chetty to use its social media data to explore inequality. 51 As part of the arrangement, researchers were required to undergo background checks and could only access data from secured sites in order to protect user privacy and security.

Google long has made available search results in aggregated form for researchers and the general public. Through its “Trends” site, scholars can analyze topics such as interest in Trump, views about democracy, and perspectives on the overall economy. 52 That helps people track movements in public interest and identify topics that galvanize the general public.

Twitter makes much of its tweets available to researchers through application programming interfaces, commonly referred to as APIs. These tools help people outside the company build application software and make use of data from its social media platform. They can study patterns of social media communications and see how people are commenting on or reacting to current events.

In some sectors where there is a discernible public benefit, governments can facilitate collaboration by building infrastructure that shares data. For example, the National Cancer Institute has pioneered a data-sharing protocol where certified researchers can query health data it has using de-identified information drawn from clinical data, claims information, and drug therapies. That enables researchers to evaluate efficacy and effectiveness, and make recommendations regarding the best medical approaches, without compromising the privacy of individual patients.

There could be public-private data partnerships that combine government and business data sets to improve system performance. For example, cities could integrate information from ride-sharing services with their own material on social service locations, bus lines, mass transit, and highway congestion to improve transportation. That would help metropolitan areas deal with traffic tie-ups and assist in highway and mass transit planning.

Some combination of these approaches would improve data access for researchers, the government, and the business community, without impinging on personal privacy. As noted by Ian Buck, the vice president of NVIDIA, “Data is the fuel that drives the AI engine. The federal government has access to vast sources of information. Opening access to that data will help us get insights that will transform the U.S. economy.” 53 Through its Data.gov portal, the federal government already has put over 230,000 data sets into the public domain, and this has propelled innovation and aided improvements in AI and data analytic technologies. 54 The private sector also needs to facilitate research data access so that society can achieve the full benefits of artificial intelligence.

Increase government investment in AI

According to Greg Brockman, the co-founder of OpenAI, the U.S. federal government invests only $1.1 billion in non-classified AI technology. 55 That is far lower than the amount being spent by China or other leading nations in this area of research. That shortfall is noteworthy because the economic payoffs of AI are substantial. In order to boost economic development and social innovation, federal officials need to increase investment in artificial intelligence and data analytics. Higher investment is likely to pay for itself many times over in economic and social benefits. 56

Promote digital education and workforce development

As AI applications accelerate across many sectors, it is vital that we reimagine our educational institutions for a world where AI will be ubiquitous and students need a different kind of training than they currently receive. Right now, many students do not receive instruction in the kinds of skills that will be needed in an AI-dominated landscape. For example, there currently are shortages of data scientists, computer scientists, engineers, coders, and platform developers. These skills are in short supply; unless our educational system generates more people with these capabilities, AI development will be constrained.

For these reasons, both state and federal governments have been investing in AI human capital. For example, in 2017, the National Science Foundation funded over 6,500 graduate students in computer-related fields and has launched several new initiatives designed to encourage data and computer science at all levels from pre-K to higher and continuing education. 57 The goal is to build a larger pipeline of AI and data analytic personnel so that the United States can reap the full advantages of the knowledge revolution.

But there also needs to be substantial changes in the process of learning itself. It is not just technical skills that are needed in an AI world but skills of critical reasoning, collaboration, design, visual display of information, and independent thinking, among others. AI will reconfigure how society and the economy operate, and there needs to be “big picture” thinking on what this will mean for ethics, governance, and societal impact. People will need the ability to think broadly about many questions and integrate knowledge from a number of different areas.

One example of new ways to prepare students for a digital future is IBM’s Teacher Advisor program, utilizing Watson’s free online tools to help teachers bring the latest knowledge into the classroom. They enable instructors to develop new lesson plans in STEM and non-STEM fields, find relevant instructional videos, and help students get the most out of the classroom. 58 As such, they are precursors of new educational environments that need to be created.

Create a federal AI advisory committee

Federal officials need to think about how they deal with artificial intelligence. As noted previously, there are many issues ranging from the need for improved data access to addressing issues of bias and discrimination. It is vital that these and other concerns be considered so we gain the full benefits of this emerging technology.

In order to move forward in this area, several members of Congress have introduced the “Future of Artificial Intelligence Act,” a bill designed to establish broad policy and legal principles for AI. It proposes the secretary of commerce create a federal advisory committee on the development and implementation of artificial intelligence. The legislation provides a mechanism for the federal government to get advice on ways to promote a “climate of investment and innovation to ensure the global competitiveness of the United States,” “optimize the development of artificial intelligence to address the potential growth, restructuring, or other changes in the United States workforce,” “support the unbiased development and application of artificial intelligence,” and “protect the privacy rights of individuals.” 59

The specific issues the committee is asked to address include competitiveness, workforce impact, education, ethics training, data sharing, international cooperation, accountability, machine learning bias, rural impact, government efficiency, investment climate, job impact, and consumer impact. The committee is directed to submit a report to Congress and the administration within 540 days of enactment regarding any legislative or administrative action needed on AI.

This legislation is a step in the right direction, although the field is moving so rapidly that we would recommend shortening the reporting timeline from 540 days to 180 days. Waiting a year and a half for a committee report will certainly result in missed opportunities and a lack of action on important issues. Given rapid advances in the field, a much quicker turnaround time on the committee analysis would be quite beneficial.

Engage with state and local officials

States and localities also are taking action on AI. For example, the New York City Council unanimously passed a bill that directed the mayor to form a taskforce that would “monitor the fairness and validity of algorithms used by municipal agencies.” 60 The city employs algorithms to “determine if a lower bail will be assigned to an indigent defendant, where firehouses are established, student placement for public schools, assessing teacher performance, identifying Medicaid fraud and determine where crime will happen next.” 61

According to the legislation’s developers, city officials want to know how these algorithms work and make sure there is sufficient AI transparency and accountability. In addition, there is concern regarding the fairness and biases of AI algorithms, so the taskforce has been directed to analyze these issues and make recommendations regarding future usage. It is scheduled to report back to the mayor on a range of AI policy, legal, and regulatory issues by late 2019.

Some observers already are worrying that the taskforce won’t go far enough in holding algorithms accountable. For example, Julia Powles of Cornell Tech and New York University argues that the bill originally required companies to make the AI source code available to the public for inspection, and that there be simulations of its decisionmaking using actual data. After criticism of those provisions, however, former Councilman James Vacca dropped the requirements in favor of a task force studying these issues. He and other city officials were concerned that publication of proprietary information on algorithms would slow innovation and make it difficult to find AI vendors who would work with the city. 62 It remains to be seen how this local task force will balance issues of innovation, privacy, and transparency.

Regulate broad objectives more than specific algorithms

The European Union has taken a restrictive stance on these issues of data collection and analysis. 63 It has rules limiting the ability of companies to collect data on road conditions and map street views. Because many of these countries worry that people’s personal information in unencrypted Wi-Fi networks is swept up in overall data collection, the EU has fined technology firms, demanded copies of data, and placed limits on the material collected. 64 This has made it more difficult for technology companies operating there to develop the high-definition maps required for autonomous vehicles.

The GDPR being implemented in Europe places severe restrictions on the use of artificial intelligence and machine learning. According to published guidelines, “Regulations prohibit any automated decision that ‘significantly affects’ EU citizens. This includes techniques that evaluates a person’s ‘performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.’” 65 In addition, these new rules give citizens the right to review how digital services make specific algorithmic choices that affect them.

If interpreted stringently, these rules will make it difficult for European software designers (and American designers who work with European counterparts) to incorporate artificial intelligence and high-definition mapping in autonomous vehicles. Central to navigation in these cars and trucks is tracking location and movements. Without high-definition maps containing geo-coded data and the deep learning that makes use of this information, fully autonomous driving will stagnate in Europe. Through this and other data protection actions, the European Union is putting its manufacturers and software designers at a significant disadvantage to the rest of the world.

It makes more sense to think about the broad objectives desired in AI and enact policies that advance them, as opposed to governments trying to crack open the “black boxes” and see exactly how specific algorithms operate. Regulating individual algorithms will limit innovation and make it difficult for companies to make use of artificial intelligence.

Take biases seriously

Bias and discrimination are serious issues for AI. There already have been a number of cases of unfair treatment linked to historic data, and steps need to be undertaken to make sure that does not become prevalent in artificial intelligence. Existing statutes governing discrimination in the physical economy need to be extended to digital platforms. That will help protect consumers and build confidence in these systems as a whole.

For these advances to be widely adopted, more transparency is needed in how AI systems operate. Andrew Burt of Immuta argues, “The key problem confronting predictive analytics is really transparency. We’re in a world where data science operations are taking on increasingly important tasks, and the only thing holding them back is going to be how well the data scientists who train the models can explain what it is their models are doing.” 66

Maintain mechanisms for human oversight and control

Some individuals have argued that there need to be avenues for humans to exercise oversight and control of AI systems. For example, Allen Institute for Artificial Intelligence CEO Oren Etzioni argues there should be rules for regulating these systems. First, he says, AI must be governed by all the laws that already have been developed for human behavior, including regulations concerning “cyberbullying, stock manipulation or terrorist threats,” as well as “entrap[ping] people into committing crimes.” Second, he believes that these systems should disclose they are automated systems and not human beings. Third, he states, “An A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.” 67 His rationale is that these tools store so much data that people have to be cognizant of the privacy risks posed by AI.

In the same vein, the IEEE Global Initiative has ethical guidelines for AI and autonomous systems. Its experts suggest that these models be programmed with consideration for widely accepted human norms and rules for behavior. AI algorithms need to take into account the importance of these norms, how norm conflict can be resolved, and ways these systems can be transparent about norm resolution. Software should be designed for “nondeception” and “honesty,” according to ethics experts. When failures occur, there must be mitigation mechanisms to deal with the consequences. In particular, AI must be sensitive to problems such as bias, discrimination, and fairness. 68

A group of machine learning experts claim it is possible to automate ethical decisionmaking. Using the trolley problem as a moral dilemma, they ask the following question: If an autonomous car goes out of control, should it be programmed to kill its own passengers or the pedestrians who are crossing the street? They devised a “voting-based system” that asked 1.3 million people to assess alternative scenarios, summarized the overall choices, and applied the overall perspective of these individuals to a range of vehicular possibilities. That allowed them to automate ethical decisionmaking in AI algorithms, taking public preferences into account. 69 This procedure, of course, does not reduce the tragedy involved in any kind of fatality, such as seen in the Uber case, but it provides a mechanism to help AI developers incorporate ethical considerations in their planning.
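
To make the aggregation step concrete, here is a minimal sketch of the voting-based idea in Python. Everything in it (scenario names, options, vote counts) is invented for illustration; the actual study collected preferences from 1.3 million people and used far richer preference models.

```python
# Adopt the majority preference for each dilemma scenario.
from collections import Counter

# votes[scenario] lists the option each respondent preferred (hypothetical).
votes = {
    "swerve_or_stay_with_pedestrians_ahead": ["swerve", "swerve", "stay", "swerve"],
    "spare_child_or_two_adults":             ["spare_child", "spare_adults", "spare_child"],
}

def aggregate_policy(all_votes: dict[str, list[str]]) -> dict[str, str]:
    """Map each scenario to the option chosen by the most respondents."""
    return {scenario: Counter(choices).most_common(1)[0][0]
            for scenario, choices in all_votes.items()}

print(aggregate_policy(votes))
# {'swerve_or_stay_with_pedestrians_ahead': 'swerve', 'spare_child_or_two_adults': 'spare_child'}
```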

Penalize malicious behavior and promote cybersecurity

As with any emerging technology, it is important to discourage malicious treatment designed to trick software or use it for undesirable ends. 70 This is especially important given the dual-use aspects of AI, where the same tool can be used for beneficial or malicious purposes. The malevolent use of AI exposes individuals and organizations to unnecessary risks and undermines the virtues of the emerging technology. This includes behaviors such as hacking, manipulating algorithms, compromising privacy and confidentiality, or stealing identities. Efforts to hijack AI in order to solicit confidential information should be seriously penalized as a way to deter such actions. 71

In a rapidly changing world with many entities having advanced computing capabilities, there needs to be serious attention devoted to cybersecurity. Countries have to be careful to safeguard their own systems and keep other nations from damaging their security. 72 According to the U.S. Department of Homeland Security, a major American bank receives around 11 million calls a week at its service center. In order to protect its telephony from denial of service attacks, it uses a “machine learning-based policy engine [that] blocks more than 120,000 calls per month based on voice firewall policies including harassing callers, robocalls and potential fraudulent calls.” 73 This represents a way in which machine learning can help defend technology systems from malevolent attacks.
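
To give a sense of what a call-screening “policy engine” looks like in code, here is a heavily simplified sketch. The bank’s actual system is machine learning-based and proprietary; in this illustration, hand-set features, weights, and thresholds stand in for a learned model and are entirely invented.

```python
# A toy "voice firewall" policy engine: score each incoming call from simple,
# invented features, then block calls whose risk score crosses a threshold.
# A real system would learn the scoring function from labeled call data.
def risk_score(call: dict) -> float:
    score = 0.0
    if call["calls_from_number_last_hour"] > 20:   # robocall-like burst
        score += 0.5
    if call["caller_id_spoofed"]:
        score += 0.3
    if call["on_harassment_list"]:
        score += 0.4
    return score

def policy_engine(call: dict, block_threshold: float = 0.6) -> str:
    return "block" if risk_score(call) >= block_threshold else "allow"

print(policy_engine({"calls_from_number_last_hour": 45,
                     "caller_id_spoofed": True,
                     "on_harassment_list": False}))   # -> "block" (score 0.8)
```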

To summarize, the world is on the cusp of revolutionizing many sectors through artificial intelligence and data analytics. There already are significant deployments in finance, national security, health care, criminal justice, transportation, and smart cities that have altered decisionmaking, business models, risk mitigation, and system performance. These developments are generating substantial economic and social benefits.

Yet the manner in which AI systems unfold has major implications for society as a whole. It matters how policy issues are addressed, ethical conflicts are reconciled, legal realities are resolved, and how much transparency is required in AI and data analytic solutions. 74 Human choices about software development affect the way in which decisions are made and the manner in which they are integrated into organizational routines. Exactly how these processes are executed needs to be better understood because they will have a substantial impact on the general public soon, and for the foreseeable future. AI may well be a revolution in human affairs, and become the single most influential human innovation in history.

Note: We appreciate the research assistance of Grace Gilberg, Jack Karsten, Hillary Schaub, and Kristjan Tomasson on this project.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Support for this publication was generously provided by Amazon. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment. 

John R. Allen is a member of the Board of Advisors of Amida Technology and on the Board of Directors of Spark Cognition. Both companies work in fields discussed in this piece.

  • Thomas Davenport, Jeff Loucks, and David Schatsky, “Bullish on the Business Value of Cognitive” (Deloitte, 2017), p. 3 (www2.deloitte.com/us/en/pages/deloitte-analytics/articles/cognitive-technology-adoption-survey.html).
  • Luke Dormehl, Thinking Machines: The Quest for Artificial Intelligence—and Where It’s Taking Us Next (New York: Penguin–TarcherPerigee, 2017).
  • Shubhendu and Vijay, “Applicability of Artificial Intelligence in Different Fields of Life.”
  • Andrew McAfee and Erik Brynjolfsson, Machine Platform Crowd: Harnessing Our Digital Future (New York: Norton, 2017).
  • Portions of this paper draw on Darrell M. West, The Future of Work: Robots, AI, and Automation , Brookings Institution Press, 2018.
  • PriceWaterhouseCoopers, “Sizing the Prize: What’s the Real Value of AI for Your Business and How Can You Capitalise?” 2017.
  • Dominic Barton, Jonathan Woetzel, Jeongmin Seong, and Qinzheng Tian, “Artificial Intelligence: Implications for China” (New York: McKinsey Global Institute, April 2017), p. 1.
  • Nathaniel Popper, “Stocks and Bots,” New York Times Magazine , February 28, 2016.
  • Michael Lewis, Flash Boys: A Wall Street Revolt (New York: Norton, 2015).
  • Cade Metz, “In Quantum Computing Race, Yale Professors Battle Tech Giants,” New York Times , November 14, 2017, p. B3.
  • Executive Office of the President, “Artificial Intelligence, Automation, and the Economy,” December 2016, pp. 27-28.
  • Christian Davenport, “Future Wars May Depend as Much on Algorithms as on Ammunition, Report Says,” Washington Post , December 3, 2017.
  • John R. Allen and Amir Husain, “On Hyperwar,” Naval Institute Proceedings , July 17, 2017, pp. 30-36.
  • Paul Mozur, “China Sets Goal to Lead in Artificial Intelligence,” New York Times , July 21, 2017, p. B1.
  • Paul Mozur and John Markoff, “Is China Outsmarting American Artificial Intelligence?” New York Times , May 28, 2017.
  • Economist , “America v China: The Battle for Digital Supremacy,” March 15, 2018.
  • Rasmus Rothe, “Applying Deep Learning to Real-World Problems,” Medium , May 23, 2017.
  • Eric Horvitz, “Reflections on the Status and Future of Artificial Intelligence,” Testimony before the U.S. Senate Subcommittee on Space, Science, and Competitiveness, November 30, 2016, p. 5.
  • Jeff Asher and Rob Arthur, “Inside the Algorithm That Tries to Predict Gun Violence in Chicago,” New York Times Upshot , June 13, 2017.
  • Caleb Watney, “It’s Time for our Justice System to Embrace Artificial Intelligence,” TechTank (blog), Brookings Institution, July 20, 2017.
  • Asher and Arthur, “Inside the Algorithm That Tries to Predict Gun Violence in Chicago.”
  • Paul Mozur and Keith Bradsher, “China’s A.I. Advances Help Its Tech Industry, and State Security,” New York Times , December 3, 2017.
  • Simon Denyer, “China’s Watchful Eye,” Washington Post , January 7, 2018.
  • Cameron Kerry and Jack Karsten, “Gauging Investment in Self-Driving Cars,” Brookings Institution, October 16, 2017.
  • Portions of this section are drawn from Darrell M. West, “Driverless Cars in China, Europe, Japan, Korea, and the United States,” Brookings Institution, September 2016.
  • Yuming Ge, Xiaoman Liu, Libo Tang, and Darrell M. West, “Smart Transportation in China and the United States,” Center for Technology Innovation, Brookings Institution, December 2017.
  • Peter Holley, “Uber Signs Deal to Buy 24,000 Autonomous Vehicles from Volvo,” Washington Post , November 20, 2017.
  • Daisuke Wakabayashi, “Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam,” New York Times , March 19, 2018.
  • Kevin Desouza, Rashmi Krishnamurthy, and Gregory Dawson, “Learning from Public Sector Experimentation with Artificial Intelligence,” TechTank (blog), Brookings Institution, June 23, 2017.
  • Boyd Cohen, “The 10 Smartest Cities in North America,” Fast Company , November 14, 2013.
  • Teena Maddox, “66% of US Cities Are Investing in Smart City Technology,” TechRepublic , November 6, 2017.
  • Osonde Osoba and William Welser IV, “The Risks of Artificial Intelligence to Security and the Future of Work” (Santa Monica, Calif.: RAND Corp., December 2017) (www.rand.org/pubs/perspectives/PE237.html).
  • Ibid., p. 7.
  • Dominic Barton, Jonathan Woetzel, Jeongmin Seong, and Qinzheng Tian, “Artificial Intelligence: Implications for China” (New York: McKinsey Global Institute, April 2017), p. 7.
  • Executive Office of the President, “Preparing for the Future of Artificial Intelligence,” October 2016, pp. 30-31.
  • Elaine Glusac, “As Airbnb Grows, So Do Claims of Discrimination,” New York Times , June 21, 2016.
  • “Joy Buolamwini,” Bloomberg Businessweek , July 3, 2017, p. 80.
  • Mark Purdy and Paul Daugherty, “Why Artificial Intelligence is the Future of Growth,” Accenture, 2016.
  • Jon Valant, “Integrating Charter Schools and Choice-Based Education Systems,” Brown Center Chalkboard blog, Brookings Institution, June 23, 2017.
  • Tucker, “‘A White Mask Worked Better.’”
  • Cliff Kuang, “Can A.I. Be Taught to Explain Itself?” New York Times Magazine , November 21, 2017.
  • Yale Law School Information Society Project, “Governing Machine Learning,” September 2017.
  • Katie Benner, “Airbnb Vows to Fight Racism, But Its Users Can’t Sue to Prompt Fairness,” New York Times , June 19, 2016.
  • Executive Office of the President, “Artificial Intelligence, Automation, and the Economy” and “Preparing for the Future of Artificial Intelligence.”
  • Nancy Scola, “Facebook’s Next Project: American Inequality,” Politico , February 19, 2018.
  • Darrell M. West, “What Internet Search Data Reveals about Donald Trump’s First Year in Office,” Brookings Institution policy report, January 17, 2018.
  • Ian Buck, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” February 14, 2018.
  • Keith Nakasone, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” March 7, 2018.
  • Greg Brockman, “The Dawn of Artificial Intelligence,” Testimony before U.S. Senate Subcommittee on Space, Science, and Competitiveness, November 30, 2016.
  • Amir Khosrowshahi, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” February 14, 2018.
  • James Kurose, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” March 7, 2018.
  • Stephen Noonoo, “Teachers Can Now Use IBM’s Watson to Search for Free Lesson Plans,” EdSurge , September 13, 2017.
  • Congress.gov, “H.R. 4625 FUTURE of Artificial Intelligence Act of 2017,” December 12, 2017.
  • Elizabeth Zima, “Could New York City’s AI Transparency Bill Be a Model for the Country?” Government Technology , January 4, 2018.
  • Julia Powles, “New York City’s Bold, Flawed Attempt to Make Algorithms Accountable,” New Yorker , December 20, 2017.
  • Sheera Frenkel, “Tech Giants Brace for Europe’s New Data Privacy Rules,” New York Times , January 28, 2018.
  • Claire Miller and Kevin O’Brien, “Germany’s Complicated Relationship with Google Street View,” New York Times , April 23, 2013.
  • Cade Metz, “Artificial Intelligence is Setting Up the Internet for a Huge Clash with Europe,” Wired , July 11, 2016.
  • Eric Siegel, “Predictive Analytics Interview Series: Andrew Burt,” Predictive Analytics Times , June 14, 2017.
  • Oren Etzioni, “How to Regulate Artificial Intelligence,” New York Times , September 1, 2017.
  • “Ethical Considerations in Artificial Intelligence and Autonomous Systems,” unpublished paper. IEEE Global Initiative, 2018.
  • Ritesh Noothigattu, Snehalkumar Gaikwad, Edmond Awad, Sohan Dsouza, Iyad Rahwan, Pradeep Ravikumar, and Ariel Procaccia, “A Voting-Based System for Ethical Decision Making,” Computers and Society , September 20, 2017 (www.media.mit.edu/publications/a-voting-based-system-for-ethical-decision-making/).
  • Miles Brundage, et al., “The Malicious Use of Artificial Intelligence,” University of Oxford unpublished paper, February 2018.
  • John Markoff, “As Artificial Intelligence Evolves, So Does Its Criminal Potential,” New York Times, October 24, 2016, p. B3.
  • Economist , “The Challenger: Technopolitics,” March 17, 2018.
  • Douglas Maughan, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” March 7, 2018.
  • Levi Tillemann and Colin McCormick, “Roadmapping a U.S.-German Agenda for Artificial Intelligence Policy,” New American Foundation, March 2017.

The present and future of AI

Finale Doshi-Velez on how AI is shaping our lives and how we can shape AI.

Finale Doshi-Velez, the John L. Loeb Professor of Engineering and Applied Sciences. (Photo courtesy of Eliza Grinnell/Harvard SEAS)

How has artificial intelligence changed and shaped our world over the last five years? How will AI continue to impact our lives in the coming years? Those were the questions addressed in the most recent report from the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted at Stanford University that will study the status of AI technology and its impacts on the world over the next 100 years.

The 2021 report is the second in a series that will be released every five years until 2116. Titled “Gathering Strength, Gathering Storms,” the report explores the various ways AI is increasingly touching people’s lives in settings that range from movie recommendations and voice assistants to autonomous driving and automated medical diagnoses.

Barbara Grosz, the Higgins Research Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), is a member of the standing committee overseeing the AI100 project, and Finale Doshi-Velez, Gordon McKay Professor of Computer Science, is part of the panel of interdisciplinary researchers who wrote this year’s report.

We spoke with Doshi-Velez about the report, what it says about the role AI is currently playing in our lives, and how it will change in the future.  

Q: Let's start with a snapshot: What is the current state of AI and its potential?

Doshi-Velez: Some of the biggest changes in the last five years have been how well AIs now perform in large data regimes on specific types of tasks.  We've seen [DeepMind’s] AlphaZero become the best Go player entirely through self-play, and everyday uses of AI such as grammar checks and autocomplete, automatic personal photo organization and search, and speech recognition become commonplace for large numbers of people.  

In terms of potential, I'm most excited about AIs that might augment and assist people.  They can be used to drive insights in drug discovery, help with decision making such as identifying a menu of likely treatment options for patients, and provide basic assistance, such as lane keeping while driving or text-to-speech based on images from a phone for the visually impaired.  In many situations, people and AIs have complementary strengths. I think we're getting closer to unlocking the potential of people and AI teams.

Q: Over the course of 100 years, these reports will tell the story of AI and its evolving role in society. Even though there have only been two reports, what's the story so far?

There's actually a lot of change even in five years. The first report is fairly rosy. For example, it mentions how algorithmic risk assessments may mitigate the human biases of judges. The second has a much more mixed view. I think this comes from the fact that as AI tools have come into the mainstream, in both higher-stakes and everyday settings, we are appropriately much less willing to tolerate flaws, especially discriminatory ones. There have also been questions of information and disinformation control as people get their news, social media, and entertainment via searches and rankings personalized to them. So, there's a much greater recognition that we should not be waiting for AI tools to become mainstream before making sure they are ethical.

Q: What is the responsibility of institutes of higher education in preparing students and the next generation of computer scientists for the future of AI and its impact on society?

First, I'll say that the need to understand the basics of AI and data science starts much earlier than higher education!  Children are being exposed to AIs as soon as they click on videos on YouTube or browse photo albums. They need to understand aspects of AI such as how their actions affect future recommendations.

But for computer science students in college, I think a key thing that future engineers need to realize is when to demand input and how to talk across disciplinary boundaries to get at often difficult-to-quantify notions of safety, equity, fairness, etc.  I'm really excited that Harvard has the Embedded EthiCS program to provide some of this education.  Of course, this is an addition to standard good engineering practices like building robust models, validating them, and so forth, which is all a bit harder with AI.

Q: Your work focuses on machine learning with applications to healthcare, which is also an area of focus of this report. What is the state of AI in healthcare? 

A lot of AI in healthcare has been on the business end, used for optimizing billing, scheduling surgeries, that sort of thing.  When it comes to AI for better patient care, which is what we usually think about, there are few legal, regulatory, and financial incentives to do so, and many disincentives. Still, there's been slow but steady integration of AI-based tools, often in the form of risk scoring and alert systems.

In the near future, two applications that I'm really excited about are triage in low-resource settings — having AIs do initial reads of pathology slides, for example, if there are not enough pathologists, or get an initial check of whether a mole looks suspicious — and ways in which AIs can help identify promising treatment options for discussion with a clinician team and patient.

Q: Any predictions for the next report?

I'll be keen to see where currently nascent AI regulation initiatives have gotten to. Accountability is such a difficult question in AI; it's tricky to nurture both innovation and basic protections. Perhaps the most important innovation will be in approaches for AI accountability.

What Is Artificial Intelligence? Definition, Uses, and Types

Learn what artificial intelligence actually is, how it’s used today, and what it may do in the future.

Artificial intelligence (AI) refers to computer systems capable of performing complex tasks that historically only a human could do, such as reasoning, making decisions, or solving problems. 

Today, the term “AI” describes a wide range of technologies that power many of the services and goods we use every day – from apps that recommend TV shows to chatbots that provide customer support in real time. But do all of these really constitute artificial intelligence as most of us envision it? And if not, then why do we use the term so often?

In this article, you’ll learn more about artificial intelligence, what it actually does, and different types of it. In the end, you’ll also learn about some of its benefits and dangers and explore flexible courses that can help you expand your knowledge of AI even further.  

What is artificial intelligence?

Artificial intelligence (AI) is the theory and development of computer systems capable of performing tasks that historically required human intelligence, such as recognizing speech, making decisions, and identifying patterns. AI is an umbrella term that encompasses a wide variety of technologies, including machine learning, deep learning, and natural language processing (NLP).

Although the term is commonly used to describe a range of different technologies in use today, many disagree on whether these actually constitute artificial intelligence. Instead, some argue that much of the technology used in the real world today actually constitutes highly advanced machine learning that is simply a first step towards true artificial intelligence, or “artificial general intelligence” (AGI).

Yet, despite the many philosophical disagreements over whether “true” intelligent machines actually exist, when most people use the term AI today, they’re referring to a suite of machine learning-powered technologies, such as ChatGPT or computer vision, that enable machines to perform tasks that previously only humans could do, like generating written content, steering a car, or analyzing data.

Artificial intelligence examples 

Though the humanoid robots often associated with AI (think Star Trek: The Next Generation’s Data or Terminator’s T-800) don’t exist yet, you’ve likely interacted with machine learning-powered services or devices many times before.

At the simplest level, machine learning uses algorithms trained on data sets to create machine learning models that allow computer systems to perform tasks like making song recommendations, identifying the fastest way to travel to a destination, or translating text from one language to another (a minimal sketch of this idea follows the list below). Some of the most common examples of AI in use today include:

ChatGPT: Uses large language models (LLMs) to generate text in response to questions or comments posed to it.

Google Translate: Uses deep learning algorithms to translate text from one language to another. 

Netflix: Uses machine learning algorithms to create personalized recommendation engines for users based on their previous viewing history. 

Tesla: Uses computer vision to power self-driving features on their cars. 
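
As a concrete illustration of the “model trained on data” idea behind services like song recommendation, here is a minimal sketch. The listening counts, song names, and similarity method are all invented for illustration and bear no relation to any real product.

```python
# Recommend songs to a user by finding the most similar listener and suggesting
# songs that listener plays which the target user has not heard. All data here
# is made up.
import numpy as np

# Rows = users, columns = play counts for songs A..E (hypothetical).
plays = np.array([
    [5, 0, 3, 0, 2],   # user 0
    [4, 1, 4, 0, 1],   # user 1
    [0, 5, 0, 4, 0],   # user 2
])
songs = ["A", "B", "C", "D", "E"]

def recommend(user_idx: int, k: int = 2) -> list[str]:
    """Suggest up to k songs the most similar listener plays that this user hasn't."""
    target = plays[user_idx]
    norms = np.linalg.norm(plays, axis=1) * np.linalg.norm(target)
    sims = plays @ target / np.where(norms == 0, 1, norms)   # cosine similarity
    sims[user_idx] = -1                                      # ignore the user themselves
    neighbor = int(np.argmax(sims))                          # most similar listener
    unheard = (target == 0) & (plays[neighbor] > 0)
    ranked = np.argsort(-(plays[neighbor] * unheard))
    return [songs[i] for i in ranked if unheard[i]][:k]

print(recommend(0))   # -> ['B']: user 1 is most similar and plays song B, which user 0 hasn't heard
```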

Read more: Deep Learning vs. Machine Learning: Beginner’s Guide

The increasing accessibility of generative AI tools has made it an in-demand skill for many tech roles. If you're interested in learning to work with AI for your career, you might consider a free, beginner-friendly online program like Google's Introduction to Generative AI.

AI in the workforce

Artificial intelligence is prevalent across many industries. Automating tasks that don't require human intervention saves money and time, and can reduce the risk of human error. Here are a couple of ways AI could be employed in different industries:

Finance industry. Fraud detection is a notable use case for AI in the finance industry. AI's capability to analyze large amounts of data enables it to detect anomalies or patterns that signal fraudulent behavior (see the sketch after these examples).

Health care industry. AI-powered robotics could support surgeries close to highly delicate organs or tissue to mitigate blood loss or risk of infection.
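
As a toy illustration of the fraud-detection idea (not any bank's actual system), the sketch below trains an off-the-shelf anomaly detector on mostly ordinary transactions and flags the outliers. The features, numbers, and contamination rate are invented.

```python
# Flag anomalous transactions with an isolation forest. Real fraud systems use
# far richer features, labels, and feedback loops; this only shows the shape of
# the idea.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Features per transaction: [amount in dollars, hour of day] (hypothetical).
ordinary = np.column_stack([rng.normal(60, 20, 500), rng.normal(14, 3, 500)])
suspect = np.array([[4200.0, 3.0], [3900.0, 4.0]])        # large withdrawals at 3-4 a.m.
transactions = np.vstack([ordinary, suspect])

detector = IsolationForest(contamination=0.01, random_state=0).fit(transactions)
flags = detector.predict(transactions)                     # -1 marks likely anomalies

print(transactions[flags == -1])                           # the odd transactions surface for review
```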

What is artificial general intelligence (AGI)? 

Artificial general intelligence (AGI) refers to a theoretical state in which computer systems will be able to achieve or exceed human intelligence. In other words, AGI is “true” artificial intelligence as depicted in countless science fiction novels, television shows, movies, and comics. 

As for the precise meaning of “AI” itself, researchers don’t quite agree on how we would recognize “true” artificial general intelligence when it appears. However, the most famous approach to identifying whether a machine is intelligent or not is known as the Turing Test or Imitation Game, an experiment that was first outlined by influential mathematician, computer scientist, and cryptanalyst Alan Turing in a 1950 paper on computer intelligence. There, Turing described a three-player game in which a human “interrogator” is asked to communicate via text with another human and a machine and judge who composed each response. If the interrogator cannot reliably identify the human, then Turing says the machine can be said to be intelligent [1].
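
To make the structure of the test concrete, here is a bare-bones sketch of the imitation game as a program. The respondent functions and the judge are trivial stand-ins, not a serious implementation.

```python
# The interrogator ("judge") sees only labeled transcripts from two unseen
# respondents and must guess which one is the machine. Both respondents here
# answer identically on purpose, so the judge can do no better than chance.
import random

def human_respondent(question: str) -> str:
    return "I'd have to think about that for a while."

def machine_respondent(question: str) -> str:
    return "I'd have to think about that for a while."

def imitation_game(questions, judge) -> bool:
    """Return True if the judge correctly identifies the machine."""
    players = {"X": human_respondent, "Y": machine_respondent}
    transcripts = {label: [fn(q) for q in questions] for label, fn in players.items()}
    guess = judge(transcripts)             # judge sees labels and answers only
    return players[guess] is machine_respondent

naive_judge = lambda transcripts: random.choice(list(transcripts))
print(imitation_game(["Please write me a sonnet on the subject of the Forth Bridge."],
                     naive_judge))         # True about half the time
```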

To complicate matters, researchers and philosophers also can’t quite agree whether we’re beginning to achieve AGI, if it’s still far off, or just totally impossible. For example, while a recent paper from Microsoft Research argues that GPT-4 is an early form of AGI, many other researchers are skeptical of these claims and argue that they were just made for publicity [2, 3].

Regardless of how far we are from achieving AGI, you can assume that when someone uses the term artificial general intelligence, they’re referring to the kind of sentient computer programs and machines that are commonly found in popular science fiction. 

Strong AI vs. Weak AI

When researching artificial intelligence, you might have come across the terms “strong” and “weak” AI. Though these terms might seem confusing, you likely already have a sense of what they mean. 

Strong AI is essentially AI that is capable of human-level, general intelligence. In other words, it’s just another way to say “artificial general intelligence.” 

Weak AI , meanwhile, refers to the narrow use of widely available AI technology, like machine learning or deep learning, to perform very specific tasks, such as playing chess, recommending songs, or steering cars. Also known as Artificial Narrow Intelligence (ANI), weak AI is essentially the kind of AI we use daily.

Read more: Machine Learning vs. AI: Differences, Uses, and Benefits

The 4 Types of AI 

As researchers attempt to build more advanced forms of artificial intelligence, they must also begin to formulate more nuanced understandings of what intelligence or even consciousness precisely mean. In their attempt to clarify these concepts, researchers have outlined four types of artificial intelligence .

Here’s a summary of each AI type, according to Professor Arend Hintze of Michigan State University [4]:

1. Reactive machines

Reactive machines are the most basic type of artificial intelligence. Machines built in this way don’t possess any knowledge of previous events but instead only “react” to what is before them in a given moment. As a result, they can only perform certain advanced tasks within a very narrow scope, such as playing chess, and are incapable of performing tasks outside of their limited context. 

2. Limited memory machines

Machines with limited memory possess a limited understanding of past events. They can interact more with the world around them than reactive machines can. For example, self-driving cars use a form of limited memory to make turns, observe approaching vehicles, and adjust their speed. However, machines with only limited memory cannot form a complete understanding of the world because their recall of past events is limited and only used in a narrow band of time. 
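
The difference between these first two types is easy to see in code. Below is a minimal sketch using an invented “distance to the car ahead” scenario: the reactive policy looks only at the current observation, while the limited-memory policy also keeps a short sliding window of recent observations, roughly the way a self-driving car tracks vehicles around it.

```python
# Contrast a purely reactive agent with one that keeps a short memory buffer.
from collections import deque

def reactive_policy(current_gap: float) -> str:
    # Decision depends only on what is in front of the agent right now.
    return "brake" if current_gap < 10.0 else "cruise"

class LimitedMemoryPolicy:
    def __init__(self, horizon: int = 3):
        self.recent_gaps = deque(maxlen=horizon)   # short, sliding window of the past

    def act(self, current_gap: float) -> str:
        self.recent_gaps.append(current_gap)
        closing_fast = (len(self.recent_gaps) == self.recent_gaps.maxlen
                        and self.recent_gaps[0] - self.recent_gaps[-1] > 5.0)
        if current_gap < 10.0 or closing_fast:
            return "brake"
        return "cruise"

gaps = [40.0, 30.0, 18.0, 12.0]                    # distance to the car ahead over time
agent = LimitedMemoryPolicy()
for g in gaps:
    print(g, reactive_policy(g), agent.act(g))
# The reactive agent keeps cruising until the gap is already small; the
# limited-memory agent notices the gap shrinking and brakes earlier.
```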

3. Theory of mind machines

Machines that possess a “theory of mind” represent an early form of artificial general intelligence. In addition to being able to create representations of the world, machines of this type would also have an understanding of other entities that exist within the world. As of this moment, this reality has still not materialized. 

4. Self-aware machines

Machines with self-awareness are the theoretically most advanced type of AI and would possess an understanding of the world, others, and themselves. This is what most people mean when they talk about achieving AGI. Currently, this is a far-off reality. 

AI benefits and dangers

AI has a range of applications with the potential to transform how we work and our daily lives. While many of these transformations are exciting, like self-driving cars, virtual assistants, or wearable devices in the healthcare industry, they also pose many challenges.

It’s a complicated picture that often summons competing images: a utopia for some, a dystopia for others. The reality is likely to be much more complex. Here are a few of the possible benefits and dangers AI may pose: 

These are just some of the ways that AI provides benefits and dangers to society. When using new technologies like AI, it’s best to keep a clear mind about what it is and isn’t. With great power comes great responsibility, after all. 

Read more: AI Ethics: What It Is and Why It Matters

Build AI skills on Coursera

Artificial Intelligence is quickly changing the world we live in. If you’re interested in learning more about AI and how you can use it at work or in your own life, consider taking a relevant course on Coursera today. 

In DeepLearning.AI’s AI For Everyone course, you’ll learn what AI can realistically do and not do, how to spot opportunities to apply AI to problems in your own organization, and what it feels like to build machine learning and data science projects. 

In DeepLearning.AI’s AI For Good Specialization, meanwhile, you’ll build skills combining human and machine intelligence for positive real-world impact using AI in a beginner-friendly, three-course program. 

Article sources

UMBC. “ Computing Machinery and Intelligence by A. M. Turing , https://redirect.cs.umbc.edu/courses/471/papers/turing.pdf.” Accessed March 30, 2024.

ArXiv. “ Sparks of Artificial General Intelligence: Early experiments with GPT-4 , https://arxiv.org/abs/2303.12712.” Accessed March 30, 2024.

Wired. “ What’s AGI, and Why Are AI Experts Skeptical? , https://www.wired.com/story/what-is-artificial-general-intelligence-agi-explained/.” Accessed March 30, 2024.

GovTech. “ Understanding the Four Types of Artificial Intelligence , https://www.govtech.com/computing/understanding-the-four-types-of-artificial-intelligence.html.” Accessed March 30, 2024.

Artificial Intelligence Essay for Students and Children

500+ Words Essay on Artificial Intelligence

Artificial Intelligence refers to the intelligence of machines. This is in contrast to the natural intelligence of humans and animals. With Artificial Intelligence, machines perform functions such as learning, planning, reasoning and problem-solving. Most noteworthy, Artificial Intelligence is the simulation of human intelligence by machines. It is probably the fastest-growing development in the world of technology and innovation. Furthermore, many experts believe AI could solve major challenges and crisis situations.

Types of Artificial Intelligence

First of all, Artificial Intelligence is categorized into four types. Arend Hintze came up with this categorization. The categories are as follows:

Type 1: Reactive machines – These machines can react to situations. A famous example can be Deep Blue, the IBM chess program. Most noteworthy, the chess program won against Garry Kasparov, the popular chess legend. Furthermore, such machines lack memory. These machines certainly cannot use past experiences to inform future ones. Such a machine analyses all possible alternatives and chooses the best one.

Type 2: Limited memory – These AI systems are capable of using past experiences to inform future ones. A good example can be self-driving cars. Such cars have decision-making systems. The car takes actions like changing lanes. Most noteworthy, these actions come from observations. There is no permanent storage of these observations.

Type 3: Theory of mind – This refers to understanding others. Above all, this means understanding that others have their own beliefs, intentions, desires, and opinions. However, this type of AI does not exist yet.

Type 4: Self-awareness – This is the highest and most sophisticated level of Artificial Intelligence. Such systems have a sense of self. Furthermore, they have awareness, consciousness, and emotions. Obviously, such technology does not yet exist. It would certainly be a revolution.

Applications of Artificial Intelligence

First of all, AI has significant use in healthcare. Companies are trying to develop technologies for quick diagnosis. Artificial Intelligence could one day operate on patients efficiently with little human supervision; robot-assisted surgeries are already taking place. Another excellent healthcare technology is IBM Watson.

Artificial Intelligence in business would significantly save time and effort. Robotic automation can be applied to repetitive business tasks. Furthermore, machine learning algorithms help in better serving customers. Chatbots provide immediate responses and service to customers.

AI can greatly increase the rate of work in manufacturing. A huge number of products can be manufactured with AI. Furthermore, the entire production process can take place without human intervention. Hence, a lot of time and effort is saved.

Artificial Intelligence has applications in various other fields. These fields can be military, law, video games, government, finance, automotive, audit, art, etc. Hence, it’s clear that AI has a massive amount of different applications.

To sum it up, Artificial Intelligence looks all set to be the future of the world. Experts believe AI would certainly become a part and parcel of human life soon. AI would completely change the way we view our world. With Artificial Intelligence, the future seems intriguing and exciting.

The History of Artificial Intelligence

by Rockwell Anyoha

Can Machines Think?

In the first half of the 20th century, science fiction familiarized the world with the concept of artificially intelligent robots. It began with the “heartless” Tin Man from The Wizard of Oz and continued with the humanoid robot that impersonated Maria in Metropolis. By the 1950s, we had a generation of scientists, mathematicians, and philosophers with the concept of artificial intelligence (or AI) culturally assimilated in their minds. One such person was Alan Turing, a young British polymath who explored the mathematical possibility of artificial intelligence. Turing suggested that humans use available information as well as reason in order to solve problems and make decisions, so why can’t machines do the same thing? This was the logical framework of his 1950 paper, Computing Machinery and Intelligence, in which he discussed how to build intelligent machines and how to test their intelligence.

Making the Pursuit Possible

Unfortunately, talk is cheap. What stopped Turing from getting to work right then and there? First, computers needed to fundamentally change. Before 1949, computers lacked a key prerequisite for intelligence: they couldn’t store commands, only execute them. In other words, computers could be told what to do but couldn’t remember what they did. Second, computing was extremely expensive. In the early 1950s, the cost of leasing a computer ran up to $200,000 a month. Only prestigious universities and big technology companies could afford to dillydally in these uncharted waters. A proof of concept, as well as advocacy from high-profile people, was needed to persuade funding sources that machine intelligence was worth pursuing.

The Conference that Started it All

Five years later, the proof of concept arrived in the form of Allen Newell, Cliff Shaw, and Herbert Simon’s Logic Theorist, a program designed to mimic the problem-solving skills of a human, funded by the Research and Development (RAND) Corporation. It is considered by many to be the first artificial intelligence program and was presented at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), hosted by John McCarthy and Marvin Minsky in 1956. At this historic conference, McCarthy, imagining a great collaborative effort, brought together top researchers from various fields for an open-ended discussion on artificial intelligence, a term he coined at the very event. Sadly, the conference fell short of McCarthy’s expectations; people came and went as they pleased, and there was no agreement on standard methods for the field. Despite this, everyone wholeheartedly aligned with the sentiment that AI was achievable. The significance of this event cannot be overstated, as it catalyzed the next twenty years of AI research.

Roller Coaster of Success and Setbacks

From 1957 to 1974, AI flourished. Computers could store more information and became faster, cheaper, and more accessible. Machine learning algorithms also improved and people got better at knowing which algorithm to apply to their problem. Early demonstrations such as Newell and Simon’s General Problem Solver and Joseph Weizenbaum’s ELIZA showed promise toward the goals of problem solving and the interpretation of spoken language respectively. These successes, as well as the advocacy of leading researchers (namely the attendees of the DSRPAI) convinced government agencies such as the Defense Advanced Research Projects Agency (DARPA) to fund AI research at several institutions. The government was particularly interested in a machine that could transcribe and translate spoken language as well as high throughput data processing. Optimism was high and expectations were even higher. In 1970 Marvin Minsky told Life Magazine, “from three to eight years we will have a machine with the general intelligence of an average human being.” However, while the basic proof of principle was there, there was still a long way to go before the end goals of natural language processing, abstract thinking, and self-recognition could be achieved.

[Figure 2: Timeline of AI milestones]

Breaching the initial fog of AI revealed a mountain of obstacles. The biggest was the lack of computational power to do anything substantial: computers simply couldn’t store enough information or process it fast enough. In order to communicate, for example, one needs to know the meanings of many words and understand them in many combinations. Hans Moravec, a doctoral student of McCarthy at the time, stated that “computers were still millions of times too weak to exhibit intelligence.”  As patience dwindled so did the funding, and research came to a slow roll for ten years.

In the 1980s, AI was reignited by two sources: an expansion of the algorithmic toolkit and a boost of funds. John Hopfield and David Rumelhart popularized “deep learning” techniques which allowed computers to learn from experience. On the other hand, Edward Feigenbaum introduced expert systems, which mimicked the decision-making process of a human expert. The program would ask an expert in a field how to respond in a given situation, and once this was learned for virtually every situation, non-experts could receive advice from that program. Expert systems were widely used in industry. The Japanese government heavily funded expert systems and other AI-related endeavors as part of its Fifth Generation Computer Project (FGCP). From 1982 to 1990, it invested $400 million with the goals of revolutionizing computer processing, implementing logic programming, and improving artificial intelligence. Unfortunately, most of the ambitious goals were not met. However, it could be argued that the indirect effects of the FGCP inspired a talented young generation of engineers and scientists. Regardless, funding of the FGCP ceased, and AI fell out of the limelight.
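
To show what “capturing an expert's decision process as rules” means in practice, here is a tiny, invented sketch of an expert system for troubleshooting a machine. Real systems of the era (MYCIN-style medical systems, for example) used hundreds or thousands of rules plus an inference engine; this only shows the flavor.

```python
# Domain knowledge is captured as explicit if-then rules elicited from an
# expert; non-experts then query the rule base. The rules below are invented.
RULES = [
    ({"no_power", "plugged_in"}, "Check the fuse or circuit breaker."),
    ({"no_power"},               "Plug the machine in."),
    ({"overheating"},            "Clean the ventilation grille and let it cool."),
]

def advise(observed_facts: set[str]) -> str:
    """Return the advice attached to the first rule whose conditions are all met."""
    for conditions, advice in RULES:
        if conditions <= observed_facts:          # all conditions present
            return advice
    return "No rule applies; consult a human expert."

print(advise({"no_power", "plugged_in"}))          # -> "Check the fuse or circuit breaker."
print(advise({"overheating"}))                     # -> "Clean the ventilation grille and let it cool."
```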

Ironically, in the absence of government funding and public hype, AI thrived. During the 1990s and 2000s, many of the landmark goals of artificial intelligence were achieved. In 1997, reigning world chess champion and grandmaster Garry Kasparov was defeated by IBM’s Deep Blue, a chess-playing computer program. This highly publicized match was the first time a reigning world chess champion lost to a computer and served as a huge step towards an artificially intelligent decision-making program. In the same year, speech recognition software, developed by Dragon Systems, was implemented on Windows. This was another great step forward, this time in the direction of spoken language interpretation. It seemed that there wasn’t a problem machines couldn’t handle. Even human emotion was fair game, as evidenced by Kismet, a robot developed by Cynthia Breazeal that could recognize and display emotions.

Time Heals all Wounds

We haven’t gotten any smarter about how we are coding artificial intelligence, so what changed? It turns out that the fundamental limit of computer storage that was holding us back 30 years ago is no longer a problem. Moore’s Law, which observes that the memory and speed of computers roughly double every two years, had finally caught up with, and in many cases surpassed, our needs. This is precisely how Deep Blue was able to defeat Garry Kasparov in 1997, and how Google’s AlphaGo was able to defeat Chinese Go champion Ke Jie in 2017. It offers a bit of an explanation for the roller coaster of AI research: we saturate the capabilities of AI at the level of our current computational power (computer storage and processing speed), and then wait for Moore’s Law to catch up again.
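
To see why this compounding matters, a quick back-of-the-envelope calculation helps. The doubling period below is an assumption (the classic formulation is roughly every two years); the point is just how quickly repeated doubling overwhelms a fixed barrier.

```python
# Repeated doubling: after n doubling periods, capacity grows by 2**n.
def growth_factor(years: float, doubling_period_years: float = 2.0) -> float:
    return 2 ** (years / doubling_period_years)

print(f"{growth_factor(10):,.0f}x after 10 years")   # 32x
print(f"{growth_factor(30):,.0f}x after 30 years")   # 32,768x
```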

Artificial Intelligence is Everywhere

We now live in the age of “big data,” an age in which we have the capacity to collect huge sums of information too cumbersome for a person to process. The application of artificial intelligence in this regard has already been quite fruitful in several industries such as technology, banking, marketing, and entertainment. We’ve seen that even if algorithms don’t improve much, big data and massive computing simply allow artificial intelligence to learn through brute force. There may be evidence that Moore’s Law is slowing down a tad, but the increase in data certainly hasn’t lost any momentum.

So what is in store for the future? In the immediate future, AI language processing looks like the next big thing. In fact, it's already underway. I can't remember the last time I called a company and directly spoke with a human. These days, machines are even calling me! One could imagine interacting with an expert system in a fluid conversation, or having a conversation in two different languages translated in real time. We can also expect to see driverless cars on the road in the next twenty years (and that is conservative). In the long term, the goal is general intelligence: a machine that surpasses human cognitive abilities in all tasks. This is along the lines of the sentient robot we are used to seeing in movies. To me, it seems inconceivable that this will be accomplished in the next 50 years. Even if the capability were there, the ethical questions would serve as a strong barrier against fruition. When that time comes (or better, even before it comes), we will need to have a serious conversation about machine policy and ethics (ironically, both fundamentally human subjects), but for now, we'll allow AI to steadily improve and run amok in society.

Rockwell Anyoha is a graduate student in the Department of Molecular Biology with a background in physics and genetics. His current project uses machine learning to model animal behavior. In his free time, Rockwell enjoys playing soccer and debating mundane topics.

This article is part of a Special Edition on Artificial Intelligence.

For more information:

Brief Timeline of AI

https://www.livescience.com/47544-history-of-a-i-artificial-intelligence-infographic.html

Complete Historical Overview

http://courses.cs.washington.edu/courses/csep590/06au/projects/history-ai.pdf

Dartmouth Summer Research Project on Artificial Intelligence

https://www.aaai.org/ojs/index.php/aimagazine/article/view/1904/1802

Future of AI

https://www.technologyreview.com/s/602830/the-future-of-artificial-intelligence-and-cybernetics/

Discussion on Future Ethical Challenges Facing AI

http://www.bbc.com/future/story/20170307-the-ethical-challenge-facing-artificial-intelligence

Detailed Review of Ethics of AI

https://intelligence.org/files/EthicsofAI.pdf



Artificial Intelligence and the Future of Humans

Experts say the rise of artificial intelligence will make most people better off over the next decade, but many have concerns about how advances in AI will affect what it means to be human, to be productive and to exercise free will.

Table of Contents

  • 1. Concerns about human agency, evolution and survival
  • 2. Solutions to address AI’s anticipated negative impacts
  • 3. Improvements ahead: How humans and AI might evolve together in the next decade
  • About this canvassing of experts
  • Acknowledgments


Digital life is augmenting human capacities and disrupting eons-old human activities. Code-driven systems have spread to more than half of the world’s inhabitants in ambient information and connectivity, offering previously unimagined opportunities and unprecedented threats. As emerging algorithm-driven artificial intelligence (AI) continues to spread, will people be better off than they are today?

Some 979 technology pioneers, innovators, developers, business and policy leaders, researchers and activists answered this question in a canvassing of experts conducted in the summer of 2018.

The experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities. They spoke of the wide-ranging possibilities: that computers might match or even exceed human intelligence and capabilities on tasks such as complex decision-making, reasoning and learning, sophisticated analytics and pattern recognition, visual acuity, speech recognition and language translation. They said "smart" systems in communities, in vehicles, in buildings and utilities, on farms and in business processes will save time, money and lives and offer opportunities for individuals to enjoy a more-customized future.

Many focused their optimistic remarks on health care and the many possible applications of AI in diagnosing and treating patients or helping senior citizens live fuller and healthier lives. They were also enthusiastic about AI’s role in contributing to broad public-health programs built around massive amounts of data that may be captured in the coming years about everything from personal genomes to nutrition. Additionally, a number of these experts predicted that AI would abet long-anticipated changes in formal and informal education systems.

Yet, most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. All respondents in this non-scientific canvassing were asked to elaborate on why they felt AI would leave people better off or not. Many shared deep worries, and many also suggested pathways toward solutions. The main themes they sounded about threats and remedies are outlined in the accompanying table.


Specifically, participants were asked to consider the following:

“Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties.

Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today?”

Overall, and despite the downsides they fear, 63% of respondents in this canvassing said they are hopeful that most individuals will be mostly better off in 2030, and 37% said people will not be better off.

A number of the thought leaders who participated in this canvassing said humans’ expanding reliance on technological systems will only go well if close attention is paid to how these tools, platforms and networks are engineered, distributed and updated. Some of the powerful, overarching answers included those from:

Sonia Katyal , co-director of the Berkeley Center for Law and Technology and a member of the inaugural U.S. Commerce Department Digital Economy Board of Advisors, predicted, “In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights in the future. Questions about privacy, speech, the right of assembly and technological construction of personhood will all re-emerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all. Who will benefit and who will be disadvantaged in this new world depends on how broadly we analyze these questions today, for the future.”

"We need to work aggressively to make sure technology matches our values." (Erik Brynjolfsson)


Bryan Johnson , founder and CEO of Kernel, a leading developer of advanced neural interfaces, and OS Fund, a venture capital firm, said, “I strongly believe the answer depends on whether we can shift our economic systems toward prioritizing radical human improvement and staunching the trend toward human irrelevance in the face of AI. I don’t mean just jobs; I mean true, existential irrelevance, which is the end result of not prioritizing human well-being and cognition.”

Andrew McLaughlin , executive director of the Center for Innovative Thinking at Yale University, previously deputy chief technology officer of the United States for President Barack Obama and global public policy lead for Google, wrote, “2030 is not far in the future. My sense is that innovations like the internet and networked AI have massive short-term benefits, along with long-term negatives that can take decades to be recognizable. AI will drive a vast range of efficiency optimizations but also enable hidden discrimination and arbitrary penalization of individuals in areas like insurance, job seeking and performance assessment.”

Michael M. Roberts , first president and CEO of the Internet Corporation for Assigned Names and Numbers (ICANN) and Internet Hall of Fame member, wrote, “The range of opportunities for intelligent agents to augment human intelligence is still virtually unlimited. The major issue is that the more convenient an agent is, the more it needs to know about you – preferences, timing, capacities, etc. – which creates a tradeoff of more help requires more intrusion. This is not a black-and-white issue – the shades of gray and associated remedies will be argued endlessly. The record to date is that convenience overwhelms privacy. I suspect that will continue.”

danah boyd , a principal researcher for Microsoft and founder and president of the Data & Society Research Institute, said, “AI is a tool that will be used by humans for all sorts of purposes, including in the pursuit of power. There will be abuses of power that involve AI, just as there will be advances in science and humanitarian efforts that also involve AI. Unfortunately, there are certain trend lines that are likely to create massive instability. Take, for example, climate change and climate migration. This will further destabilize Europe and the U.S., and I expect that, in panic, we will see AI be used in harmful ways in light of other geopolitical crises.”

Amy Webb , founder of the Future Today Institute and professor of strategic foresight at New York University, commented, “The social safety net structures currently in place in the U.S. and in many other countries around the world weren’t designed for our transition to AI. The transition through AI will last the next 50 years or more. As we move farther into this third era of computing, and as every single industry becomes more deeply entrenched with AI systems, we will need new hybrid-skilled knowledge workers who can operate in jobs that have never needed to exist before. We’ll need farmers who know how to work with big data sets. Oncologists trained as robotocists. Biologists trained as electrical engineers. We won’t need to prepare our workforce just once, with a few changes to the curriculum. As AI matures, we will need a responsive workforce, capable of adapting to new processes, systems and tools every few years. The need for these fields will arise faster than our labor departments, schools and universities are acknowledging. It’s easy to look back on history through the lens of present – and to overlook the social unrest caused by widespread technological unemployment. We need to address a difficult truth that few are willing to utter aloud: AI will eventually cause a large number of people to be permanently out of work. Just as generations before witnessed sweeping changes during and in the aftermath of the Industrial Revolution, the rapid pace of technology will likely mean that Baby Boomers and the oldest members of Gen X – especially those whose jobs can be replicated by robots – won’t be able to retrain for other kinds of work without a significant investment of time and effort.”

Barry Chudakov , founder and principal of Sertain Research, commented, “By 2030 the human-machine/AI collaboration will be a necessary tool to manage and counter the effects of multiple simultaneous accelerations: broad technology advancement, globalization, climate change and attendant global migrations. In the past, human societies managed change through gut and intuition, but as Eric Teller, CEO of Google X, has said, ‘Our societal structures are failing to keep pace with the rate of change.’ To keep pace with that change and to manage a growing list of ‘wicked problems’ by 2030, AI – or using Joi Ito’s phrase, extended intelligence – will value and revalue virtually every area of human behavior and interaction. AI and advancing technologies will change our response framework and time frames (which in turn, changes our sense of time). Where once social interaction happened in places – work, school, church, family environments – social interactions will increasingly happen in continuous, simultaneous time. If we are fortunate, we will follow the 23 Asilomar AI Principles outlined by the Future of Life Institute and will work toward ‘not undirected intelligence but beneficial intelligence.’ Akin to nuclear deterrence stemming from mutually assured destruction, AI and related technology systems constitute a force for a moral renaissance. We must embrace that moral renaissance, or we will face moral conundrums that could bring about human demise. … My greatest hope for human-machine/AI collaboration constitutes a moral and ethical renaissance – we adopt a moonshot mentality and lock arms to prepare for the accelerations coming at us. My greatest fear is that we adopt the logic of our emerging technologies – instant response, isolation behind screens, endless comparison of self-worth, fake self-presentation – without thinking or responding smartly.”

John C. Havens , executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and the Council on Extended Intelligence, wrote, “Now, in 2018, a majority of people around the world can’t access their data, so any ‘human-AI augmentation’ discussions ignore the critical context of who actually controls people’s information and identity. Soon it will be extremely difficult to identify any autonomous or intelligent systems whose algorithms don’t interact with human data in one form or another.”


Batya Friedman , a human-computer interaction professor at the University of Washington’s Information School, wrote, “Our scientific and technological capacities have and will continue to far surpass our moral ones – that is our ability to use wisely and humanely the knowledge and tools that we develop. … Automated warfare – when autonomous weapons kill human beings without human engagement – can lead to a lack of responsibility for taking the enemy’s life or even knowledge that an enemy’s life has been taken. At stake is nothing less than what sort of society we want to live in and how we experience our humanity.”

Greg Shannon , chief scientist for the CERT Division at Carnegie Mellon University, said, “Better/worse will appear 4:1 with the long-term ratio 2:1. AI will do well for repetitive work where ‘close’ will be good enough and humans dislike the work. … Life will definitely be better as AI extends lifetimes, from health apps that intelligently ‘nudge’ us to health, to warnings about impending heart/stroke events, to automated health care for the underserved (remote) and those who need extended care (elder care). As to liberty, there are clear risks. AI affects agency by creating entities with meaningful intellectual capabilities for monitoring, enforcing and even punishing individuals. Those who know how to use it will have immense potential power over those who don’t/can’t. Future happiness is really unclear. Some will cede their agency to AI in games, work and community, much like the opioid crisis steals agency today. On the other hand, many will be freed from mundane, unengaging tasks/jobs. If elements of community happiness are part of AI objective functions, then AI could catalyze an explosion of happiness.”

Kostas Alexandridis , author of “Exploring Complex Dynamics in Multi-agent-based Intelligent Systems,” predicted, “Many of our day-to-day decisions will be automated with minimal intervention by the end-user. Autonomy and/or independence will be sacrificed and replaced by convenience. Newer generations of citizens will become more and more dependent on networked AI structures and processes. There are challenges that need to be addressed in terms of critical thinking and heterogeneity. Networked interdependence will, more likely than not, increase our vulnerability to cyberattacks. There is also a real likelihood that there will exist sharper divisions between digital ‘haves’ and ‘have-nots,’ as well as among technologically dependent digital infrastructures. Finally, there is the question of the new ‘commanding heights’ of the digital network infrastructure’s ownership and control.”

Oscar Gandy , emeritus professor of communication at the University of Pennsylvania, responded, “We already face an ungranted assumption when we are asked to imagine human-machine ‘collaboration.’ Interaction is a bit different, but still tainted by the grant of a form of identity – maybe even personhood – to machines that we will use to make our way through all sorts of opportunities and challenges. The problems we will face in the future are quite similar to the problems we currently face when we rely upon ‘others’ (including technological systems, devices and networks) to acquire things we value and avoid those other things (that we might, or might not be aware of).”

James Scofield O’Rourke , a professor of management at the University of Notre Dame, said, “Technology has, throughout recorded history, been a largely neutral concept. The question of its value has always been dependent on its application. For what purpose will AI and other technological advances be used? Everything from gunpowder to internal combustion engines to nuclear fission has been applied in both helpful and destructive ways. Assuming we can contain or control AI (and not the other way around), the answer to whether we’ll be better off depends entirely on us (or our progeny). ‘The fault, dear Brutus, is not in our stars, but in ourselves, that we are underlings.’”

Simon Biggs , a professor of interdisciplinary arts at the University of Edinburgh, said, "AI will function to augment human capabilities. The problem is not with AI but with humans. As a species we are aggressive, competitive and lazy. We are also empathic, community minded and (sometimes) self-sacrificing. We have many other attributes. These will all be amplified. Given historical precedent, one would have to assume it will be our worst qualities that are augmented. My expectation is that in 2030 AI will be in routine use to fight wars and kill people, far more effectively than we can currently kill. As societies we will be less affected by this than we currently are, as we will not be doing the fighting and killing ourselves. Our capacity to modify our behaviour, subject to empathy and an associated ethical framework, will be reduced by the disassociation between our agency and the act of killing. We cannot expect our AI systems to be ethical on our behalf – they won't be, as they will be designed to kill efficiently, not thoughtfully. My other primary concern is to do with surveillance and control. The advent of China's Social Credit System (SCS) is an indicator of what is likely to come. We will exist within an SCS as AI constructs hybrid instances of ourselves that may or may not resemble who we are. But our rights and affordances as individuals will be determined by the SCS. This is the Orwellian nightmare realised."

Mark Surman , executive director of the Mozilla Foundation, responded, “AI will continue to concentrate power and wealth in the hands of a few big monopolies based on the U.S. and China. Most people – and parts of the world – will be worse off.”

William Uricchio , media scholar and professor of comparative media studies at MIT, commented, “AI and its related applications face three problems: development at the speed of Moore’s Law, development in the hands of a technological and economic elite, and development without benefit of an informed or engaged public. The public is reduced to a collective of consumers awaiting the next technology. Whose notion of ‘progress’ will prevail? We have ample evidence of AI being used to drive profits, regardless of implications for long-held values; to enhance governmental control and even score citizens’ ‘social credit’ without input from citizens themselves. Like technologies before it, AI is agnostic. Its deployment rests in the hands of society. But absent an AI-literate public, the decision of how best to deploy AI will fall to special interests. Will this mean equitable deployment, the amelioration of social injustice and AI in the public service? Because the answer to this question is social rather than technological, I’m pessimistic. The fix? We need to develop an AI-literate public, which means focused attention in the educational sector and in public-facing media. We need to assure diversity in the development of AI technologies. And until the public, its elected representatives and their legal and regulatory regimes can get up to speed with these fast-moving developments we need to exercise caution and oversight in AI’s development.”

The remainder of this report is divided into three sections that draw from hundreds of additional respondents’ hopeful and critical observations: 1) concerns about human-AI evolution, 2) suggested solutions to address AI’s impact, and 3) expectations of what life will be like in 2030, including respondents’ positive outlooks on the quality of life and the future of work, health care and education. Some responses are lightly edited for style.




Development’s of Artificial Intelligence, Essay Example

Pages: 4

Words: 1062


Introduction

From humankind's first imaginative thoughts to the present day, the development of Artificial Intelligence (AI) has been progressive and innovative, introducing new heights of human intelligence in computer technology. The development of artificial intelligence is one of the most controversial issues in the computing technology industry. The technology industry has spent many years examining AI in the fields of computer science, mathematics, and engineering. Most will agree that AI is the method of creating systems that can show characteristics of intelligent behavior. AI research is conducted by a range of scientists and technologists with varying perspectives, interests, and motivations. Scientists tend to be interested in understanding the underlying basis of intelligence and cognition, some with an emphasis on unraveling the mysteries of human thought and others examining intelligence more broadly (Computer Science and Telecommunication Board 198).

Computer Technology

The computer technology industry has made many technological advances in AI while contributing to the modern world. The very idea is that a clone or non-human entity could evolve to a level of intelligence higher than that of a human being. In the real world of computer technology, intelligence is traditionally thought of as a personal level of knowledge or genius: the ability to gather, retain, and understand extremely large and complex concepts. However, these advances have some people concerned about building artificial intelligence that is smarter than a human being yet lacks a conscience to handle that programmed intelligence. In our society, AI is a technology that can be seen making significant contributions to our daily lives. Artificial intelligence already uses algorithms to process banking transactions. These contributions are a large part of the computer revolution that is improving the way we learn.

New Developments

There are many new developments in the field of AI that have support from the Defense Advanced Research Projects Agency (DARPA, known during certain periods as ARPA) and other units of the Department of Defense (DOD). Other funding agencies have included the National Institutes of Health, the National Science Foundation, and the National Aeronautics and Space Administration (NASA) (Computer Science and Telecommunication Board 199). Dartmouth College, along with IBM, which jump-started AI development in the 1950s, continues to produce new advances in AI. Another advancement is LISP, a programming language important to AI that represents knowledge computationally and supports logical reasoning, problem solving, and formula manipulation. The Stanford Research Institute has been developing improvements in AI for over 60 years. The next decade will bring new AI technology such as snake-like robots, robotic surgery, underwater robots, AI that learns the way children learn, and robots that fix power outages (Science Daily 1).

The ethics of artificial intelligence are a concern because artificial intelligence does not have a conscience. A robot can be programmed to release an atomic bomb without any thought for humanity or for how many people would be killed. Roboethics is about the behavior of the humans who design, program, use, and develop artificially intelligent beings (Guerin 3); it is concerned with a robot being programmed to kill, maim, or immorally dispose of millions.

The second type of AI ethics is known as machine ethics, which is based on the moral behavior of artificial moral agents. The movie "I, Robot," starring Will Smith, addressed machine ethics motivated by next-generation robotics: the scientist in the movie sought to make artificially intelligent robots with feelings, equipped with ethical standards. Motivated by planned next-generation robotic systems, machine ethics typically explores solutions for agents with autonomous capacities intermediate between those of current artificial agents and humans, with designs developed incrementally by and embedded in a society of human agents (Shulman, Jonsson and Tarleton 96).

The future of AI presents new technology that will change society. AI has not yet reached its maximum potential. However, society will continue to change, just like the evolution of the computer. Society already has iPhones that can talk, track, and search while doing anything a computer can do. In addition, every industry has begun to take advantage of AI technology. As Big Data evolves, so will machine-learning systems that can process it and apply it toward particular outcomes. We are witnessing the beginning of a revolution that will see a fundamental change in the way businesses run and people work (RocketFuel 1).

In our society, there are boundaries we as human beings cannot reach. A human being cannot travel a billion miles away without losing their life, but artificial intelligence could reach that distant planet and send critical information back to Earth. The thought of flying cars, futuristic self-sufficient homes, and traveling billions of miles away provides opportunities for the world. The technology must be controlled legally to ensure that AI does not fall into the wrong hands. AI should be used for humanitarian improvements for all people, not for destruction. The legal ramifications of AI in the hands of the commercial market are concerning: the AI can become the intellectual property of the business that purchases the rights to it. In addition, releasing AI technology to the public for sale is dangerous because any country can purchase advanced AI technology. There must be boundaries set before we unleash the full potential of AI. There are disadvantages to AI, such as the possibility of engineers building or programming a machine to outthink human beings, which may lead to catastrophic results if we rely on a machine to make decisions. There is also the danger of a machine mastering a task so important that it could launch a nuclear bomb because it was programmed to defend the United States; it is possible that a machine could falsely believe a commercial airplane is attacking the United States, thus ending the world with a nuclear bomb. The AI contributions will bring futuristic technology changes that will affect our communities, environments, and cultures; however, AI must not be left unchecked.

Works Cited

Computer Science and Telecommunication Board. Funding a Revolution: Government Support for Computing Research. Washington, DC: The National Academies Press, 1999. Print.

Guerin, F. (2014). On roboethics and the robotic human. Retrieved October 8, 2014, from http://www.truth-out.org/news/item/25281-on-roboethics-and-the-robotic-human

RocketFuel. (2014). Artificial intelligence is changing the world and humankind must adapt. Retrieved October 4, 2014, from http://rocketfuel.com/blog/artificial-intelligence-is-changing-the-world-and-humankind-must-adapt

Shulman, C., Jonsson, H., and Tarleton, N. (2009). Machine ethics and superintelligence. Retrieved October 8, 2014, from http://ia-cap.org/ap-cap09/proceedings.pdf


Artificial Intelligence and Its Impact on Education Essay

Contents: Introduction; AI's Impact on Education; The Impact of AI on Teachers; The Impact of AI on Students; Reference List.

Rooted in computer science, Artificial Intelligence (AI) is defined by the development of digital systems that can perform tasks that normally depend on human intelligence (Rexford, 2018). Interest in the adoption of AI in the education sector started in the 1980s, when researchers were exploring the possibilities of adopting robotic technologies in learning (Mikropoulos, 2018). Their mission was to help learners to study conveniently and efficiently. Today, some of the applications and impacts of AI in the education sector are concentrated in the fields of online learning, task automation, and personalized learning (Chen, Chen and Lin, 2020). The COVID-19 pandemic is a recent news event that has drawn attention to AI and its role in facilitating online learning, among other virtual educational programs. This paper seeks to find out the possible impact of artificial intelligence on the education sector from the perspectives of teachers and learners.

Technology has transformed the education sector in unique ways and AI is no exception. As highlighted above, AI is a relatively new area of technological development, which has attracted global interest in academic and teaching circles. Increased awareness of the benefits of AI in the education sector and the integration of high-performance computing systems in administrative work have accelerated the pace of transformation in the field (Fengchun et al. , 2021). This change has affected different facets of learning to the extent that government agencies and companies are looking to replicate the same success in their respective fields (IBM, 2020). However, while the advantages of AI are widely reported in the corporate scene, few people understand its impact on the interactions between students and teachers. This research gap can be filled by understanding the impact of AI on the education sector, as a holistic ecosystem of learning.

As these gaps in education are minimized, AI is contributing to the growth of the education sector. Particularly, it has increased the number of online learning platforms using big data intelligence systems (Chen, Chen and Lin, 2020). This outcome has been achieved by exploiting opportunities in big data analysis to enhance educational outcomes (IBM, 2020). Overall, the positive contributions that AI has made to the education sector mean that it has expanded opportunities for growth and development in learning (Rexford, 2018). Therefore, teachers are likely to benefit from the increased opportunities for learning and growth that would emerge from the adoption of AI in the education system.

The impact of AI on teachers can be estimated by examining its effects on the learning environment. Some of the positive outcomes that teachers have associated with AI adoption include increased work efficiency, expanded opportunities for career growth, and an improved rate of innovation adoption (Chen, Chen and Lin, 2020). These benefits are achievable because AI makes it possible to automate learning activities. This process gives teachers the freedom to complete supplementary tasks that support their core activities. At the same time, the freedom they enjoy may be used to enhance creativity and innovation in their teaching practice. Despite the positive outcomes of AI adoption in learning, it undermines the relevance of teachers as educators (Fengchun et al., 2021). This concern is shared among educators because the increased reliance on robotics and automation through AI adoption has created conditions for learning to occur without human input. Therefore, there is a risk that teacher participation may be replaced by machine input.

Performance evaluation emerges as a critical area where teachers can benefit from AI adoption. This outcome is feasible because AI empowers teachers to monitor the behaviors of their learners and the differences in their scores over a specific time (Mikropoulos, 2018). This comparative analysis is achievable using advanced data management techniques in AI-backed performance appraisal systems (Fengchun et al., 2021). Researchers have used these systems to enhance adaptive group formation programs, where groups of students are formed based on a balance of the strengths and weaknesses of the members (Live Tiles, 2021). The information collected using AI-backed data analysis techniques can be recalibrated to capture different types of data. For example, teachers have used AI to understand students' learning patterns and the correlation between these patterns and individual understanding of learning concepts (Rexford, 2018). Furthermore, advanced biometric techniques in AI have made it possible for teachers to assess their students' learning attentiveness.
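As an illustration of the adaptive group formation idea mentioned above, the sketch below implements one common balancing heuristic, a "snake draft" over ranked scores. The student names, scores, and group count are hypothetical and not drawn from the cited studies; real AI-backed systems would typically balance several attributes at once.

```python
# Minimal sketch of adaptive group formation via a snake draft:
# rank students by score, then deal them into groups in alternating order
# so each group mixes stronger and weaker performers.
# Names and scores below are hypothetical.

def snake_draft_groups(scores: dict[str, float], n_groups: int) -> list[list[str]]:
    ranked = sorted(scores, key=scores.get, reverse=True)
    groups: list[list[str]] = [[] for _ in range(n_groups)]
    for i, student in enumerate(ranked):
        round_idx, pos = divmod(i, n_groups)
        g = pos if round_idx % 2 == 0 else n_groups - 1 - pos  # reverse direction on odd rounds
        groups[g].append(student)
    return groups

if __name__ == "__main__":
    scores = {"Ana": 92, "Ben": 85, "Cal": 78, "Dee": 74, "Eli": 66, "Fay": 58}
    print(snake_draft_groups(scores, n_groups=2))
    # [['Ana', 'Dee', 'Eli'], ['Ben', 'Cal', 'Fay']]
```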

Overall, the contributions of AI to the teaching practice empower teachers to redesign their learning programs to fill the gaps identified in the performance assessments. Employing the capabilities of AI in their teaching programs has also made it possible to personalize their curriculums to empower students to learn more effectively (Live Tiles, 2021). Nonetheless, the benefits of AI to teachers could be undermined by the possibility of job losses due to the replacement of human labor with machines and robots (Gulson et al. , 2018). These fears are yet to materialize but indications suggest that AI adoption may elevate the importance of machines above those of human beings in learning.

The benefits of AI to teachers can be replicated in student learning because learners are recipients of the teaching strategies adopted by teachers. In this regard, AI has created unique benefits for different groups of learners based on the supportive role it plays in the education sector (Fengchun et al., 2021). For example, it has created conditions necessary for the use of virtual reality in learning. This development has created an opportunity for students to learn at their pace (Live Tiles, 2021). Allowing students to learn at their pace has enhanced their learning experiences because of varied learning speeds. The creation of virtual reality using AI learning has played a significant role in promoting equality in learning by adapting to different learning needs (Live Tiles, 2021). For example, it has helped students to better track their performances at home and identify areas of improvement in the process. In this regard, the adoption of AI in learning has allowed for the customization of learning styles to improve students’ attention and involvement in learning.

AI also benefits students by personalizing education activities to suit different learning styles and competencies. In this analysis, AI holds the promise to develop personalized learning at scale by customizing tools and features of learning in contemporary education systems (du Boulay, 2016). Personalized learning offers several benefits to students, including a reduction in learning time, increased levels of engagement with teachers, improved knowledge retention, and increased motivation to study (Fengchun et al., 2021). The presence of these benefits means that AI enriches students’ learning experiences. Furthermore, AI shares the promise of expanding educational opportunities for people who would have otherwise been unable to access learning opportunities. For example, disabled people are unable to access the same quality of education as ordinary students do. Today, technology has made it possible for these underserved learners to access education services.

Based on the findings highlighted above, AI has made it possible to customize education services to suit the needs of unique groups of learners. By extension, AI has made it possible for teachers to select the most appropriate teaching methods to use for these student groups (du Boulay, 2016). Teachers have reported positive outcomes of using AI to meet the needs of these underserved learners (Fengchun et al., 2021). For example, through online learning, some of them have learned to be more patient and tolerant when interacting with disabled students (Fengchun et al., 2021). AI has also made it possible to integrate the educational and curriculum development plans of disabled and mainstream students, thereby standardizing the education outcomes across the divide. Broadly, these statements indicate that the expansion of opportunities via AI adoption has increased access to education services for underserved groups of learners.

Overall, AI holds the promise to solve most educational challenges that affect the world today. UNESCO (2021) affirms this statement by saying that AI can address most problems in learning through innovation. Therefore, there is hope that the adoption of new technology would accelerate the process of streamlining the education sector. This outcome could be achieved by improving the design of AI learning programs to make them more effective in meeting student and teachers’ needs. This contribution to learning will help to maximize the positive impact and minimize the negative effects of AI on both parties.

The findings of this study demonstrate that the application of AI in education has a largely positive impact on students and teachers. The positive effects are summarized as follows: improved access to education for underserved populations, improved teaching practices and instructional learning, and enhanced enthusiasm for students to stay in school. Despite the existence of these positive views, negative outcomes have also been highlighted in this paper. They include the potential for job losses, an increase in education inequalities, and the high cost of installing AI systems. These concerns are relevant to the adoption of AI in the education sector, but the benefits of integration outweigh them. Therefore, there should be more support given to educational institutions that intend to adopt AI. Overall, this study demonstrates that AI is beneficial to the education sector. It will improve the quality of teaching, help students to understand knowledge quickly, and spread knowledge via the expansion of educational opportunities.

Chen, L., Chen, P. and Lin, Z. (2020) 'Artificial intelligence in education: a review', Institute of Electrical and Electronics Engineers Access, 8(1), pp. 75264-75278.

du Boulay, B. (2016) 'Artificial intelligence as an effective classroom assistant', Institute of Electrical and Electronics Engineers Intelligent Systems, 31(6), pp. 76-81.

Fengchun, M. et al. (2021) AI and education: a guide for policymakers. Paris: UNESCO Publishing.

Gulson, K. et al. (2018) Education, work and Australian society in an AI world. Web.

IBM (2020) Artificial intelligence. Web.

Live Tiles (2021) 15 pros and 6 cons of artificial intelligence in the classroom. Web.

Mikropoulos, T. A. (2018) Research on e-Learning and ICT in education: technological, pedagogical and instructional perspectives. New York, NY: Springer.

Rexford, J. (2018) The role of education in AI (and vice versa). Web.

Seo, K. et al. (2021) 'The impact of artificial intelligence on learner–instructor interaction in online learning', International Journal of Educational Technology in Higher Education, 18(54), pp. 1-12.

UNESCO (2021) Artificial intelligence in education. Web.



Essay on Artificial Intelligence

Artificial Intelligence is intelligence possessed by machines that enables them to perform functions that would normally require human intelligence. With the help of AI, machines are able to learn, solve problems, plan, think, and so on. Artificial Intelligence, in other words, is the simulation of human intelligence by machines. In the field of technology, Artificial Intelligence is evolving rapidly day by day, and it is believed that in the near future artificial intelligence will change human life very drastically and will most probably end all the crises of the world by sorting out the major problems.

Our life in this modern age depends largely on computers. It is almost impossible to think about life without computers. We need computers in everything that we use in our daily lives. So it becomes very important to make computers intelligent so that our lives become easy. Artificial Intelligence is the theory and development of computers that imitate human intelligence and senses, such as visual perception, speech recognition, decision-making, and translation between languages. Artificial Intelligence has brought a revolution in the world of technology.

Artificial Intelligence Applications

AI is widely used in the field of healthcare. Companies are attempting to develop technologies that will allow for rapid diagnosis. Artificial Intelligence would be able to operate on patients without the need for human oversight. Surgical procedures based on technology are already being performed.

Artificial Intelligence would save a lot of our time. The use of robots would decrease human labour. For example, robots used in industries have saved a lot of human effort and time.

In the field of education, AI has the potential to be very effective. It can bring innovative ways of teaching students with the help of which students will be able to learn the concepts better. 

Artificial intelligence is the future of innovative technology as we can use it in many fields. For example, it can be used in the Military sector, Industrial sector, Automobiles, etc. In the coming years, we will be able to see more applications of AI as this technology is evolving day by day. 

Marketing: Artificial Intelligence provides a deep knowledge of consumers and potential clients to the marketers by enabling them to deliver information at the right time. Through AI solutions, the marketers can refine their campaigns and strategies.

Agriculture: AI technology can be used to detect diseases in plants, pests, and poor plant nutrition. With the help of AI, farmers can analyze the weather conditions, temperature, water usage, and condition of the soil.

Banking: Fraudulent activities can be detected through AI solutions. AI bots and digital payment advisers can deliver a high quality of service.

Health Care: Artificial Intelligence can surpass human cognition in the analysis, diagnosis, and interpretation of complicated medical data.

History of Artificial Intelligence

Artificial Intelligence may seem to be a new technology but if we do a bit of research, we will find that it has roots deep in the past. In Greek Mythology, it is said that the concepts of AI were used. 

The model of artificial neurons was first put forward in 1943 by Warren McCulloch and Walter Pitts. Seven years later, in 1950, Alan Turing published a research paper related to AI titled 'Computing Machinery and Intelligence'. The term Artificial Intelligence was first coined in 1956 by John McCarthy, who is known as the father of Artificial Intelligence.

To conclude, we can say that Artificial Intelligence will be the future of the world. As per the experts, we won't be able to separate ourselves from this technology as it would become an integral part of our lives shortly. AI would change the way we live in this world. This technology would prove to be revolutionary because it will change our lives for good. 

Branches of Artificial Intelligence:

Knowledge Engineering

Machine Learning

Natural Language Processing

Types of Artificial Intelligence

Artificial Intelligence is categorized into two types: based on capabilities and based on functionalities.

Artificial Intelligence Type-1 (Based on Capabilities)

Narrow AI (weak AI): This is designed to perform a specific task with intelligence. It is termed weak AI because it cannot perform beyond its limitations; it is trained to do a single specific task. Some examples of narrow AI are facial recognition (such as Face ID on Apple phones), speech recognition, and image recognition. IBM's Watson supercomputer, self-driving cars, chess-playing programs, and equation solvers are also examples of weak AI.

General AI (AGI or strong AI): This system could perform nearly every cognitive task as efficiently as humans can. The main characteristic of general AI is a system that can think like a human on its own. Creating such machines is a long-term goal of many researchers.

Super AI: Super AI is a level of machine intelligence that surpasses human intelligence and can perform any cognitive task better than humans. The main features of super AI would be the ability to think, reason, solve puzzles, make judgments, plan, and communicate on its own. The creation of super AI might be the biggest revolution in human history.

Artificial Intelligence Type-2 (Based on Functionalities)

Reactive Machines: These machines are the most basic type of AI. Such AI systems focus only on current situations and react with the best possible action. They do not store memories for future actions. IBM's Deep Blue system and Google's AlphaGo are examples of reactive machines.

Limited Memory: These machines can store data or past memories for a short period of time. Self-driving cars are an example: they store information to navigate the road, such as the speed and distance of nearby cars.

Theory of Mind: These systems understand emotions, beliefs, and requirements like humans do. Such machines have not yet been invented, and creating one is a long-term goal for researchers.

Self-Awareness: Self-aware AI is the future of artificial intelligence. These machines could outsmart humans. If such machines are invented, they could bring about a revolution in human society.

Artificial Intelligence will bring a huge revolution in the history of mankind. Human civilization will flourish by amplifying human intelligence with artificial intelligence, as long as we manage to keep the technology beneficial.


FAQs on Artificial Intelligence Essay

1. What is Artificial Intelligence?

Artificial Intelligence is a branch of computer science that emphasizes the development of intelligent machines that would think and work like humans.

2. How is Artificial Intelligence Categorised?

Artificial Intelligence is categorized into two types based on capabilities and functionalities. Based on capabilities, AI includes narrow AI (weak AI), general AI, and super AI. Based on functionalities, AI includes reactive machines, limited memory, theory of mind, and self-awareness.

3. How Does AI Help in Marketing?

AI helps marketers to strategize their marketing campaigns and keep data of their prospective clients and consumers.

4. Give an Example of a Reactive Machine?

IBM’s deep blue system and Google’s Alpha go are examples of reactive machines.

5. How can Artificial Intelligence help us?

Artificial Intelligence can help us in many ways, and it is already helping us in some cases. For example, the robots used in a factory all run on the principle of Artificial Intelligence. In the automobile sector, some vehicles have been invented that don't need any human to drive them; they are self-driving. The search engines these days are also AI-powered. There are many other uses of Artificial Intelligence as well.


Open access | Published: 18 January 2024

The impact of artificial intelligence on employment: the role of virtual agglomeration

  • Yang Shen   ORCID: orcid.org/0000-0002-6781-6915 1 &
  • Xiuwu Zhang 1  

Humanities and Social Sciences Communications, volume 11, Article number: 122 (2024)

  • Development studies

Sustainable Development Goal 8 proposes the promotion of full and productive employment for all. Intelligent production factors, such as robots, the Internet of Things, and extensive data analysis, are reshaping the dynamics of labour supply and demand. In China, which is a developing country with a large population and labour force, analysing the impact of artificial intelligence technology on the labour market is of particular importance. Based on panel data from 30 provinces in China from 2006 to 2020, a two-way fixed-effect model and the two-stage least squares method are used to analyse the impact of AI on employment and to assess its heterogeneity. The introduction and installation of artificial intelligence technology, as represented by industrial robots in Chinese enterprises, have increased the number of jobs. Mechanism analyses show that the increase in labour productivity, the deepening of capital, and the refinement of the division of labour brought about by the introduction of robotics into industrial enterprises have successfully mitigated the damaging impact of robot adoption on employment. Contrary to the traditional perception that robotics crowds out labour, the overall impact on the labour market has been a promotional one. The positive effect of artificial intelligence on employment exhibits an inevitable heterogeneity, and it relatively improves the job share of women and of workers in labour-intensive industries. Mechanism research has shown that virtual agglomeration, which evolved from traditional industrial agglomeration in the era of the digital economy, is an important channel for increasing employment. The findings of this study contribute to the understanding of the impact of modern digital technologies on the well-being of people in developing countries. To give full play to the positive role of artificial intelligence technology in employment, we should improve the social security system, accelerate the development of high-end domestic robots and deepen the reform of the education and training system.

Introduction

Ensuring people’s livelihood requires diligence, but diligence is not scarce. Diversification, technological upgrading, and innovation all contribute to achieving the Sustainable Development Goal of full and productive employment for all (SDGs 8). Since the outbreak of the industrial revolution, human society has undergone four rounds of technological revolution, and each technological change can be regarded as the deepening of automation technology. The conflict and subsequent rebalancing of efficiency and employment are constantly being repeated in the process of replacing people with machines (Liu 2018 ; Morgan 2019 ). When people realize the new wave of human economic and social development that is created by advanced technological innovation, they must also accept the “creative destruction” brought by the iterative renewal of new technologies (Michau 2013 ; Josifidis and Supic 2018 ; Forsythe et al. 2022 ). The questions of where technology will eventually lead humanity, to what extent artificial intelligence will change the relationship between humans and work, and whether advanced productivity will lead to large-scale structural unemployment have been hotly debated. China has entered a new stage of deep integration and development of the “new technology cluster” that is represented by the internet and the real economy. Physical space, cyberspace, and biological space have become fully integrated, and new industries, new models, and new forms of business continue to emerge. In the process of the vigorous development of digital technology, its characteristics in terms of employment, such as strong absorption capacity, flexible form, and diversified job demands are more prominent, and many new occupations have emerged. The new practice of digital survival that is represented by the platform economy, sharing economy, full-time economy, and gig economy, while adapting to, leading to, and innovating the transformation and development of the economy, has also led to significant changes in employment carriers, employment forms, and occupational skill requirements (Dunn 2020 ; Wong et al. 2020 ; Li et al. 2022 ).

Artificial intelligence (AI) is one of the core areas of the fourth industrial revolution, along with the transformation of the mechanical technology, electric power technology, and information technology, and it serves to promote the transformation and upgrading of the digital economy industry. Indeed, the rapid iteration and cross-border integration of general information technology in the era of the digital economy has made a significant contribution to the stabilization of employment and the promotion of growth, but this is due only to the “employment effect” caused by the ongoing development of the times and technological progress in the field of social production. Digital technology will inevitably replace some of the tasks that were once performed by human labour. In recent years, due to the influence of China’s labour market and employment structure, some enterprises have needed help in recruiting workers. Driven by the rapid development of artificial intelligence technology, some enterprises have accelerated the pace of “machine replacement,” resulting in repetitive and standardized jobs being performed by robots. Deep learning and AI enable machines and operating systems to perform more complex tasks, and the employment prospects of enterprise employees face new challenges in the digital age. According to the Future of Jobs 2020 report released by the World Economic Forum, the recession caused by the COVID-19 pandemic and the rapid development of automation technology are changing the job market much faster than expected, and automation and the new division of labour between humans and machines will disrupt 85 million jobs in 15 industries worldwide over the next five years. The demand for skilled jobs, such as data entry, accounting, and administrative services, has been hard hit. Thanks to the wave of industrial upgrading and the vigorous development of digitalization, the recruitment demand for AI, big data, and manufacturing industries in China has maintained high growth year-on-year under the premise of macroenvironmental uncertainty during the period ranging from 2019 to 2022, and the average annual growth rate of new jobs was close to 30%. However, this growth has also aggravated the sense of occupational crisis among white-collar workers. The research shows that the agriculture, forestry, animal husbandry, fishery, mining, manufacturing, and construction industries, which are expected to adopt a high level of intelligence, face a high risk of occupational substitution, and older and less educated workers are faced with a very high risk of substitution (Wang et al. 2022 ). Whether AI, big data, and intelligent manufacturing technology, as brand-new forms of digital productivity, will lead to significant changes in the organic composition of capital and effectively decrease labour employment has yet to reach consensus. As the “pearl at the top of the manufacturing crown,” a robot is an essential carrier of intelligent manufacturing and AI technology as materialized in machinery and equipment, and it is also an important indicator for measuring a country’s high-end manufacturing industry. Due to the large number of manufacturing employees in China, the challenge of “machine substitution” to the labour market is more severe than that in other countries, and the use of AI through robots is poised to exert a substantial impact on the job market (Xie et al. 2022 ). 
In essence, the primary purpose of the digital transformation of industrial enterprises is to improve quality and efficiency, but the relationship between machines and workers has been distorted in the actual application of digital technology. Taking industrial companies’ use of robots as an entry point, this study delves into the impact of AI on the labour market to provide experience and policy suggestions on the best ways of coordinating the relationship between enterprises’ intelligent transformation and labour participation and to help realize Chinese-style modernization.

As a new general technology, AI technology represents remarkable progress in productivity. Objectively analysing the dual effects of substitution and employment creation in the era of artificial intelligence, so as to actively integrate change and adapt to development, is essential to enhancing comprehensive competitiveness and better qualifying workers for current and future work. This research is organized according to a research framework from the published literature (Luo et al. 2023). In this study, we use data published by the International Federation of Robotics (IFR) and take the installed density of industrial robots in China as the main indicator of AI. Based on panel data from 30 provinces in China covering the period from 2006 to 2020, the impact of AI technology on employment in a developing country with a large population is empirically examined. The issues that need to be solved in this study include the following: The first goal is to examine the impact of AI on China’s labour market from the perspective of the economic behaviour of those enterprises that have adopted the use of industrial robots in production. The realistic question we expect to answer is whether the automated processing of daily tasks has led to unemployment in China during the past fifteen years. The second goal is to answer the question of how AI will continue to affect the employment market by increasing labour productivity, changing the technical composition of capital, and deepening the division of labour. The third goal is to examine how the transformation of industrial organization types in the digital economy era affects employment through digital industrial clusters or virtual clusters. The fourth goal is to test the role of AI in eliminating gender discrimination, especially in regard to whether it can improve the employment opportunities of female employees. Then, whether workers face different employment difficulties in industries with different attributes is considered. The final goal is to provide some policy insights into how a developing country can achieve full employment in the face of a new technological revolution in the context of a large population and many low-skilled workers.

The remainder of the paper is organized as follows. In Section Literature Review, we summarize the literature on the impact of AI on the labour market and employment and classify it from three perspectives: positive, negative, and neutral. Based on this literature review, we then summarize the marginal contribution of this study. In Section Theoretical mechanism and research hypothesis, we provide a theoretical analysis of AI’s promotion of employment and present the research hypotheses to be tested. In Section Study design and data sources, we describe the data sources, variable settings and econometric model. In Section Empirical analysis, we test Hypothesis 1 and conduct a robustness test and the causal identification of the conclusion. In Section Extensibility analysis, we test Hypothesis 2 and Hypothesis 3, as well as testing the heterogeneity of the baseline regression results. The heterogeneity tests on employee gender and industry attributes increase the relevance of the conclusions. Finally, Section Conclusions and policy implications concludes.

Literature review

The social effect of technological progress has the unique characteristics of the times and progresses through various stages, and there is variation in our understanding of its development and internal mechanism. A classic argument of labour sociology and labour economics is that technological upgrading objectively causes workers to lose their jobs, but the actual historical experience since the industrial revolution tells us that it does not cause large-scale structural unemployment (Zhang 2023a). While neoclassical liberals such as Adam Smith claimed that technological progress would not lead to unemployment, other scholars such as Sismondi were adamant that it would. David Ricardo endorsed the “Luddite fear” in his chapter On Machinery, and Marx argued that technological progress can increase labour productivity while also excluding labour participation, thus leaving workers in poverty and turning the worker “into a crippled monstrosity” under modern machinery. Technology is not used to reduce working hours and improve the quality of work; rather, it is used to extend working hours and speed up work (Spencer 2023). According to Schumpeter’s innovation theory, within a unified complex system, the essence of technological innovation arises from the unity of positive and negative feedback and the oneness of opposites such as the “revolutionary” and the “destructive.” Even a tiny technological impact can cause drastic consequences. The impact of AI on employment is different from that of previous industrial revolutions, and it is exceptional in that “machines” are no longer straightforward mechanical tools but have assumed more of a “worker” role, just as people who can learn and think tend to do (Boyd and Holton 2018). AI-related technologies continue to advance, the industrialization and commercialization process continues to accelerate, and industry continues to explore the application of AI across multiple fields. Since AI was first proposed at the Dartmouth Conference in 1956, discussions about “AI replacing human labor” and “AI defeating humans” have endlessly emerged. This dynamic has increased in intensity since the emergence of ChatGPT, which has aroused people’s concerns about technology replacing the workforce. Summarizing the literature, we can find three main arguments concerning the relationship between AI and employment:

First, AI has the effect of creating and filling jobs. The intelligent manufacturing industry paradigm characterized by AI technology will assist in forming a high-quality “human‒machine cooperation” employment mode. In an enlightened society, the social state of shared prosperity benefits the lowest class of people precisely because of the advanced productive forces and higher labour efficiency created through the refinement of the division of labour. By improving production efficiency, reducing the sales price of final products, and stimulating social consumption, technological progress exerts both price effects and income effects, which in turn drive related enterprises to expand their production scale, which, in turn, increases the demand for labour (Li et al. 2021 ; Ndubuisi et al. 2021 ; Yang 2022 ; Sharma and Mishra 2023 ; Li et al. 2022 ). People habitually regard robots as competitors for human beings, but this view only represents the materialistic view of traditional machinery. The coexistence of man and machine is not a zero-sum game. When the task evolves from “cooperation for all” to “cooperation between man and machine,” it results in fewer production constraints and maximizes total factor productivity, thus creating more jobs and generating novel collaborative tasks (Balsmeier and Woerter 2019 ; Duan et al. 2023 ). At the same time, materialized AI technology can improve the total factor production efficiency in ways that are suitable for its factor endowment structure and improve the production efficiency between upstream and downstream enterprises in the industrial chain and the value chain. This increase in the efficiency of the entire market will subsequently drive the expansion of the production scale of enterprises and promote reproduction, and its synergy will promote the synchronous growth of the labour demand involving various skills, thus resulting in a creative effect (Liu et al. 2022 ). As an essential force in the fourth industrial revolution, AI inevitably affects the social status of humans and changes the structure of the labour force (Chen 2023 ). AI and machines increase labour productivity by automating routine tasks while expanding employee skills and increasing the value of work. As a result, in a machine-for-machine employment model, low-skilled jobs will disappear, while new and currently unrealized job roles will emerge (Polak 2021 ). We can even argue that digital technology, artificial intelligence, and robot encounters are helping to train skilled robots and raise their relative wages (Yoon 2023 ).

Second, AI has both a destructive effect and a substitution effect on employment. As soon as machines emerged as the means of labour, they immediately began to compete with the workers themselves. As a modern new technology, artificial intelligence is essentially human intellectual labour that condenses complex labour. Like the disruptive general-purpose technologies of early industrialization, automation technologies such as AI offer both promise and fear in regard to “machine replacement.” Technological progress leads to an increase in the organic composition of capital and the relative surplus population. The additional capital formed in capital accumulation comes to absorb fewer and fewer workers relative to its quantity. At the same time, old capital, which is periodically reproduced according to the new composition, will begin to increasingly exclude the workers it previously employed, resulting in severe “technological unemployment.” The development of productivity creates more free time, especially in industries such as health care, transportation, and production environment control, which have seen significant benefits from AI. In recent years, however, some industrialized countries have faced the dilemma of declining income from labour and the slow growth of total labour productivity while applying AI on a large scale (Autor 2019). Low-skilled and incapacitated workers face a high probability of being replaced by automation (Ramos et al. 2022; Jetha et al. 2023). It is worth noting that with the in-depth development of digital technologies, such as deep learning and big data analysis, some complex, cognitive, and creative jobs that are currently considered irreplaceable in the traditional view will also be replaced by AI, which indicates that automation technology is not only a substitute for low-skilled labour (Zhao and Zhao 2017; Dixon et al. 2021; Novella et al. 2023; Nikitas et al. 2021). Among these technologies, AI and robotics exert a particularly significant impact on the manufacturing job market, and industry-related jobs will face a severe unemployment problem due to the disruptive effect of AI and robotics (Zhou and Chen 2022; Sun and Liu 2023). At this stage, most of the world’s economies are facing the deep integration of the digital wave into their national economies, and any work, including high-level tasks, is being affected by digitalization and AI (Gardberg et al. 2020). The power of AI models is growing exponentially rather than linearly, and the rapid development and rapid diffusion of technology will undoubtedly have a devastating effect on knowledge workers, as did the industrial revolution (Liu and Peng 2023). In particular, the development and improvement of AI-generated content in recent years poses a more significant threat to higher-level workers, such as researchers, data analysts, and product managers, than to physical labourers. White-collar workers are facing unprecedented anxiety and unease (Nam 2019; Fossen and Sorgner 2022; Wang et al. 2023). A classic study suggests that AI could replace 47% of the 702 job types in the United States within 20 years (Frey and Osborne 2017). Since the 2020 epidemic, digitization has accelerated, and online and digital resources have become a must for enterprises. Many occupations are gradually moving away from humans (Wu and Yang 2022; Männasoo et al. 2023). It seems clear that the intelligent robotic arm on the factory assembly line is poised to let assembly-line workers exit the stage of history, just as career guides are being replaced by mobile phone navigation software.

Third, the effect of AI on employment is uncertain, and its impact on human work does not fall into a simple “utopian” or “dystopian” scene, but rather leads to a combination of “utopia” and “dystopia” (Kolade and Owoseni 2022 ). The job-creation effects of robotics and the emergence of new jobs that result from technological change coexist at the enterprise level (Ni and Obashi 2021 ). Adopting a suitable AI operation mode can adjust for the misallocation of resources by the market, enterprises, and individuals to labour-intensive tasks, reverse the nondirectional allocation of robots in the labour sector, and promote their reallocation in the manufacturing and service industries. The size of the impact on employment through the whole society is uncertain (Fabo et al. 2017 ; Huang and Rust 2018 ; Berkers et al. 2020 ; Tschang and Almirall 2021 ; Reljic et al. 2021 ). For example, Oschinski and Wyonch ( 2017 ) claimed that those jobs that are easily replaced by AI technology in Canada account for only 1.7% of the total labour market, and they have yet to find evidence that automation technology will cause mass unemployment in the short term. Wang et al. ( 2022 ) posited that the impact of industrial robots on labour demand in the short term is mainly negative, but in the long run, its impact on employment is mainly that of job creation. Kirov and Malamin ( 2022 ) claimed that the pessimism underlying the idea that AI will destroy the jobs and quality of language workers on a large scale is unjustified. Although some jobs will be eliminated as such technology evolves, many more will be created in the long run.

In the view that modern information technology and digital technology increase employment, the literature holds that foreign direct investment (Fokam et al. 2023), economic systems (Bouattour et al. 2023), labour skills and structure (Yang 2022), industrial technological intensity (Graf and Mohamed 2024), and the easing of information friction (Jin et al. 2023) are important mechanisms. The research on whether AI technology crowds out jobs is voluminous, but the conclusions are inconsistent (Filippi et al. 2023). This paper focuses on the influence of AI on the employment scale of the manufacturing industry, examines the job creation effect of technological progress from the perspectives of capital deepening, labour refinement, and labour productivity, and systematically examines the heterogeneous impact of the adoption of industrial robots on employment demand, structure, and different industries. The marginal contributions of this paper are as follows: first, the installation density of industrial robots is used as an indicator to measure AI, and the question of whether AI has had negative effects on employment in the manufacturing sector from the perspective of machine replacement is examined. The second contribution is the analysis of the heterogeneity of AI’s employment creation effect from the perspective of gender and industry attributes and the claim that women and the employees of labour-intensive enterprises are more able to obtain additional work benefits in the digital era. Most importantly, in contrast to the literature, this paper innovatively introduces virtual agglomeration into the path mechanism of the effect of robots on employment and holds that information technologies such as the internet, big data, and the industrial Internet of Things, which rely upon AI, have reshaped the management mode and organizational structure of enterprises. Online and offline integration work together, and information, knowledge, and technology are interconnected. The job matching mode of the past, in which one person held one post matched to a specific individual, has changed into a multifaceted set of tasks involving one person, many posts, and many types of people. The internet platform spawned by digital technology frees the employment mode of enterprises from being limited to single enterprises and specific gathering areas. Traditional industrial geographical agglomeration has gradually evolved into virtual agglomeration, which geometrically enlarges the agglomeration effect and mechanism and enhances the spillover effect. In the online world, individual practitioners and entrepreneurs can obtain orders, receive training, and connect resources and employment needs more widely and efficiently, and they can achieve higher-quality self-employment. Virtual agglomeration has become a new path by which AI affects employment. Another contribution to the literature is that this study uses a machine-learning linear regression model in the robustness test, which verifies the employment creation effect of AI from the perspective of the share of positive contribution. In causal identification, this study innovatively uses the industrial feed-in price as an instrumental variable to analyse the causal path by which AI promotes employment.

Theoretical mechanism and research hypothesis

The direct influence of AI on employment

With advances in machine learning, big data, artificial intelligence, and other technologies, a new generation of intelligent robots that can perform routine, repetitive, and regular production tasks requiring human judgement, problem-solving, and analytical skills has emerged. Robotic process automation technology can learn and imitate the way that workers perform repeated new tasks regarding the collecting of data, running of reports, copying of data, checking of data integrity, reading, processing, and the sending of emails, and it can play an essential role in processing large amounts of data (Alan 2023 ). In the context of an informatics- and technology-oriented economy, companies are asking employees to transition into creative jobs. According to the theory of the combined task framework, the most significant advantage of the productivity effect produced by intelligent technology is creation of new demands, that is, the creation of new tasks (Acemoglu and Restrepo 2018 ). These new task packages update the existing tasks and create new task combinations with more complex technical difficulties. Although intelligent technology is widely used in various industries, it may have a substitution effect on workers and lead to technical unemployment. However, with the rise of a new round of technological innovation and revolution, high efficiency leads to the development and growth of a series of emerging industries and exerts job creation effects. Technological progress has the effect of creating new jobs. That is, such progress creates new jobs that are more in line with the needs of social development and thus increases the demand for labour (Borland and Coelli 2017 ). Therefore, the intelligent development of enterprises will come to replace their initial programmed tasks and produce more complex new tasks, and human workers in nonprogrammed positions, such as technology and knowledge, will have more comparative advantages.

Generally, the “new technology-economy” paradigm derived from automation machinery and AI technology is affecting the breadth and depth of employment, which is manifested as follows:

It reduces the demand for coded jobs in enterprises while increasing the demand for nonprogrammed complex labour.

The development of digital technology has deepened and refined the division of labour, accelerated the service trend of the manufacturing industry, increased the employment share of the modern service industry and created many emerging jobs.

Advanced productive forces give workers higher autonomy and increased efficiency in their work, improving their job satisfaction and employment quality. As described in Das Kapital, “Although machines actually crowd out and potentially replace a large number of workers, with the development of machines themselves (which is manifested by the increase in the number of the same kind of factories or the expansion of the scale of existing factories), the number of factory workers may eventually be more than the number of handicraft workers in the workshops or handicrafts that they crowd out… It can be seen that the relative reduction and absolute increase of employed workers go hand in hand” (Li and Zhang 2022 ).

Internet information technology reduces the distance between countries in both time and space, promotes the transnational flow of production factors, and deepens the international division of labour. The emergence of AI technology leads to the decline of a country’s traditional industries and departments. Under the new changes to the division of labour, these industries and departments may develop in late-developing countries and serve to increase their employment through international labour export.

From a long-term perspective, AI will create more jobs through the continuous expansion of the social production scale, the continuous improvement of production efficiency, and the more detailed industrial categories that it engenders. With the accumulation of human capital under the internet era, practitioners are gradually becoming liberated from heavy and dangerous work, and workers’ skills and job adaptability will undergo continuous improvement. The employment creation and compensation effects caused by technological and industrial changes are more significant than the substitution effects (Han et al. 2022 ). Accordingly, the article proposes the following two research hypotheses:

Hypothesis 1 (H1): AI increases employment.

Hypothesis 2 (H2): AI promotes employment by improving labour productivity, deepening capital, and refining the division of labour.

Role of virtual agglomeration

The research on economic geography and “new” economic geography agglomeration theory focuses on industrial agglomeration in the traditional sense. This model is a geographical agglomeration model that depends on spatial proximity from a geographical perspective. Assessing the role of externalities requires a particular geographical scope, as it has both physical and scope limitations. Virtual agglomeration transcends Marshall’s theory of economies of scale, which is not limited to geographical agglomeration from the perspective of natural territory but rather takes on more complex and multidimensional forms (such as virtual clusters, high-tech industrial clusters, and virtual business circles). Under the influence of a new generation of digital technology that is characterized by big data, the Internet of Things, and the industrial internet, the digital, intelligent, and platform transformation trend is prominent in some industries and enterprises, and industrial digitalization and digital industrialization jointly promote industrial upgrading. The innovation of information technology leads to “distance death” (Schultz 1998 ). With the further materialization of digital and networked services of enterprises, the trading mode of digital knowledge and services, such as professional knowledge, information combination, cultural products, and consulting services, has transitioned from offline to digital trade, and the original geographical space gathering mode between enterprises has gradually evolved into a virtual network gathering that places the real-time exchange of data and information as its core (Wang et al. 2018 ). Tan and Xia ( 2022 ) stated that virtual agglomeration geometrically magnifies the social impact of industrial agglomeration mechanisms and agglomeration effects, and enterprises in the same industry and their upstream and downstream affiliated enterprises can realize low-cost long-distance transactions, services, and collaborative production through digital trade, resulting in large-scale zero-distance agglomeration along with neighbourhood-style production, service, circulation, and consumption. First, the knowledge and information underlying the production, design, research and development, organization, and trading of all kinds of enterprises are increasingly being completed by digital technology. The tacit knowledge that used to require face-to-face communication has become codable, transmissible, and reproducible under digital technology. Tacit knowledge has gradually become explicit, and knowledge spillover and technology diffusion have become more pronounced, which further leads to an increase in the demand for unconventional task labour (Zhang and Li 2022 ). Second, the cloud platform causes the labour pool effect of traditional geographical agglomeration to evolve into the labour “conservation land” of virtual agglomeration, and employment is no longer limited to the internal organization or constrained within a particular regional scope. Digital technology allows enterprises to hire “ghost workers” for lower wages to compensate for the possibility of AI’s “last mile.” Information technology and network platforms seek connections with all social nodes, promoting the time and space for work in a way that transcends standardized fixed frameworks. 
At the same time, joining or quitting work tasks has become more convenient for workers, which indirectly increases the temporary and transitional nature of work and fosters a decentralized management and organization model supplemented by cooperation; social networks, industry experts, and skilled labour have also become easier for workers to reach (Wen and Liu 2021). With a mobile phone and a computer, labourers worldwide can create value for enterprises or customers, and the forms of labour are becoming more flexible and diverse. Workers can provide digital real-time services to employers far away from their residence, and they can also obtain flexible employment information and improve their digital skills by leveraging digital resources, resulting in the odd-job economy, crowdsourcing economy, sharing economy, and other economic forms. Finally, the virtual network space can accommodate an almost unlimited number of enterprises simultaneously. In the commercial context of digital trade, any enterprise can obtain any intermediate supply in the online market, and its final product output can instantly become the intermediate input of other enterprises. Therefore, enterprises’ raw material supply and product sales rely on the whole market. At this point, the market scale effect of intermediate inputs can be amplified almost without limit, as it is no longer confined to the limited space of geographical agglomeration (Duan and Zhang 2023). Accordingly, the following research hypothesis is proposed:

Hypothesis 3 (H3): AI promotes employment by improving the virtual agglomeration (VA) of enterprises.

Study design and data sources

Variable setting

Explained variable

Employment scale (ES). Compared with agriculture and the service industry, the industrial sector accommodates more labour, and robot technology is mainly applied in the industrial sector, where it exerts the greatest demand shock on manufacturing jobs. In this paper, we select the number of urban employees in the manufacturing industry as the proxy variable for employment scale.

Core explanatory variable

Artificial intelligence (AI). Emerging technologies endow industrial robots with more complete technical attributes, which increases their ability to act in place of human beings in many work projects, enabling them either to complete production tasks independently or to assist humans in completing such tasks. This represents an important form of AI technology embedded in machinery and equipment. In this paper, the installation density of industrial robots is selected as the proxy variable for AI. Robot data mainly come from the number of robots installed in various industries at the national level, as published by the International Federation of Robotics (IFR). Because the IFR dataset is provided at the national-industry level and its industry classification standards differ significantly from those in China, this paper first follows the practice of Yan et al. (2020), who match the 14 manufacturing categories published by the IFR with the subsectors of China’s manufacturing sector, and then uses the shift-share method to merge and sort out the employment numbers of various industries in various provinces. First, the national subsector data provided by the IFR are matched with the second National Economic Census data. Next, the share of employment in different industries relative to total employment in the province is used to develop weights and decompose the industry-level robot data to the local “provincial-level industry” level. Finally, the application of robots in various industries at the provincial level is summarized. The Bartik shift-share instrumental variable is now widely used to measure robot installation density at the city (province) level (Wu 2023; Yang and Shen 2023; Shen and Yang 2023). The calculation process is as follows:
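Based on the symbol descriptions below and the usual Bartik shift-share construction, Eq. (1) can plausibly be read as follows (a reconstruction on my part; in particular, the industry-level baseline employment in the final denominator is my reading of “the robot installation density of each year and industry level”):

$$Robot_{it} = \sum_{j \in N} \frac{employ_{ij,\,t=2006}}{employ_{i,\,t=2006}} \times \frac{Robot_{jt}}{employ_{j,\,t=2006}} \qquad (1)$$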

In Eq. (1), N is the set of manufacturing industries, $Robot_{it}$ is the robot installation density of province i in year t, $employ_{ij,t=2006}$ is the number of employees in industry j of province i in 2006, $employ_{i,t=2006}$ is the total number of employees in province i in 2006, and $Robot_{jt}/employ_{j,t=2006}$ represents the robot installation density at the industry level in each year.
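For concreteness, a minimal pandas sketch of this shift-share aggregation is given below; the DataFrame layout and column names (employ_ij, robot_stock, and so on) are illustrative assumptions rather than the authors’ actual data structure.

```python
import pandas as pd

def bartik_robot_density(emp_2006: pd.DataFrame, robots: pd.DataFrame) -> pd.DataFrame:
    """Shift-share robot installation density per province and year.

    emp_2006: columns ['province', 'industry', 'employ_ij']  (2006 baseline employment)
    robots:   columns ['industry', 'year', 'robot_stock']    (IFR national industry stocks)
    Column names are illustrative only.
    """
    emp_2006 = emp_2006.copy()
    # Baseline employment shares: employ_ij,2006 / employ_i,2006
    emp_2006["share"] = (
        emp_2006["employ_ij"] / emp_2006.groupby("province")["employ_ij"].transform("sum")
    )

    # Industry-level robot density: Robot_jt / employ_j,2006
    emp_j = (
        emp_2006.groupby("industry", as_index=False)["employ_ij"]
        .sum()
        .rename(columns={"employ_ij": "employ_j"})
    )
    robots = robots.merge(emp_j, on="industry")
    robots["density_jt"] = robots["robot_stock"] / robots["employ_j"]

    # Weight industry densities by baseline shares and sum within each province-year
    panel = emp_2006[["province", "industry", "share"]].merge(
        robots[["industry", "year", "density_jt"]], on="industry"
    )
    panel["weighted"] = panel["share"] * panel["density_jt"]
    return (
        panel.groupby(["province", "year"], as_index=False)["weighted"]
        .sum()
        .rename(columns={"weighted": "robot_density"})
    )
```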

Mediating variables

Labour productivity (LP). According to the definition and measurement method proposed by Marx’s labour theory of value, labour productivity is measured as the total social product minus intermediate goods, divided by the amount of labour consumed by the pure production sector. The specific calculation is $AL = (Y - k)/l$, where Y represents GDP, l represents employment, k represents capital depreciation, and AL represents labour productivity. Capital deepening (CD). The per capita fixed capital stock of industrial enterprises above a designated size is used in this study as a proxy variable for capital deepening. The refinement of the division of labour (DLR) is measured by the number of employees in producer services. Virtual agglomeration (VA) is measured mainly by extending the location entropy method used in traditional measures of industrial agglomeration, with weights assigned according to each region’s share of the country’s internet access ports. Because virtual agglomeration depends on digital technology and network information platforms, the industrial agglomeration degree of each region is first calculated using the number of practitioners in information transmission, computer services, and software, and is then multiplied by the internet port weight. The specific expression is $Agg_{it} = (M_{it}/M_t)/(E_{it}/E_t) \times (Net_{it}/Net_t)$, where $M_{it}$ represents the number of practitioners in information transmission, computer services, and software in region i in year t, $M_t$ represents the total number of national employees in this industry, $E_{it}$ represents the total number of employees in region i, $E_t$ represents the total number of national employees, $Net_{it}$ represents the number of internet broadband access ports in region i, and $Net_t$ represents the total number of internet broadband access ports in the country. VA represents the degree of virtual agglomeration.
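The same index can be sketched in a few lines of pandas; the column names (ict_workers, total_workers, ports) are assumptions for illustration, not the paper’s variable names.

```python
import pandas as pd

def virtual_agglomeration(df: pd.DataFrame) -> pd.Series:
    """Agg_it = (M_it / M_t) / (E_it / E_t) * (Net_it / Net_t).

    df columns (illustrative): 'region', 'year',
    'ict_workers' -> M_it, 'total_workers' -> E_it, 'ports' -> Net_it.
    """
    g = df.groupby("year")
    m_share = df["ict_workers"] / g["ict_workers"].transform("sum")      # M_it / M_t
    e_share = df["total_workers"] / g["total_workers"].transform("sum")  # E_it / E_t
    net_weight = df["ports"] / g["ports"].transform("sum")               # Net_it / Net_t
    return (m_share / e_share) * net_weight  # location entropy scaled by the internet-port weight
```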

Control variables

To avoid endogeneity problems caused by unobserved variables and to obtain more accurate estimation results, seven control variables were also selected. Road accessibility (RA) is measured by the actual road area at the end of the year. Industrial structure (IS) is measured by the ratio of the tertiary industry’s added value to the secondary industry’s added value. The full-time equivalent of R&D personnel is used to measure R&D investment (RD). Wage cost (WC) is measured using the city average salary as a proxy variable; marketization (MK) is measured using the Fan Gang marketization index as a proxy variable; urbanization (UR) is measured by the proportion of the urban population in the total population at the end of the year; and the proportion of general budget expenditure in GDP is used to measure macrocontrol (MC).

Econometric model

To investigate the impact of AI on employment, based on the selection and definition of the variables detailed above and by mapping the research ideas to an empirical model, the following linear regression model is constructed:
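With the symbols defined just below, the specification implied here is the standard two-way fixed-effects regression; the label $\gamma_m$ for the control coefficients is my own notation, since the text does not name them:

$$ES_{it} = \delta_0 + a\,AI_{it} + \sum_{m} \gamma_m\,Control_{m,it} + \mu_i + \nu_t + \varepsilon_{it} \qquad (2)$$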

In Eq. (2), ES represents the scale of manufacturing employment, AI represents artificial intelligence, and the subscripts t, i and m represent time t, individual i and the m-th control variable, respectively. $\mu_i$, $\nu_t$ and $\varepsilon_{it}$ represent the individual effect, time effect and random disturbance term, respectively. $\delta_0$ is the constant term, a is the parameter to be fitted, and Control represents a series of control variables. To further test whether the mechanism variables mediate the effect of AI on employment, only the influence of AI on the mechanism variables is tested in the empirical part, following the modelling process and operational suggestions for mediation analysis proposed by Jiang (2022), in order to overcome the inherent defects of the stepwise mediation approach. On the basis of Eq. (2), the following econometric model is constructed:
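By the same logic, Eq. (3) presumably replaces the dependent variable with the mechanism variable (again a reconstruction from the definitions given in the text):

$$Media_{it} = \delta_0 + \beta_1\,AI_{it} + \sum_{m} \gamma_m\,Control_{m,it} + \mu_i + \nu_t + \varepsilon_{it} \qquad (3)$$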

In Eq. (3), Media represents the mechanism variable. $\beta_1$ represents the degree of influence of AI on the mechanism variable, and its significance and sign are what need to be emphasized. The meanings of the remaining symbols are consistent with those of Eq. (2).

Data sources

Following the principle of data availability, the panel data of 30 provinces (municipalities and autonomous regions) in China from 2006 to 2020 (samples from Tibet and Hong Kong, Macao, and Taiwan were excluded due to data availability) were used as statistical investigation samples. The raw data on the installed density of industrial robots and the number of workers in the manufacturing industry come from the International Federation of Robotics and the China Labour Statistics Yearbook. The original data for the remaining indicators came from the China Statistical Yearbook, China Population and Employment Statistical Yearbook, China’s Marketization Index Report by Province (2021), the provincial and municipal Bureau of Statistics, and the global statistical data analysis platform of the Economy Prediction System (EPS). The few missing values are supplemented through linear interpolation. It should be noted that although the IFR has yet to release the number of robots installed at the country-industry level in 2020, it has published the overall growth rate of new robot installations, which is used to calculate the robot stock in 2020 for this study. The descriptive statistical analysis of relevant variables is shown in Table 1 .

Empirical analysis

To reduce the volatility of the data and address the possible heteroscedasticity problem, all the variables are log-transformed. The results of the Hausman test and the F test both reject the null hypothesis at the 1% level, indicating that the fixed-effect model is the best-fitting model. Table 2 reports the fitting results of the baseline regression.
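A minimal sketch of such a two-way fixed-effects fit is shown below, assuming a long-format provincial panel; the `linearmodels` workflow and the column names are my assumptions, since the paper does not state its software.

```python
import pandas as pd
from linearmodels.panel import PanelOLS

# Long-format panel: one row per province-year, variables already log-transformed.
# Column names are illustrative.
df = pd.read_csv("province_panel.csv").set_index(["province", "year"])
controls = ["RA", "IS", "RD", "WC", "MK", "UR", "MC"]

model = PanelOLS(
    dependent=df["ES"],
    exog=df[["AI"] + controls],
    entity_effects=True,  # province fixed effects (mu_i)
    time_effects=True,    # year fixed effects (nu_t)
)
result = model.fit(cov_type="clustered", cluster_entity=True)
print(result.summary)
```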

As shown in Table 2 , the results of the two-way fixed-effect (TWFE) model displayed in Column (5) show that the fitting coefficient of AI on employment is 0.989 and is significant at the 1% level. At the same time, the fitting results of other models show that the impact of AI on employment is significantly positive. The results confirm that the effect of AI on employment is positive and the effect of job creation is greater than the effect of destruction, and these conclusions are robust, thus verifying the employment creation mechanism of technological progress. Research Hypothesis 1 (H1) is supported. The new round of scientific and technological revolution represented by artificial intelligence involves the upgrading of traditional industries, the promotion of major changes in the economy and society, the driving of rapid development of the “unmanned economy,” the spawning a large number of new products, new technologies, new formats, and new models, and the provision of more possibilities for promoting greater and higher quality employment. Classical and neoclassical economics view the market mechanism as a process of automatic correction that can offset the job losses caused by labour-saving technological innovation. Under the premise of the “employment compensation” theory, the new products, new models, and new industrial sectors created by the progress of AI technology can directly promote employment. At the same time, the scale effect caused by advanced productivity results in lower product prices and higher worker incomes, which drives increased demand and economic growth, increasing output growth and employment (Ge and Zhao 2023 ). In conjunction with the empirical results of this paper, we have reason to believe that enterprises adopt the strategy of “machine replacement” to replace procedural and repetitive labour positions in the pursuit of high efficiency and high profits. However, AI improves not only enterprises’ production efficiency but also their production capacity and scale economy. To occupy a favourable share of market competition, enterprises expand the scale of reproduction. At this point, new and more complex tasks continue to emerge, eventually leading companies to hire more labour. At this stage, robot technology and application in developing countries are still in their infancy. Whether regarding the application scenario or the application scope of robots, the automation technology represented by industrial robots has not yet been widely promoted, which increases the time required for the automation technology to completely replace manual tasks, so the destruction effect of automation technology on jobs is not apparent. The fundamental market situation of the low cost of China’s labour market drives enterprises to pay more attention to technology upgrading and efficiency improvement when introducing industrial robots. The implementation of the machine replacement strategy is mainly caused by the labour shortage driven by high work intensity, high risk, simple process repetition, and poor working conditions. The intelligent transformation of enterprises points to more than the simple saving of labour costs (Dixon et al. 2021 ).

Robustness test

The above results show that the effect of AI on job creation is greater than its substitution effect, and that on the whole it has enhanced enterprises’ demand for labour. To verify the robustness of the benchmark results, the following three checks are adopted in this study. First, we replace the explained variable. In addition to industrial manufacturing, robots are widely used in service industries, such as medical care, finance, catering, and education. To reflect the dynamic relationship between the employment share of the manufacturing sector and total employment across all sectors, the absolute number of manufacturing employees is replaced by the ratio of manufacturing employment to total employment. The second check adds omitted variables. Since many factors affect employment, this paper adds living costs, human capital, population density, and union power to the basic regression model. The impact of these variables on employment is noticeable; for example, the existence of trade unions improves employee welfare and the working environment but raises the entry barrier for workers in the external market. The added variables are proxied by the average selling price of commercial and residential buildings, urban population density (persons/square kilometre), the nominal human capital stock from the China Human Capital Report 2021 issued by the Central University of Finance and Economics, and the number of grassroots trade union organizations. The third check uses linear regression (fitted by gradient descent) from machine learning to calculate the importance of AI to the increase in employment size. The machine learning model has a higher goodness of fit and better fitting effect on the predicted data, and its mean squared error and mean absolute error are smaller (Wang Y et al. 2022).
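Continuing with the panel sketched earlier, the third check might look roughly like this; reading variable “importance” off standardized gradient-descent coefficients is my interpretation of the authors’ description, and the names are again illustrative.

```python
import numpy as np
from sklearn.linear_model import SGDRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X = df[["AI"] + controls].to_numpy()
y = df["ES"].to_numpy()

# Linear regression fitted by (stochastic) gradient descent on standardized features.
pipe = make_pipeline(StandardScaler(), SGDRegressor(loss="squared_error", max_iter=10_000, random_state=0))
pipe.fit(X, y)

coefs = np.abs(pipe.named_steps["sgdregressor"].coef_)
importance = coefs / coefs.sum()  # each regressor's share of the total absolute coefficient mass
print(dict(zip(["AI"] + controls, importance.round(3))))
```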

As seen from the robustness part of Table 3 , the results of Method 1 show that AI exerts a positive impact on the employment share in the manufacturing industry; that is, AI can increase the proportion of employment in the manufacturing industry, the use of AI creates more derivative jobs for the manufacturing industry, and the demand for the labour force of enterprises further increases. The results of method 2 show that after increasing the number of control variables, the influence of robots on employment remains significantly positive, indicating no social phenomenon of “machine replacement.” The results of method 3 show that the weight of AI is 84.3%, indicating that AI can explain most of the increase in the manufacturing employment scale and has a positive promoting effect. The above three methods confirm the robustness of the baseline regression results.

Endogeneity problem

Although additional control variables are used to alleviate the endogeneity problem caused by omitted variables to the greatest extent possible, the bidirectional causal relationship between labour demand and robot installation (for example, enterprises tend to passively adopt the machine replacement strategy in the case of labour shortages and recruitment difficulties) still threatens the accuracy of the statistical inferences in this paper. To address the potential endogeneity problem of the model, the two-stage least squares method (2SLS) is applied. In general, the cost factors that enterprises need to consider when introducing industrial robots include not only the comparative advantage between the efficiency of machinery and the costs of equipment and labour wages but also the cost of electricity needed to keep machinery and equipment running efficiently. Changes in industrial electricity prices alter the trade-off between installing robots and hiring workers, and decision-makers need to reweigh the costs and profits of intelligent transformation. Changes in industrial electricity prices can therefore affect enterprises’ demand for labour; this path does not act on the labour market directly but rather operates through the power consumption, work efficiency, and equipment prices of robots. Therefore, industrial electricity prices are exogenous with respect to employment but correlated with the demand for robots.

Electricity production and operation can be divided into power generation, transmission, distribution, and sales. China has integrated transmission and distribution, so there are two critical prices in practice: the on-grid (feed-in) tariff and the sales tariff (Yu and Liu 2017). The government determines the on-grid tariff according to different cost-plus models, and its regulatory policy has roughly proceeded from principal-and-interest repayment, through operating-period pricing, to benchmark pricing. The sales price (also known as the catalogue price) is the price of electric energy sold by power grid operators to end users, and its price structure is based on the “electric heating price” that was implemented in 1976. There is differentiated pricing between industrial and agricultural electricity. Generally, government departments formulate on-grid tariffs by integrating the interests of power plants, grid enterprises, and end users. As China’s installed thermal power capacity accounts for more than 70% of total generator capacity, the price of coal is an essential factor affecting the industrial on-grid tariff. The pricing strategy for electricity sales is not determined by the market-oriented transmission and distribution price, the on-grid price, or taxes, but rather by the goal of “stable growth and ensuring people’s livelihood” (Tang and Yang 2014). The feed-in price is therefore more clearly exogenous, so this paper chooses the feed-in price as the instrumental variable.
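A sketch of the corresponding 2SLS estimation, with the feed-in tariff instrumenting robot density, might look as follows; the `linearmodels` IV interface and the column names are assumptions, and province and year fixed effects are omitted for brevity (in practice they would be added as dummies or absorbed beforehand).

```python
from linearmodels.iv import IV2SLS

# 'feed_in_price' (the on-grid tariff) instruments the endogenous robot-density variable 'AI'.
iv_model = IV2SLS(
    dependent=df["ES"],
    exog=df[controls],
    endog=df[["AI"]],
    instruments=df[["feed_in_price"]],
)
iv_result = iv_model.fit(cov_type="robust")
print(iv_result.first_stage)  # instrument strength / weak-instrument diagnostics
print(iv_result.summary)
```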

It can be seen from Table 3 that, in the first stage, the instrumental variable positively affects robot installation density at the 1% significance level. Meanwhile, the validity tests of the instrumental variable show no weak-instrument or underidentification problems, satisfying the principles of relevance and exclusivity. The second-stage results show that robots still positively affect the demand for labour at the 1% level, but the fitted coefficient is smaller than that of the benchmark regression model. In summary, the results obtained under the causal-inference paradigm still support the conclusion that robots create more jobs and increase the labour demand of enterprises.

Extensibility analysis

Robot adoption and gender bias

The quantity and quality of labour needed by various industries in the manufacturing sector vary greatly, and labour-intensive and capital-intensive industries have different labour needs. Over the past few decades, the demand for female employees has grown. Female employees obtain more job opportunities and better salaries today (Zhang et al. 2023 ). Female employees may benefit from reducing the content of manual labour jobs, meaning that further study of AI heterogeneity from the perspective of gender bias may be needed. As seen from Table 4 , AI has a significant positive impact on the employment of both male and female practitioners, indicating that AI technology does not have a heterogeneous effect on the dynamic gender structure. By comparing the coefficients of the two (the estimated results for men and those for women), it can be found that robots have a more significant promotion effect on female employees. AI has significantly improved the working environment of front-line workers, reduced the level of labour intensity, enabled people to free themselves of dirty and heavy work tasks, and indirectly improved the job adaptability of female workers. Intellectualization increases the flexibility of the time, place, and manner of work for workers, correspondingly improves the working freedom of female workers, and alleviates the imbalance in the choice between family and career for women to a certain extent (Lu et al. 2023 ). At the same time, women are born with the comparative advantage of cognitive skills that allow them to pay more nuanced attention to work details. By introducing automated technology, companies are increasing the demand for cognitive skills such as mental labour and sentiment analysis, thus increasing the benefits for female workers (Wang and Zhang 2022 ). Flexible employment forms, such as online car hailing, community e-commerce, and online live broadcasting, provide a broader stage for women’s entrepreneurship and employment. According to the “Didi Digital Platform and Female Ecology Research Report”, the number of newly registered female online taxi drivers in China has exceeded 265,000 since 2020, and approximately 60 percent of the heads of the e-commerce platform, Orange Heart, are women.

Industry heterogeneity

Given the significant differences in factor combinations across the different industries in China’s manufacturing sector, there is also a significant gap in robot installation density between industries with different production characteristics, which suggests that the employment effects of AI may differ, or even point in opposite directions, across industries. According to the number of employees and their salary level, capital stock, R&D investment, and patent technology, the manufacturing industry is divided into labour-intensive (LI), capital-intensive (CI), and technology-intensive (TI) industries.

As the industry-specific results in Table 4 show, the impact of AI on employment is significantly positive in all three types of industries, which is consistent with the results of Beier et al. (2022). Labour-intensive industries absorb more workers, and their practitioners share more of the digital dividend, which is generally in line with expectations: in the labour-intensive case, the regression coefficient of AI on employment is 0.054, noticeably larger than in the other two industry groups. This suggests that enterprises use AI to replace labour in procedural, process-based positions in pursuit of cost efficiency, but the scale effect generated by higher production efficiency raises labour demand, namely the productivity and compensation effects. For example, AGV handling robots replace porters in monotonous, repetitive, high-intensity work, enabling unmanned warehousing and the automatic handling of goods, semifinished products, and raw materials in the production process. This lowers storage costs while improving logistics efficiency, freeing capital that enterprises can invest in expanding market share and extending the industrial chain.

Mechanism test

To reveal the path mechanisms through which AI affects employment, the mediation model constructed in Eq. (3) was estimated with the TWFE model, in line with H2 and H3; the results are shown in Table 5.
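
For concreteness, the following is a hedged sketch of how such TWFE mediation regressions could be run in Python with the linearmodels package; the variable names (ai_density for the AI measure, the four mediator columns, and the controls) are placeholders and do not reproduce the paper's actual specification in Eq. (3).

```python
# Hedged sketch of two-way fixed effects (TWFE) regressions of each mediator on the AI measure.
import pandas as pd
from linearmodels.panel import PanelOLS

df = pd.read_csv("province_panel.csv").set_index(["province", "year"])  # hypothetical panel

mediators = ["capital_deepening", "labour_productivity", "division_of_labour", "virtual_agglomeration"]
for m in mediators:
    # EntityEffects and TimeEffects absorb province and year fixed effects (the two "ways").
    model = PanelOLS.from_formula(
        f"{m} ~ ai_density + lnwage + capital + EntityEffects + TimeEffects", data=df
    )
    res = model.fit(cov_type="clustered", cluster_entity=True)
    print(m, round(res.params["ai_density"], 3), round(res.pvalues["ai_density"], 3))
```

A mediator with a significantly positive coefficient on the AI measure, and which in turn raises employment, supports the corresponding indirect channel; this is how the estimates in Table 5 are read below.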

Table 5 shows that the fitted coefficients of AI on capital deepening, labour productivity, and the division of labour are 0.052, 0.071, and 0.302, respectively, all significant at the 1% level, indicating that AI can promote employment through these three mechanisms; research Hypothesis 2 (H2) is therefore supported. Compared with workshop and handicraft production, machine production has driven a far broader social division of labour. Intelligent transformation helps to open up internal and external data chains, improve the combination of production factors, and reduce costs while increasing efficiency, enabling the high-quality development of enterprises. At the macro level, robotics affects firms' labour demand through its impact on social productivity, industrial structure, and product prices. At the micro level, robot technology changes the employment carrier, skill requirements, and form of employment, and affects the matching of labour supply and demand. The combination of price and income effects drives the job-creating impact of technological progress: while improving labour productivity, AI technology reduces production costs; with nominal income constant, market demand for the product rises, which drives the expansion of industrial scale and output and, in turn, increases the demand for labour. At the same time, the emergence of robotics has refined the division of labour. Most importantly, AI delivers productivity improvements that pure labour input cannot match: it enables 24-hour automated operation, reduces error rates, improves precision, and accelerates production.

Table 5 also shows that the fitted coefficient of AI on virtual agglomeration is 0.141, significant at the 5% level, indicating that AI and digital technology can promote employment by increasing the degree to which enterprises agglomerate in the cloud and on networks; research Hypothesis 3 is therefore supported. The industrial internet, AI, collaborative robots, and optical fidelity information transmission technology are necessary for the future of manufacturing, and smart factories will become its ultimate direction. Under the intelligent manufacturing model, with cloud links, industrial robots, and the technological depth needed for autonomous management, the proximity advantage of geographic agglomeration gradually fades. The pan-connective features of digital technology break through the situational constraints of work, reshaping the static, linear, and demarcated organizational structures and management modes of the industrial era into increasingly dynamic, networked, borderless organizational forms, so that traditional work tasks can now be carried out on broader network platforms through online office tools and online meetings. While reducing costs and increasing efficiency, such connectivity also creates new occupations that rely on these networks to achieve efficient virtual agglomeration. In addition, robot technology has broken the fixed link between people and posts: the previous one-person-one-post matching mode has gradually evolved into an organizational structure in which multiple people fill multiple posts, providing more diverse and inclusive jobs for different groups.

Conclusions and policy implications

Research conclusion

The decisive impact of digitization and automation on the functioning of all of society's subsystems is indisputable. Technological progress alone does not give technology a purpose; its value can only be defined by its application in the social context in which it emerges (Rakowski et al. 2021). The recent launch of the intelligent chatbot ChatGPT by the US artificial intelligence company OpenAI, with its powerful word-processing and human-computer interaction capabilities, has once again sparked global concern about the potential impact on employment in related industries. Automation technology, represented by intelligent manufacturing, profoundly affects the map of labour supply and demand and significantly shapes economic and social development. The application of industrial robots is a concrete reflection of the integration of AI technology with industry; their widespread adoption in manufacturing has changed production methods and affected the labour market. This paper first delineates the internal mechanism through which AI affects employment and then conducts empirical tests based on panel data from 30 Chinese provinces (municipalities and autonomous regions, excluding Hong Kong, Macao, Taiwan, and Xizang) from 2006 to 2020. Consistent with the theory of "employment compensation," the research shows that the overall impact of AI on employment is positive, revealing a pronounced job-creation effect: the effect of automation technology on the labour market is mainly positive, an "icing on the cake" effect. This conclusion is consistent with the literature (Sharma and Mishra 2023; Feng et al. 2024) and holds after replacing variables, adding previously omitted variables, and controlling for endogeneity. The positive role of AI in promoting employment is not reversed by gender or industry differences; rather, AI brings greater digital welfare to female practitioners and to workers in labour-intensive industries, while the overall share of male practitioners in manufacturing declines relatively. Mechanism analysis shows that AI drives employment by promoting capital deepening, the division of labour, and labour productivity. The digital trade derived from digital technology and internet platforms has pushed traditional industrial agglomeration toward virtual agglomeration; the resulting network-based space of flows is more conducive to the free spillover of knowledge, technology, and creativity, and amplifies the agglomeration effect and its mechanisms many times over. Industrial virtual agglomeration has thus become a new mechanism and an essential channel through which AI promotes employment, helping to enhance labour autonomy, improve job suitability, and encourage enterprises to share the welfare of labour among "cultivation areas."

Policy implications

Technology is neutral; what matters is how it is used. As an open, general-purpose technology, artificial intelligence represents significant progress in productivity and is an essential driving force for economic development, but it also inevitably poses potential risks and social problems. By revealing the impact of automation technology on China's labour market at the present stage, this study helps to clarify the debate over whether technology replaces jobs, and its findings can ease the social anxiety caused by fear of machine replacement. From the above conclusions, the following implications can be drawn.

Investment in AI research and development should be increased, and the high-end development of domestic robots should be accelerated. The development of AI has not only improved production efficiency but also changed industrial and labour structures, generating new jobs even as it replaces human labour; at present, its impact on employment in China is positive and helps to stabilize employment. It is necessary to speed up the construction of information infrastructure, accelerate the intelligent upgrading of traditional physical infrastructure, and promote intelligent infrastructure inclusively. The development dividend of 5G and the digital economy can be used to increase investment in new infrastructure such as cloud computing, the Internet of Things, blockchain, and the industrial internet, and to raise the level of intelligent applications across industries. Old infrastructure should be intelligently transformed into smart new infrastructure: traditional facilities such as power networks, reservoirs, rivers, and urban sewer systems can be digitally upgraded by deploying sensors and connecting them to algorithms so that infrastructure problems are solved more intelligently. Beyond infrastructure, industrial intelligence and automation should be used to promote the diversification and agglomeration of industrial lines, to accelerate the process of industrial intelligence, and to cultivate emerging industries and employment carriers, particularly emerging producer services. The development of domestic robots should be task- and application-oriented and should ensure the effective transformation of scientific and technological achievements under the guidance of the service economy. A "1 + 2 + N" collaborative innovation ecosystem should be constructed, focused on cultivating, incubating, and supporting critical technological innovation in each subsector of manufacturing, optimizing the layout, and forming a matrixed, multilevel service for transforming research results. Mechanisms that link research and production, such as technology investment and authorization, should be improved. Beyond standard robot system development technology, research and development on bionic perception and knowledge and other cutting-edge technologies is needed to overcome the core-technology "bottleneck" problem.

It is suggested that government departments improve the social security system and stabilize employment through multiple channels. First, the potential displacement of low-end labour by AI should be evaluated and monitored: government and enterprises should cooperate to build relevant information platforms, improve the transparency of labour market information, and form reasonable expectations of structural unemployment. Big data should be fully leveraged to build a sound national employment information monitoring platform, track dynamic changes in employment in key regions, key groups, and key positions in real time, release information on employment conditions, and provide early warning, forecasting, and prediction. Second, public services should play a backstop role: human resources and social security departments at all levels should improve the relevant social security system in a timely manner. A mixed-guarantee model can be adopted for workers at risk of unemployment, laws and regulations protecting the legitimate rights and interests of entrepreneurs and temporary employees should be improved, and the coverage of unemployment insurance and basic living allowances can be expanded gradually; for the extremely poor, the unemployed, and groups facing severe employment difficulties, public-welfare jobs or special subsidies can be used to stabilize basic livelihoods. Third, the working conditions of grassroots workers should be understood in greater depth, statistical investigation and professional evaluation of AI technology and related jobs should be strengthened, and skills training, employment assistance, and unemployment subsidies should be provided for workers displaced by AI, who should be encouraged to participate in vocational skills training to upgrade their skills. Workers should also be encouraged to use fragmented time to participate in the gig and sharing economies and achieve flexible employment where conditions allow. Finally, attention should be paid to the impact of AI on changing job demand in specific industries, especially transportation equipment manufacturing and the manufacturing of communications equipment, computers, and other electronic equipment.

It is suggested that education departments promote reform of the education and training system and deepen the coordinated development of industry-university-research collaboration. Big data, the Internet of Things, and AI, as new digital production factors, have penetrated daily economic activity, driving industrial change and shifts in the supply and demand of the job market. The heterogeneity analysis confirmed that AI brings considerable digital welfare to women and to workers in labour-intensive industrial enterprises, but to spread the technology dividend across the whole of society, human capital must be dynamically optimized and the adaptability of human-machine collaboration improved; otherwise, the disruption that intelligent technology brings to low-end, routine, and programmable work will be difficult to offset. AI promotes creativity in irregular, creative, and technical positions; hence, the mismatch between labour supply and demand and the slow transformation of the labour skill structure require attention. The relevant state administrative departments should take the lead in increasing investment in basic research and in forming a division of scientific labour in which enterprises increase their investment in experimental development and multiple actors participate in R&D. Relevant departments should clarify the urgent talent needs of the digital economy era, use reform of the education system as a guide, encourage colleges and universities to add majors related to AI and big data analysis, accelerate research on the skill needs of new careers and jobs, and establish a lifelong learning and employment training system that meets the needs of an innovative economy and an intelligent society. Training of innovative, technical, and professional personnel should be strengthened, with a focus on cultivating interdisciplinary talent and AI-related professionals, so as to improve workers' adaptability to new industries and technologies; the educational structure should be adjusted to increase workers' perceptual, creative, and social skills and knowledge and to cultivate the capabilities needed for the complex jobs of the future that AI will find difficult to replace. The lifelong education and training system should be improved, and enterprise employees should be encouraged to participate in vocational skills training and cultural learning through vocational and technical schools, enterprise universities, and personnel exchanges.

Research limitations

This study used panel data from 30 Chinese provinces from 2006 to 2020 to examine the impact of AI on employment with econometric models; its conclusions therefore apply only to China's economic reality during the sample period. Three shortcomings remain. First, the study investigates the employment-promoting effect and mechanisms of AI only at the macro level, constrained by coarse data granularity and a small sample, which reduce the reliability and validity of statistical inference. The digital economy has grown rapidly in the wake of the COVID-19 pandemic, and related industrial structures and job types have been affected by sudden public events, so an examination of AI's employment effects based on the most recent years of micro-data, particularly data obtained from field research, is urgently needed; combining the empirical analysis with case studies of enterprises undergoing digital transformation would also be very helpful. Second, although the two-way fixed effects model and the instrumental variable method can speak to causality to some extent, the conclusions are not causal inference in the strict sense. Owing to the lack of suitable policy pilots for industrial robots and digital parks, the topic cannot yet be evaluated thoroughly for policy appraisal or the calculation of resident welfare. Future research can exploit policies and institutions such as big data pilot zones, intelligent industrial parks, and digital economy demonstration zones to perform policy evaluations through quasi-natural experiments, using methods such as difference-in-differences (DID), regression discontinuity (RD), and the synthetic control method (SCM). In addition, the diffusion effect of introducing and installing industrial robots induces labour flows between regions, producing potential spatial spillovers; although a spatial econometric model is used above, it serves mainly as a robustness test and considers only the direct effect, so the spatial spillover perspective has yet to be discussed. Last, digital infrastructure, the workforce, and industrial structure differ from country to country; because the study focuses on a sample from China, its findings are only partially applicable to other countries. Future studies should therefore expand the sample of countries and explore and compare the possible heterogeneity of AI's effects by classifying countries according to their stage of development.

Data availability

The data generated and/or analyzed during the current study are provided in the Supplementary File “database”.

Acemoglu D, Restrepo P (2018) Low-Skill and High-Skill Automation. J Hum Cap 12(2):204–232. https://doi.org/10.1086/697242


Alan H (2023) A systematic bibliometric analysis on the current digital human resources management studies and directions for future research. J Chin Hum Resour Manag 14(1):38–59. https://doi.org/10.47297/wspchrmWSP2040-800502.20231401

Autor D (2019) Work of the past, work of the future. AEA Pap Proc 109(4):1–32. https://doi.org/10.1257/pandp.20191110

Balsmeier B, Woerter M (2019) Is this time different? How digitalization influences job creation and destruction. Res Policy 48(8):103765. https://doi.org/10.1016/j.respol.2019.03.010

Beier G, Matthess M, Shuttleworth L, Guan T, Grudzien DIDP, Xue B et al. (2022) Implications of Industry 4.0 on industrial employment: A comparative survey from Brazilian, Chinese, and German practitioners. Technol Soc 70:102028. https://doi.org/10.1016/j.techsoc.2022.102028

Berkers H, Smids J, Nyholm SR, Le Blanc PM (2020) Robotization and meaningful work in logistic warehouses: threats and opportunities. Gedrag Organisatie 33(4):324–347


Borland J, Coelli M (2017) Are robots taking our jobs? Aust Economic Rev 50(4):377–397. https://doi.org/10.1111/1467-8462.12245

Bouattour A, Kalai M, Helali K (2023) The nonlinear impact of technologies import on industrial employment: A panel threshold regression approach. Heliyon 9(10):e20266. https://doi.org/10.1016/j.heliyon.2023.e20266


Boyd R, Holton RJ (2018) Technology, innovation, employment and power: Does robotics and artificial intelligence really mean social transformation? J Sociol 54(3):331–345. https://doi.org/10.1177/1440783317726591

Chen Z (2023) Ethics and discrimination in artificial intelligence-enabled recruitment practices. Humanities Soc Sci Commun 10:567. https://doi.org/10.1057/s41599-023-02079-x

Dixon J, Hong B, Wu L (2021) The robot revolution: Managerial and employment consequences for firms. Manag Sci 67(9):5586–5605. https://doi.org/10.1287/mnsc.2020.3812

Duan SX, Deng H, Wibowo S (2023) Exploring the impact of digital work on work-life balance and job performance: a technology affordance perspective. Inf Technol People 36(5):2009–2029. https://doi.org/10.1108/ITP-01-2021-0013

Duan X, Zhang Q (2023) Industrial digitization, virtual agglomeration and total factor productivity. J Northwest Norm Univ(Soc Sci) 60(1):135–144. https://doi.org/10.16783/j.cnki.nwnus.2023.01.016

Dunn M (2020) Making gigs work: digital platforms, job quality and worker motivations. N. Technol Work Employ 35(2):232–249. https://doi.org/10.1111/ntwe.12167

Fabo B, Karanovic J, Dukova K (2017) In search of an adequate European policy response to the platform economy. Transf: Eur Rev Labour Res 23(2):163–175. https://doi.org/10.1177/1024258916688861

Feng R, Shen C, Guo Y (2024) Digital finance and labor demand of manufacturing enterprises: Theoretical mechanism and heterogeneity analysis. Int Rev Econ Financ 89(Part A):17–32. https://doi.org/10.1016/j.iref.2023.07.065

Filippi E, Bannò M, Trento S (2023) Automation technologies and their impact on employment: A review, synthesis and future research agenda. Technol Forecast Soc Change 191:122448. https://doi.org/10.1016/j.techfore.2023.122448

Fokam DNDT, Kamga BF, Nchofoung TN (2023) Information and communication technologies and employment in developing countries: Effects and transmission channels. Telecommun Policy 47(8):102597. https://doi.org/10.1016/j.telpol.2023.102597

Forsythe E, Kahn LB, Lange F, Wiczer D (2022) Where have all the workers gone? Recalls, retirements, and reallocation in the COVID recovery. Labour Econ 78:102251. https://doi.org/10.1016/j.labeco.2022.102251

Fossen FM, Sorgner A (2022) New digital technologies and heterogeneous wage and employment dynamics in the United States: Evidence from individual-level data. Technol Forecast Soc Change 175:121381. https://doi.org/10.1016/j.techfore.2021.121381

Frey CB, Osborne MA (2017) The future of employment: How susceptible are jobs to computerisation? Technol Forecast Soc Change 114:254–280. https://doi.org/10.1016/j.techfore.2016.08.019

Gardberg M, Heyman F, Norbäck P, Persson L (2020) Digitization-based automation and occupational dynamics. Econ Lett 189:109032. https://doi.org/10.1016/j.econlet.2020.109032

Ge P, Zhao Z (2023) The rise of robots and employment change: 2009–2017. J Renmin Univ China 37(1):102–115

Graf H, Mohamed H (2024) Robotization and employment dynamics in German manufacturing value chains. Struct Change Economic Dyn 68:133–147. https://doi.org/10.1016/j.strueco.2023.10.014

Han J, Yan X, Wei N (2022) Study on regional differences of the impact of artificial intelligence on China’s employment skill structure. Northwest Popul J 43(3):45–57. https://doi.org/10.15884/j.cnki.issn.1007-0672.2022.03.004


Huang M, Rust RT (2018) Artificial intelligence in service. J Serv Res 21(2):155–172. https://doi.org/10.1177/1094670517752459

Jiang T (2022) Mediating effects and moderating effects in causal inference. China Ind Econ 410(5):100–120. https://doi.org/10.19581/j.cnki.ciejournal.2022.05.005


Jin X, Ma B, Zhang H (2023) Impact of fast internet access on employment: Evidence from a broadband expansion in China. China Econ Rev 81:102038. https://doi.org/10.1016/j.chieco.2023.102038

Jetha A, Bonaccio S, Shamaee A, Banks CG, Bültmann U, Smith PM et al. (2023) Divided in a digital economy: Understanding disability employment inequities stemming from the application of advanced workplace technologies. SSM - Qual Res Health 3:100293. https://doi.org/10.1016/j.ssmqr.2023.100293

Josifidis K, Supic N (2018) Income polarization of the US working class: An institutionalist view. J Econ Issues 52(2):498–508. https://doi.org/10.1080/00213624.2018.1469929

Kirov V, Malamin B (2022) Are translators afraid of artificial intelligence? Societies 12(2):70. https://doi.org/10.3390/soc12020070

Kolade O, Owoseni A (2022) Employment 5.0: The work of the future and the future of work. Technol Soc 71:102086. https://doi.org/10.1016/j.techsoc.2022.102086

Li L, Mo Y, Zhou G (2022) Platform economy and China’ s labor market: structural transformation and policy challenges. China Econ J 15(2):139–152. https://doi.org/10.1080/17538963.2022.2067685

Li Q, Zhang R (2022) Study on the challenges and countermeasures of coordinated development of quantity and quality of employment under the new technology-economy paradigm. J Xiangtan Univ(Philos Soc Sci) 46(5):42–45+58. https://doi.org/10.13715/j.cnki.jxupss.2022.05.019

Li Z, Hong Y, Zhang Z (2021) The empowering and competition effects of the platform-based sharing economy on the supply and demand sides of the labor market. J Manag Inf Syst 38(1):140–165. https://doi.org/10.1080/07421222.2021.1870387

Liu L (2018) Occupational therapy in the fourth industrial revolution. Can J Occup Ther 85(4):272–283. https://doi.org/10.1177/0008417418815179


Liu N, Gu X, Lei CK (2022) The equilibrium effects of digital technology on banking, production, and employment. Financ Res Lett 49:103196. https://doi.org/10.1016/j.frl.2022.103196

Liu Y, Peng J (2023) The impact of “AI unemployment” on contemporary youth and its countermeasures. Youth Exploration 241(1):43–51. https://doi.org/10.13583/j.cnki.issn1004-3780.2023.01.004

Lu J, Xiao Q, Wang T (2023) Does the digital economy generate a gender dividend for female employment? Evidence from China. Telecommun Policy 47(6):102545. https://doi.org/10.1016/j.telpol.2023.102545

Luo J, Zhuo W, Xu B (2023) The bigger, the better? Optimal NGO size of human resources and governance quality of entrepreneurship in circular economy. Manag Decis, ahead-of-print. https://doi.org/10.1108/MD-03-2023-0325

Männasoo K, Pareliussen JK, Saia A (2023) Digital capacity and employment outcomes: Microdata evidence from pre- and post-COVID-19 Europe. Telemat Inform 83:102024. https://doi.org/10.1016/j.tele.2023.102024

Michau JB (2013) Creative destruction with on-the-job search. Rev Econ Dyn 16(4):691–707. https://doi.org/10.1016/j.red.2012.10.011

Morgan J (2019) Will we work in twenty-first century capitalism? A critique of the fourth industrial revolution literature. Econ Soc 48(3):371–398. https://doi.org/10.1080/03085147.2019.1620027

Nam T (2019) Technology usage, expected job sustainability, and perceived job insecurity. Technol Forecast Soc Change 138:155–165. https://doi.org/10.1016/j.techfore.2018.08.017

Ndubuisi G, Otioma C, Tetteh GK (2021) Digital infrastructure and employment in services: Evidence from Sub-Saharan African countries. Telecommun Policy 45(8):102153. https://doi.org/10.1016/j.telpol.2021.102153

Ni B, Obashi A (2021) Robotics technology and firm-level employment adjustment in Japan. Jpn World Econ 57:101054. https://doi.org/10.1016/j.japwor.2021.101054

Nikitas A, Vitel AE, Cotet C (2021) Autonomous vehicles and employment: An urban futures revolution or catastrophe? Cities 114:103203. https://doi.org/10.1016/j.cities.2021.103203

Novella R, Rosas-Shady D, Alvarado A (2023) Are we nearly there yet? New technology adoption and labor demand in Peru. Sci Public Policy 50(4):565–578. https://doi.org/10.1093/scipol/scad007

Oschinski A, Wyonch R (2017) Future shock? The impact of automation on Canada’s labour market. C.D. Howe Institute Commentary working paper

Polak P (2021) Welcome to the digital era—the impact of AI on business and society. Society 58:177–178. https://doi.org/10.1007/s12115-021-00588-6

Rakowski R, Polak P, Kowalikova P (2021) Ethical aspects of the impact of AI: the status of humans in the era of artificial intelligence. Society 58:196–203. https://doi.org/10.1007/s12115-021-00586-8

Ramos ME, Garza-Rodríguez J, Gibaja-Romero DE (2022) Automation of employment in the presence of industry 4.0: The case of Mexico. Technol Soc 68:101837. https://doi.org/10.1016/j.techsoc.2021.101837

Reljic J, Evangelista R, Pianta M (2021) Digital technologies, employment, and skills. Industrial and Corporate Change, dtab059. https://doi.org/10.1093/icc/dtab059

Schultz DE (1998) The death of distance—How the communications revolution will change our lives. Int Mark Rev 15(4):309–311. https://doi.org/10.1108/imr.1998.15.4.309.1

Sharma C, Mishra RK (2023) Imports, technology, and employment: Job creation or creative destruction. Manag Decis Econ 44(1):152–170. https://doi.org/10.1002/mde.3671

Shen Y, Yang Z (2023) Chasing green: The synergistic effect of industrial intelligence on pollution control and carbon reduction and its mechanisms. Sustainability 15(8):6401. https://doi.org/10.3390/su15086401

Spencer DA (2023) Technology and work: Past lessons and future directions. Technol Soc 74:102294. https://doi.org/10.1016/j.techsoc.2023.102294

Sun W, Liu Y (2023) Research on the influence mechanism of artificial intelligence on labor market. East China Econ Manag 37(3):1–9. https://doi.org/10.19629/j.cnki.34-1014/f.220706008

Tan H, Xia C (2022) Digital trade reshapes the theory and model of industrial agglomeration — From geographic agglomeration to online agglomeration. Res Financial Econ Issues 443(6):43–52. https://doi.org/10.19654/j.cnki.cjwtyj.2022.06.004

Tang J, Yang J (2014) Research on the economic impact of the hidden subsidy of sales price and reform. China Ind Econ 321(12):5–17. https://doi.org/10.19581/j.cnki.ciejournal.2014.12.001

Tschang FT, Almirall E (2021) Artificial intelligence as augmenting automation: Implications for employment. Acad Manag Perspect 35(4):642–659. https://doi.org/10.5465/amp.2019.0062

Wang PX, Kim S, Kim M (2023) Robot anthropomorphism and job insecurity: The role of social comparison. J Bus Res 164:114003. https://doi.org/10.1016/j.jbusres.2023.114003

Wang L, Hu S, Dong Z (2022) Artificial intelligence technology, Task attribute and occupational substitutable risk: Empirical evidence from the micro-level. J Manag World 38(7):60–79. https://doi.org/10.19744/j.cnki.11-1235/f.2022.0094

Wang R, Liang Q, Li G (2018) Virtual agglomeration: a new form of spatial organization with the deep integration of new generation information technology and real economy. J Manag World 34(2):13–21. https://doi.org/10.19744/j.cnki.11-1235/f.2018.02.002

Wang X, Zhu X, Wang Y (2022) The impact of robot application on manufacturing employment. J Quant Technol Econ 39(4):88–106. https://doi.org/10.13653/j.cnki.jqte.2022.04.002

Wang Y, Zhang Y (2022) Dual employment effect of digital economy and higher quality employment development. Expanding Horiz 231(3):43–50

Wang Y, Zhang Y, Liu J (2022) Digital finance and carbon emissions: an empirical test based on micro data and machine learning model. China Popul,Resour Environ 32(6):1–11

Wen J, Liu Y (2021) Uncertainty of new employment form: Digital labor in platform capital space and the reflection on it. J Zhejiang Gongshang Univ 171(6):92–106. https://doi.org/10.14134/j.cnki.cn33-1337/c.2021.06.009

Wong SI, Fieseler C, Kost D (2020) Digital labourers’ proactivity and the venture for meaningful work: Fruitful or fruitless? J-of-Occup-and-Organ-Psychol 93(4):887–911. https://doi.org/10.1111/joop.12317

Wu B, Yang W (2022) Empirical test of the impact of the digital economy on China’s employment structure. Financ Res Lett 49:103047. https://doi.org/10.1016/j.frl.2022.103047

Wu Q (2023) Sustainable growth through industrial robot diffusion: Quasi-experimental evidence from a Bartik shift-share design. Economics of Transition and Institutional Change Early Access https://doi.org/10.1111/ecot.12367

Xie M, Dong L, Xia Y, Guo J, Pan J, Wang H (2022) Does artificial intelligence affect the pattern of skill demand? Evidence from Chinese manufacturing firms. Econ Model 96:295–309. https://doi.org/10.1016/j.econmod.2021.01.009

Yan X, Zhu K, Ma C (2020) Employment under robot Impact: Evidence from China manufacturing. Stat Res 37(1):74–87. https://doi.org/10.19343/j.cnki.11-1302/c.2020.01.006

Yang CH (2022) How artificial intelligence technology affects productivity and employment: Firm-level evidence from Taiwan. Res Policy 51(6):104536. https://doi.org/10.1016/j.respol.2022.104536

Yang Z, Shen Y (2023) The impact of intelligent manufacturing on industrial green total factor productivity and its multiple mechanisms. Front Environ Sci 10:1058664. https://doi.org/10.3389/fenvs.2022.1058664

Yoon C (2023) Technology adoption and jobs: The effects of self-service kiosks in restaurants on labor outcomes. Technol Soc 74:102336. https://doi.org/10.1016/j.techsoc.2023.102336

Yu L, Liu Y (2017) Consumers’ welfare in China’s electric power industry competition. Res Econ Manag 38(8):55–64. https://doi.org/10.13502/j.cnki.issn1000-7636.2017.08.006

Zhang Q, Zhang F, Mai Q (2023) Robot adoption and labor demand: A new interpretation from external competition. Technol Soc 74:102310. https://doi.org/10.1016/j.techsoc.2023.102310

Zhang Y, Li X (2022) The new digital infrastructure, gig employment and spatial spillover effect. China Bus Mark 36(11):103–117. https://doi.org/10.14089/j.cnki.cn11-3664/f.2022.11.010


Zhang Z (2023a) The impact of the artificial intelligence industry on the number and structure of employments in the digital economy environment. Technol Forecast Soc Change 197:122881. https://doi.org/10.1016/j.techfore.2023.122881

Zhao L, Zhao X (2017) Is AI endangering human job opportunities?—From a perspective of marxism. J Hebei Univ Econ Bus 38(6):17–22. https://doi.org/10.14178/j.cnki.issn1007-2101.2017.06.004

Zhou S, Chen B (2022) Robots and industrial employment: Based on the perspective of subtask model. Stat Decis 38(23):85–89. https://doi.org/10.13546/j.cnki.tjyjc.2022.23.016


Acknowledgements

This work was financially supported by the Natural Science Foundation of Fujian Province (Grant No. 2022J01320).

Author information

Authors and affiliations.

Institute of Quantitative Economics, Huaqiao University, Xiamen, 361021, China

Yang Shen & Xiuwu Zhang


Contributions

YS: Data analysis, Writing – original draft, Software, Methodology, Formal analysis; XZ: Data collection; Supervision, Project administration, Writing – review & editing, Funding acquisition. All authors substantially contributed to the article and accepted the published version of the manuscript.

Corresponding author

Correspondence to Yang Shen.

Ethics declarations

Competing interests.

The authors declare no competing interests.

Ethical approval

This article does not contain any studies featuring human participants performed by any of the authors.

Informed consent

This article does not include any studies with human participants performed by any of the authors; informed consent was therefore not required.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Rights and permissions.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Cite this article.

Shen, Y., Zhang, X. The impact of artificial intelligence on employment: the role of virtual agglomeration. Humanit Soc Sci Commun 11, 122 (2024). https://doi.org/10.1057/s41599-024-02647-9


Received : 23 August 2023

Accepted : 09 January 2024

Published : 18 January 2024

DOI : https://doi.org/10.1057/s41599-024-02647-9



Artificial Intelligence Essay

500+ Words Essay on Artificial Intelligence

Artificial intelligence (AI) has entered our daily lives through mobile devices and the Internet. Governments and businesses are increasingly using AI tools and techniques to solve business problems and improve many processes, especially online ones. Such developments bring new realities to social life that may not have been experienced before. This essay on Artificial Intelligence will help students understand the various advantages of using AI and how it has made our lives easier and simpler. At the end, it also describes the future scope of AI and its potential harmful effects. To get a good command of essay writing, students must practise CBSE Essays on different topics.

Artificial Intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs. It is concerned with getting computers to do tasks that would normally require human intelligence. AI systems are essentially software systems (or controllers for robots) that use techniques such as machine learning and deep learning to solve problems in particular domains without hard-coding every possibility (i.e., every algorithmic step) in software. As a result, AI has begun to offer promising solutions for industry and business as well as for our daily lives.
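
To make the contrast with hard-coded rules concrete, the following is a purely illustrative Python sketch; the scikit-learn library and its bundled iris dataset are assumptions of this example and are not mentioned in the essay itself. The model learns a classification rule from labelled examples instead of having a programmer write out every step.

```python
# Illustrative only: a model learns a classification rule from examples,
# rather than the rule being hard-coded by a programmer.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # example data: flower measurements and species labels
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)  # rules are induced from training data
print("Held-out accuracy:", model.score(X_test, y_test))
```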

Importance and Advantages of Artificial Intelligence

Advances in computing and digital technologies have a direct influence on our lives, businesses and social life. They have shaped our daily routines, from using mobile devices to active involvement on social media. AI systems are among the most influential of these digital technologies. With AI systems, businesses can handle large data sets and provide speedy, essential input to operations, and they can adapt to constant change and become more flexible.

By introducing Artificial Intelligence systems into devices, businesses are automating more and more of their processes. A new paradigm emerges from such intelligent automation, which now dictates not only how businesses operate but also who does the work. Many manufacturing sites can now operate fully automatically, with robots and without human workers. Artificial Intelligence brings unprecedented and unexpected innovations to the business world that many organizations will need to integrate to remain competitive and to move ahead of their competitors.

Artificial Intelligence also shapes our lives and social interactions through technological advancement. Many AI applications are developed specifically to provide better services to individuals, such as mobile phones, electronic gadgets and social media platforms. We delegate many of our activities to intelligent applications, such as personal assistants and smart wearable devices, and AI systems built into household appliances help us at home with cooking or cleaning.

Future Scope of Artificial Intelligence

In the future, intelligent machines will replace or enhance human capabilities in many areas. Artificial intelligence is becoming a popular field in computer science because it has enhanced what humans can do. Its applications are having a huge impact on many fields of life, helping to solve complex problems in areas such as education, engineering, business, medicine and weather forecasting. The work of many labourers can be done by a single machine. But Artificial Intelligence has another aspect: it can be dangerous for us. If we become completely dependent on machines, it can ruin our lives; we would not be able to do any work ourselves and would become lazy. Another disadvantage is that it cannot provide a human-like feeling. So machines should be used only where they are actually required.

Students must have found this essay on “Artificial Intelligence” useful for improving their essay writing skills. They can get study material and the latest updates on CBSE/ICSE/State Board/Competitive Exams at BYJU’S.


From the world wide web to AI: 11 technology milestones that changed our lives


The world wide web is a key technological milestone in the past 40 years. Image:  Unsplash/Ales Nesetril

Stephen Holroyd


  • It’s been 40 years since the launch of the Apple Macintosh personal computer.
  • Since then, technological innovation has accelerated – here are some of the most notable tech milestones over the past four decades.
  • The World Economic Forum’s EDISON Alliance aims to digitally connect 1 billion people to essential services like healthcare, education and finance by 2025.

On 24 January 1984, Apple unveiled the Macintosh 128K and changed the face of personal computers forever.

Steve Jobs’ compact, user-friendly computer brought the graphical user interface to a mass audience, marking a pivotal moment in the evolution of personal technology.

Since that day, the rate of technological innovation has exploded, with developments in computing, communication, connectivity and machine learning expanding at an astonishing rate.

Here are some of the key technological milestones that have changed our lives over the past 40 years.

1993: The world wide web

Although the internet’s official birthday is often debated, it was the invention of the world wide web that drove the democratization of information access and shaped the modern internet we use today.

Created by British scientist Tim Berners-Lee, the World Wide Web was launched to the public in 1993 and brought with it the dawn of online communication, e-commerce and the beginning of the digital economy.

Despite the enormous progress since its invention, 2.6 billion people still lack internet access and global digital inclusion is considered a priority. The World Economic Forum’s EDISON Alliance aims to bridge this gap and digitally connect 1 billion people to essential services like healthcare, education and finance by 2025.

1997: Wi-Fi

The emergence of publicly available Wi-Fi in 1997 changed the face of internet access – removing the need to tether to a network via a cable. Without Wi-Fi, the smartphone and the ever-present internet connection we’ve come to rely on wouldn’t have been possible, and it has become an indispensable part of our modern, connected world.

1998: Google

The launch of Google’s search engine in 1998 marked the beginning of efficient web search, transforming how people across the globe accessed and navigated online information. Today, there are many others to choose from – Bing, Yahoo!, Baidu – but Google remains the world’s most-used search engine.

2004: Social media

Over the past two decades, the rise of social media and social networking has dominated our connected lives. In 2004, MySpace became the first social media site to reach one million monthly active users. Since then, platforms like Facebook, Instagram and TikTok have reshaped communication and social interaction, nurturing global connectivity and information sharing on an enormous scale, albeit not without controversy.

Most popular social networks worldwide as of January 2024, ranked by number of monthly active users

2007: The iPhone

More than a decade after the first smartphone had been introduced, the iPhone redefined mobile technology by combining a phone, music player, camera and internet communicator in one sleek device. It set new standards for smartphones and ultimately accelerated the explosion of smartphone usage we see across the planet today.

2009: Bitcoin

The foundations for modern digital payments were laid in the late 1950s with the introduction of the first credit and debit cards, but it was the invention of Bitcoin in 2009 that set the stage for a new era of secure digital transactions. The first decentralized cryptocurrency, Bitcoin introduced a new form of digital payment system that operates independently of traditional banking systems. Its underlying technology, blockchain, revolutionized the concept of digital transactions by providing a secure, transparent, and decentralized method for peer-to-peer payments. Bitcoin has not only influenced the development of other cryptocurrencies but has also sparked discussions about the future of money in the digital age.

2014: Virtual reality

2014 was a pivotal year in the development of virtual reality (VR) for commercial applications. Facebook acquired the Oculus VR company for $2 billion and kickstarted a drive for high-quality VR experiences to be made accessible to consumers. Samsung and Sony also announced VR products, and Google released the now discontinued Cardboard – a low-cost, do-it-yourself viewer for smartphones. The first batch of Oculus Rift headsets began shipping to consumers in 2016.

2015: Autonomous vehicles

Autonomous vehicles have gone from science fiction to science fact in the past two decades, and predictions suggest that almost two-thirds of registered passenger cars worldwide will feature partly-assisted driving and steering by 2025. In 2015, the introduction of Tesla’s Autopilot brought autonomous features to consumer vehicles, contributing to the mainstream adoption of self-driving technology.

Cars Increasingly Ready for Autonomous Driving

2019: Quantum computing

A significant moment in the history of quantum computing was achieved in October 2019 when Google’s Sycamore processor demonstrated “quantum supremacy” by solving a complex problem faster than the world’s most powerful supercomputers. Quantum technologies can be used in a variety of applications and offer transformative impacts across industries. The World Economic Forum’s Quantum Economy Blueprint provides a framework for value-led, democratic access to quantum resources to help ensure an equitable global distribution and avoid a quantum divide.

2020: The COVID-19 pandemic

The COVID-19 pandemic accelerated digital transformation on an unprecedented scale. With almost every aspect of human life impacted by the spread of the virus – from communicating with loved ones to how and where we work – the rate of innovation and uptake of technology across the globe emphasized the importance of remote work, video conferencing, telemedicine and e-commerce in our daily lives.

In response to the uncertainties surrounding generative AI and the need for robust AI governance frameworks to ensure responsible and beneficial outcomes for all, the Forum’s Centre for the Fourth Industrial Revolution (C4IR) has launched the AI Governance Alliance.

The Alliance will unite industry leaders, governments, academic institutions, and civil society organizations to champion responsible global design and release of transparent and inclusive AI systems.

2022: Artificial intelligence

Artificial intelligence (AI) technology has been around for some time and AI-powered consumer electronics, from smart home devices to personalized assistants, have become commonplace. However, the emergence of mainstream applications of generative AI has dominated the sector in recent years.

In 2022, OpenAI unveiled its chatbot, ChatGPT. Within a week, it had gained over one million users and become the fastest-growing consumer app in history. In the same year, DALL-E 2, a text-to-image generative AI tool, also launched.


License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.



  16. Essay on The Rise of Artificial Intelligence

    Artificial intelligence is an imitation of human knowledge that is programmed in different machines, using algorithms, to simulate the thought process and actions of humans. The first concepts for AI started in the 1950s, where many mathematicians, scientists, and philosophers explored the possibility of machines that problem solved and made ...

  17. Exploring Artificial Intelligence in Academic Essay: Higher Education

    Higher education perceptions of artificial intelligence. Studies have explored the diverse functionalities of these AI tools and their impact on writing productivity, quality, and students' learning experiences. The integration of Artificial Intelligence (AI) in writing academic essays has become a significant area of interest in higher education.

  18. The Future of Artificial Intelligence: Predictions and Challenges

    The forecasts and difficulties facing the development of artificial intelligence (AI) are examined in this essay. It examines how AI will permeate daily life more deeply, transform healthcare and ...

  19. Artificial Intelligence Essay

    Artificial Intelligence is the theory and development of computers, which imitates the human intelligence and senses, such as visual perception, speech recognition, decision-making, and translation between languages. Artificial Intelligence has brought a revolution in the world of technology.

  20. The impact of artificial intelligence on employment: the role of

    Sustainable Development Goal 8 proposes the promotion of full and productive employment for all. Intelligent production factors, such as robots, the Internet of Things, and extensive data analysis ...

  21. Art, Creativity, and the Potential of Artificial Intelligence

    Our essay discusses an AI process developed for making art (AICAN), and the issues AI creativity raises for understanding art and artists in the 21st century. Backed by our training in computer science (Elgammal) and art history (Mazzone), we argue for the consideration of AICAN's works as art, relate AICAN works to the contemporary art context, and urge a reconsideration of how we might ...

  22. 500+ Words Essay on Artificial Intelligence

    Artificial Intelligence Essay. Artificial Intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs. It is concerned with getting computers to do tasks that would normally require human intelligence. AI systems are basically software systems (or controllers for robots) that use ...

  23. The impact of artificial intelligence on human society and bioethics

    The new development of the long-term goal of many researchers is to create strong AI or artificial general intelligence (AGI) which is the speculative intelligence of a machine that has the capacity to understand or learn any intelligent task human being can, thus assisting human to unravel the confronted problem.

  24. How AI and other technology changed our lives

    Bitcoin has not only influenced the development of other cryptocurrencies but has also sparked discussions about the future of money in the digital age. 2014: Virtual reality. 2014 was a pivotal year in the development of virtual reality (VR) for commercial applications. ... Artificial intelligence (AI) technology has been around for some time ...

  25. Application and Exploration of Artificial Intelligence Technology in

    This paper study the application of AI technology among the student population in colleges and universities, collect data using a survey, and conduct statistical analysis based on the data to find that colleges and universities are in a position to apply the products of AI technology and have relatively good positive benefits. With the development of artificial intelligence technology, more ...

  26. PDF Module #1: Introduction to Artificial Intelligence

    Summarize the key elements of intelligence: perception, learning, memory, reasoning. Ask students to differentiate between intelligence and artificial intelligence. Provide a basic definition for artificial intelligence: technology that imitates intelligence. Activity 2: Examples of Artificial Intelligence

  27. Interdisciplinary Exploration of Ai and Social Science for ...

    The integration of artificial intelligence (AI) with social science has emerged as a promising avenue for addressing complex societal challenges in Sabah, Malaysia. This interdisciplinary approach seeks to harness the synergies between AI technologies and social science methodologies to promote sustainable development and preserve indigenous ...