How artificial intelligence is transforming the world

Darrell M. West (Senior Fellow, Center for Technology Innovation; Douglas Dillon Chair in Governmental Studies) and John R. Allen

April 24, 2018

Artificial intelligence (AI) is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decision making—and already it is transforming every walk of life. In this report, Darrell West and John Allen discuss AI’s application across a variety of sectors, address issues in its development, and offer recommendations for getting the most out of AI while still protecting important human values.

Table of Contents
I. Qualities of artificial intelligence
II. Applications in diverse sectors
III. Policy, regulatory, and ethical issues
IV. Recommendations
V. Conclusion


Most people are not very familiar with the concept of artificial intelligence (AI). As an illustration, when 1,500 senior business leaders in the United States were asked about AI in 2017, only 17 percent said they were familiar with it. 1 A number of them were not sure what it was or how it would affect their particular companies. They understood there was considerable potential for altering business processes, but were not clear how AI could be deployed within their own organizations.

Despite this widespread lack of familiarity, AI is a technology that is transforming every walk of life. It is a wide-ranging tool that enables people to rethink how we integrate information, analyze data, and use the resulting insights to improve decisionmaking. Our hope through this comprehensive overview is to explain AI to an audience of policymakers, opinion leaders, and interested observers, and demonstrate how AI already is altering the world and raising important questions for society, the economy, and governance.

In this paper, we discuss novel applications in finance, national security, health care, criminal justice, transportation, and smart cities, and address issues such as data access problems, algorithmic bias, AI ethics and transparency, and legal liability for AI decisions. We contrast the regulatory approaches of the U.S. and European Union, and close by making a number of recommendations for getting the most out of AI while still protecting important human values. 2

In order to maximize AI benefits, we recommend nine steps for going forward:

  • Encourage greater data access for researchers without compromising users’ personal privacy,
  • invest more government funding in unclassified AI research,
  • promote new models of digital education and AI workforce development so employees have the skills needed in the 21st-century economy,
  • create a federal AI advisory committee to make policy recommendations,
  • engage with state and local officials so they enact effective policies,
  • regulate broad AI principles rather than specific algorithms,
  • take bias complaints seriously so AI does not replicate historic injustice, unfairness, or discrimination in data or algorithms,
  • maintain mechanisms for human oversight and control, and
  • penalize malicious AI behavior and promote cybersecurity.

Qualities of artificial intelligence

Although there is no uniformly agreed upon definition, AI generally is thought to refer to “machines that respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment and intention.” 3  According to researchers Shubhendu and Vijay, these software systems “make decisions which normally require [a] human level of expertise” and help people anticipate problems or deal with issues as they come up. 4 As such, they operate in an intentional, intelligent, and adaptive manner.

Intentionality

Artificial intelligence algorithms are designed to make decisions, often using real-time data. They are unlike passive machines that are capable only of mechanical or predetermined responses. Using sensors, digital data, or remote inputs, they combine information from a variety of different sources, analyze the material instantly, and act on the insights derived from those data. With massive improvements in storage systems, processing speeds, and analytic techniques, they are capable of tremendous sophistication in analysis and decisionmaking.

Artificial intelligence is already altering the world and raising important questions for society, the economy, and governance.

Intelligence

AI generally is undertaken in conjunction with machine learning and data analytics. 5 Machine learning takes data and looks for underlying trends. If it spots something that is relevant for a practical problem, software designers can take that knowledge and use it to analyze specific issues. All that is required are data that are sufficiently robust that algorithms can discern useful patterns. Data can come in the form of digital information, satellite imagery, visual information, text, or unstructured data.
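
To make the pattern-finding step concrete, here is a minimal sketch using scikit-learn. The data set and the income/debt framing are synthetic and purely illustrative; the point is that the algorithm recovers a trend it was never explicitly told.

```python
# A minimal sketch of machine learning's pattern-finding step, using
# scikit-learn. The "loan" data here are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical records: [income, debt]; repayment is likelier when income
# is high relative to debt. The algorithm is never told this rule.
X = rng.uniform(0, 100, size=(1000, 2))
y = (X[:, 0] - X[:, 1] + rng.normal(0, 10, 1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# The learned coefficients recover the underlying trend (positive weight
# on income, negative on debt), which designers can then apply elsewhere.
print("coefficients:", model.coef_)
print("held-out accuracy:", model.score(X_test, y_test))
```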

Adaptability

AI systems have the ability to learn and adapt as they make decisions. In the transportation area, for example, semi-autonomous vehicles have tools that let drivers and vehicles know about upcoming congestion, potholes, highway construction, or other possible traffic impediments. Vehicles can take advantage of the experience of other vehicles on the road, without human involvement, and the entire corpus of their achieved “experience” is immediately and fully transferable to other similarly configured vehicles. Their advanced algorithms, sensors, and cameras incorporate experience in current operations, and use dashboards and visual displays to present information in real time so human drivers are able to make sense of ongoing traffic and vehicular conditions. And in the case of fully autonomous vehicles, advanced systems can completely control the car or truck, and make all the navigational decisions.


Applications in diverse sectors

AI is not a futuristic vision, but rather something that is here today and being integrated with and deployed into a variety of sectors. This includes fields such as finance, national security, health care, criminal justice, transportation, and smart cities. There are numerous examples where AI already is making an impact on the world and augmenting human capabilities in significant ways. 6

One of the reasons for the growing role of AI is the tremendous opportunities for economic development that it presents. A project undertaken by PriceWaterhouseCoopers estimated that “artificial intelligence technologies could increase global GDP by $15.7 trillion, a full 14%, by 2030.” 7 That includes advances of $7 trillion in China, $3.7 trillion in North America, $1.8 trillion in Northern Europe, $1.2 trillion for Africa and Oceania, $0.9 trillion in the rest of Asia outside of China, $0.7 trillion in Southern Europe, and $0.5 trillion in Latin America. China is making rapid strides because it has set a national goal of investing $150 billion in AI and becoming the global leader in this area by 2030.

Meanwhile, a McKinsey Global Institute study of China found that “AI-led automation can give the Chinese economy a productivity injection that would add 0.8 to 1.4 percentage points to GDP growth annually, depending on the speed of adoption.” 8 Although its authors found that China currently lags the United States and the United Kingdom in AI deployment, the sheer size of its AI market gives that country tremendous opportunities for pilot testing and future development.

Finance

Investments in financial AI in the United States tripled between 2013 and 2014 to a total of $12.2 billion. 9 According to observers in that sector, “Decisions about loans are now being made by software that can take into account a variety of finely parsed data about a borrower, rather than just a credit score and a background check.” 10 In addition, there are so-called robo-advisers that “create personalized investment portfolios, obviating the need for stockbrokers and financial advisers.” 11 These advances are designed to take the emotion out of investing, base decisions on analytical considerations, and make these choices in a matter of minutes.

A prominent example of this is taking place in stock exchanges, where high-frequency trading by machines has replaced much of human decisionmaking. People submit buy and sell orders, and computers match them in the blink of an eye without human intervention. Machines can spot trading inefficiencies or market differentials on a very small scale and execute trades that make money according to investor instructions. 12 Powered in some places by quantum computing, these tools have much greater capacities for storing information because of their emphasis not on a zero or a one, but on “quantum bits” that can store multiple values in each location. 13 That dramatically increases storage capacity and decreases processing times.

Fraud detection represents another way AI is helpful in financial systems. It sometimes is difficult to discern fraudulent activities in large organizations, but AI can identify abnormalities, outliers, or deviant cases requiring additional investigation. That helps managers find problems early in the cycle, before they reach dangerous levels. 14
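
As a rough illustration of that idea, the sketch below uses an isolation forest, one common outlier-detection technique, to flag unusual transactions for human review. The transaction data and thresholds are synthetic and arbitrary.

```python
# A hedged sketch of AI-based fraud screening: an isolation forest flags
# transactions that look anomalous relative to the bulk of activity.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)

# Mostly routine transactions, plus a handful of extreme outliers.
# Columns: dollar amount, transactions per day (all values synthetic).
normal = rng.normal(loc=[50, 5], scale=[15, 2], size=(990, 2))
suspicious = rng.normal(loc=[5000, 40], scale=[500, 5], size=(10, 2))
transactions = np.vstack([normal, suspicious])

detector = IsolationForest(contamination=0.01, random_state=0)
labels = detector.fit_predict(transactions)  # -1 marks an outlier

# Flagged cases go to human investigators rather than being auto-penalized.
flagged = np.where(labels == -1)[0]
print(f"{len(flagged)} transactions flagged for review")
```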

National security

AI plays a substantial role in national defense. Through its Project Maven, the American military is deploying AI “to sift through the massive troves of data and video captured by surveillance and then alert human analysts of patterns or when there is abnormal or suspicious activity.” 15 According to Deputy Secretary of Defense Patrick Shanahan, the goal of emerging technologies in this area is “to meet our warfighters’ needs and to increase [the] speed and agility [of] technology development and procurement.” 16

Artificial intelligence will accelerate the traditional process of warfare so rapidly that a new term has been coined: hyperwar.

The big data analytics associated with AI will profoundly affect intelligence analysis, as massive amounts of data are sifted in near real time—if not eventually in real time—thereby providing commanders and their staffs a level of intelligence analysis and productivity heretofore unseen. Command and control will similarly be affected as human commanders delegate certain routine, and in special circumstances, key decisions to AI platforms, reducing dramatically the time associated with the decision and subsequent action. In the end, warfare is a time-competitive process, where the side able to decide the fastest and move most quickly to execution will generally prevail. Indeed, artificially intelligent intelligence systems, tied to AI-assisted command and control systems, can move decision support and decisionmaking to a speed vastly superior to the speeds of the traditional means of waging war. So fast will be this process, especially if coupled to automatic decisions to launch artificially intelligent autonomous weapons systems capable of lethal outcomes, that a new term has been coined specifically to embrace the speed at which war will be waged: hyperwar.

While the ethical and legal debate is raging over whether America will ever wage war with artificially intelligent autonomous lethal systems, the Chinese and Russians are not nearly so mired in this debate, and we should anticipate our need to defend against these systems operating at hyperwar speeds. The challenge in the West of where to position “humans in the loop” in a hyperwar scenario will ultimately dictate the West’s capacity to be competitive in this new form of conflict. 17

Just as AI will profoundly affect the speed of warfare, the proliferation of zero-day or zero-second cyber threats as well as polymorphic malware will challenge even the most sophisticated signature-based cyber protection. This forces significant improvement to existing cyber defenses. Increasingly, vulnerable systems are migrating to, and will need to adopt, a layered approach to cybersecurity built on cloud-based, cognitive AI platforms. This approach moves the community toward a “thinking” defensive capability that can defend networks through constant training on known threats. This capability includes DNA-level analysis of heretofore unknown code, with the possibility of recognizing and stopping inbound malicious code by recognizing a string component of the file. This is how certain key U.S.-based systems stopped the debilitating “WannaCry” and “Petya” viruses.
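
The “string component” approach described here is, at its core, signature matching. A schematic sketch follows; the byte strings are invented placeholders, not actual WannaCry or Petya indicators.

```python
# A schematic sketch of signature-style scanning: flag a file if it contains
# a byte string previously associated with malicious code. The signatures
# below are invented placeholders, not real WannaCry/Petya indicators.
KNOWN_BAD_STRINGS = [
    b"example-malicious-marker-1",
    b"example-malicious-marker-2",
]

def scan_file(path: str) -> bool:
    """Return True if the file contains any known-bad byte string."""
    with open(path, "rb") as f:
        contents = f.read()
    return any(sig in contents for sig in KNOWN_BAD_STRINGS)

# Signature matching only catches what has been seen before; the "cognitive"
# defenses described above layer machine learning models on top in order to
# generalize to polymorphic variants that rewrite themselves to dodge
# fixed signatures.
```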

Preparing for hyperwar and defending critical cyber networks must become a high priority because China, Russia, North Korea, and other countries are putting substantial resources into AI. In 2017, China’s State Council issued a plan for the country to “build a domestic industry worth almost $150 billion” by 2030. 18 As an example of the possibilities, the Chinese search firm Baidu has pioneered a facial recognition application that finds missing people. In addition, cities such as Shenzhen are providing up to $1 million to support AI labs. That country hopes AI will provide security, combat terrorism, and improve speech recognition programs. 19 The dual-use nature of many AI algorithms will mean AI research focused on one sector of society can be rapidly modified for use in the security sector as well. 20

Health care

AI tools are helping designers improve computational sophistication in health care. For example, Merantix is a German company that applies deep learning to medical issues. It has an application in medical imaging that “detects lymph nodes in the human body in Computer Tomography (CT) images.” 21 According to its developers, the key is labeling the nodes and identifying small lesions or growths that could be problematic. Humans can do this, but radiologists charge $100 per hour and may be able to carefully read only four images an hour. If there were 10,000 images, the cost of this process would be $250,000, which is prohibitively expensive if done by humans.

What deep learning can do in this situation is train computers on data sets to learn what a normal-looking versus an irregular-appearing lymph node is. After doing that through imaging exercises and honing the accuracy of the labeling, radiological imaging specialists can apply this knowledge to actual patients and determine the extent to which someone is at risk of cancerous lymph nodes. Since only a few are likely to test positive, it is a matter of identifying the unhealthy versus healthy node.
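
A compact sketch of that training loop appears below, using PyTorch. It is illustrative only: random tensors stand in for expert-labeled CT patches, and production systems use far larger networks and carefully curated data.

```python
# A compact sketch of the approach described above: train a small
# convolutional network to label image patches as normal vs. irregular.
import torch
import torch.nn as nn

model = nn.Sequential(                     # tiny CNN for 64x64 grayscale patches
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 2),            # two classes: normal, irregular
)

images = torch.randn(64, 1, 64, 64)        # placeholder for labeled CT patches
labels = torch.randint(0, 2, (64,))        # placeholder expert labels

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):                    # honing accuracy against the labels
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```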

AI has been applied to congestive heart failure as well, an illness that afflicts 10 percent of senior citizens and costs $35 billion each year in the United States. AI tools are helpful because they “predict in advance potential challenges ahead and allocate resources to patient education, sensing, and proactive interventions that keep patients out of the hospital.” 22

Criminal justice

AI is being deployed in the criminal justice area. The city of Chicago has developed an AI-driven “Strategic Subject List” that analyzes people who have been arrested for their risk of becoming future perpetrators. It ranks 400,000 people on a scale of 0 to 500, using items such as age, criminal activity, victimization, drug arrest records, and gang affiliation. In looking at the data, analysts found that youth is a strong predictor of violence, being a shooting victim is associated with becoming a future perpetrator, gang affiliation has little predictive value, and drug arrests are not significantly associated with future criminal activity. 23
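
Chicago has not published its model, but purely as an illustration of the mechanics, a 0-to-500 score of this kind can be produced by mapping a predictive model’s probability onto a fixed scale. The features, labels, and fitted weights below are random placeholders with no real-world meaning.

```python
# An illustrative sketch of risk scoring: fit a model, then rescale its
# predicted probability to a 0-500 range. All data here are random
# placeholders, so the fitted weights are meaningless; only the
# probability-to-scale mapping is the point.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Hypothetical features per person: [age, prior arrests, times victimized]
X = rng.integers(0, 50, size=(500, 3)).astype(float)
y = rng.integers(0, 2, size=500)           # placeholder outcome labels

model = LogisticRegression().fit(X, y)

def risk_score(features):
    """Map the model's predicted probability onto a 0-500 scale."""
    p = model.predict_proba([features])[0, 1]
    return round(500 * p)

print(risk_score([20.0, 3.0, 1.0]))
```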

Judicial experts claim AI programs reduce human bias in law enforcement and lead to a fairer sentencing system. R Street Institute Associate Caleb Watney writes:

Empirically grounded questions of predictive risk analysis play to the strengths of machine learning, automated reasoning and other forms of AI. One machine-learning policy simulation concluded that such programs could be used to cut crime up to 24.8 percent with no change in jailing rates, or reduce jail populations by up to 42 percent with no increase in crime rates. 24

However, critics worry that AI algorithms represent “a secret system to punish citizens for crimes they haven’t yet committed. The risk scores have been used numerous times to guide large-scale roundups.” 25 The fear is that such tools target people of color unfairly and have not helped Chicago reduce the murder wave that has plagued it in recent years.

Despite these concerns, other countries are moving ahead with rapid deployment in this area. In China, for example, companies already have “considerable resources and access to voices, faces and other biometric data in vast quantities, which would help them develop their technologies.” 26 New technologies make it possible to match images and voices with other types of information, and to use AI on these combined data sets to improve law enforcement and national security. Through its “Sharp Eyes” program, Chinese law enforcement is matching video images, social media activity, online purchases, travel records, and personal identity into a “police cloud.” This integrated database enables authorities to keep track of criminals, potential law-breakers, and terrorists. 27 Put differently, China has become the world’s leading AI-powered surveillance state.

Transportation

Transportation represents an area where AI and machine learning are producing major innovations. Research by Cameron Kerry and Jack Karsten of the Brookings Institution has found that over $80 billion was invested in autonomous vehicle technology between August 2014 and June 2017. Those investments include applications both for autonomous driving and the core technologies vital to that sector. 28

Autonomous vehicles—cars, trucks, buses, and drone delivery systems—use advanced technological capabilities. Those features include automated vehicle guidance and braking, lane-changing systems, the use of cameras and sensors for collision avoidance, the use of AI to analyze information in real time, and the use of high-performance computing and deep learning systems to adapt to new circumstances through detailed maps. 29

Light detection and ranging systems (LIDARs) and AI are key to navigation and collision avoidance. LIDAR systems combine light and radar instruments. Mounted on top of vehicles, they image the 360-degree environment with radar and light beams that measure the speed and distance of surrounding objects. Along with sensors placed on the front, sides, and back of the vehicle, these instruments provide information that keeps fast-moving cars and trucks in their own lane, helps them avoid other vehicles, applies brakes and steering when needed, and does so instantly so as to avoid accidents.
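
Underneath, the distance measurement is time-of-flight arithmetic: a light pulse travels to an object and back, so distance is half the round trip multiplied by the speed of light. A short sketch follows; the pulse timings are invented.

```python
# The core lidar measurement is time-of-flight arithmetic: a light pulse
# travels to an object and back, so distance is half the round trip.
C = 299_792_458.0  # speed of light, m/s

def distance_m(round_trip_s: float) -> float:
    return C * round_trip_s / 2.0

# Two returns from the same object 0.1 s apart give its closing speed.
d1 = distance_m(4.0e-7)           # ~60 m away
d2 = distance_m(3.8e-7)           # ~57 m away a moment later
closing_speed = (d1 - d2) / 0.1   # m/s toward the sensor
print(f"{d1:.1f} m -> {d2:.1f} m, closing at {closing_speed:.1f} m/s")
```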

Advanced software enables cars to learn from the experiences of other vehicles on the road and adjust their guidance systems as weather, driving, or road conditions change. This means that software is the key—not the physical car or truck itself.

Since these cameras and sensors compile a huge amount of information and need to process it instantly to avoid the car in the next lane, autonomous vehicles require high-performance computing, advanced algorithms, and deep learning systems to adapt to new scenarios. This means that software is the key, not the physical car or truck itself. 30 Advanced software enables cars to learn from the experiences of other vehicles on the road and adjust their guidance systems as weather, driving, or road conditions change. 31

Ride-sharing companies are very interested in autonomous vehicles. They see advantages in terms of customer service and labor productivity. All of the major ride-sharing companies are exploring driverless cars. The surge of car-sharing and taxi services—such as Uber and Lyft in the United States, Daimler’s Mytaxi and Hailo service in Great Britain, and Didi Chuxing in China—demonstrates the opportunities of this transportation option. Uber recently signed an agreement to purchase 24,000 autonomous cars from Volvo for its ride-sharing service. 32

However, the ride-sharing firm suffered a setback in March 2018 when one of its autonomous vehicles in Arizona hit and killed a pedestrian. Uber and several auto manufacturers immediately suspended testing and launched investigations into what went wrong and how the fatality could have occurred. 33 Both industry and consumers want reassurance that the technology is safe and able to deliver on its stated promises. Unless there are persuasive answers, this accident could slow AI advancements in the transportation sector.

Smart cities

Metropolitan governments are using AI to improve urban service delivery. For example, according to Kevin Desouza, Rashmi Krishnamurthy, and Gregory Dawson:

The Cincinnati Fire Department is using data analytics to optimize medical emergency responses. The new analytics system recommends to the dispatcher an appropriate response to a medical emergency call—whether a patient can be treated on-site or needs to be taken to the hospital—by taking into account several factors, such as the type of call, location, weather, and similar calls. 34

Since it fields 80,000 requests each year, Cincinnati officials are deploying this technology to prioritize responses and determine the best ways to handle emergencies. They see AI as a way to deal with large volumes of data and figure out efficient ways of responding to public requests. Rather than address service issues in an ad hoc manner, authorities are trying to be proactive in how they provide urban services.

Cincinnati is not alone. A number of metropolitan areas are adopting smart city applications that use AI to improve service delivery, environmental planning, resource management, energy utilization, and crime prevention, among other things. For its smart cities index, the magazine Fast Company ranked American locales and found Seattle, Boston, San Francisco, Washington, D.C., and New York City as the top adopters. Seattle, for example, has embraced sustainability and is using AI to manage energy usage and resource management. Boston has launched a “City Hall To Go” that makes sure underserved communities receive needed public services. It also has deployed “cameras and inductive loops to manage traffic and acoustic sensors to identify gun shots.” San Francisco has certified 203 buildings as meeting LEED sustainability standards. 35

Through these and other means, metropolitan areas are leading the country in the deployment of AI solutions. Indeed, according to a National League of Cities report, 66 percent of American cities are investing in smart city technology. Among the top applications noted in the report are “smart meters for utilities, intelligent traffic signals, e-governance applications, Wi-Fi kiosks, and radio frequency identification sensors in pavement.” 36

Policy, regulatory, and ethical issues

These examples from a variety of sectors demonstrate how AI is transforming many walks of human existence. The increasing penetration of AI and autonomous devices into many aspects of life is altering basic operations and decisionmaking within organizations, and improving efficiency and response times.

At the same time, though, these developments raise important policy, regulatory, and ethical issues. For example, how should we promote data access? How do we guard against biased or unfair data used in algorithms? What types of ethical principles are introduced through software programming, and how transparent should designers be about their choices? What about questions of legal liability in cases where algorithms cause harm? 37

The increasing penetration of AI into many aspects of life is altering decisionmaking within organizations and improving efficiency. At the same time, though, these developments raise important policy, regulatory, and ethical issues.

Data access problems

The key to getting the most out of AI is having a “data-friendly ecosystem with unified standards and cross-platform sharing.” AI depends on data that can be analyzed in real time and brought to bear on concrete problems. Having data that are “accessible for exploration” in the research community is a prerequisite for successful AI development. 38

According to a McKinsey Global Institute study, nations that promote open data sources and data sharing are the ones most likely to see AI advances. In this regard, the United States has a substantial advantage over China. Global ratings on data openness show that the U.S. ranks eighth overall in the world, compared to 93rd for China. 39

But right now, the United States does not have a coherent national data strategy. There are few protocols for promoting research access or platforms that make it possible to gain new insights from proprietary data. It is not always clear who owns data or how much belongs in the public sphere. These uncertainties limit the innovation economy and act as a drag on academic research. In the following section, we outline ways to improve data access for researchers.

Biases in data and algorithms

In some instances, certain AI systems are thought to have enabled discriminatory or biased practices. 40 For example, Airbnb has been accused of having homeowners on its platform who discriminate against racial minorities. A research project undertaken by the Harvard Business School found that “Airbnb users with distinctly African American names were roughly 16 percent less likely to be accepted as guests than those with distinctly white names.” 41

Racial issues also come up with facial recognition software. Most such systems operate by comparing a person’s face to a range of faces in a large database. As pointed out by Joy Buolamwini of the Algorithmic Justice League, “If your facial recognition data contains mostly Caucasian faces, that’s what your program will learn to recognize.” 42 Unless the databases have access to diverse data, these programs perform poorly when attempting to recognize African-American or Asian-American features.
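
Mechanically, most modern systems encode each face as a numeric vector (an “embedding”) and match a probe face to its nearest neighbor in the database. A hedged sketch follows, with random vectors standing in for real embeddings.

```python
# A hedged sketch of how face matching commonly works: each face is encoded
# as a numeric embedding, and a probe face is matched to its nearest
# neighbor in the database. Random vectors stand in for real embeddings.
import numpy as np

rng = np.random.default_rng(3)
database = rng.normal(size=(10_000, 128))            # enrolled face embeddings
database /= np.linalg.norm(database, axis=1, keepdims=True)

def best_match(probe: np.ndarray) -> tuple[int, float]:
    """Return index and cosine similarity of the closest enrolled face."""
    probe = probe / np.linalg.norm(probe)
    sims = database @ probe
    idx = int(np.argmax(sims))
    return idx, float(sims[idx])

probe = rng.normal(size=128)
idx, sim = best_match(probe)
print(f"closest enrolled face: {idx} (cosine similarity {sim:.2f})")
```

If the enrolled or training faces over-represent one group, these similarity scores become systematically less reliable for everyone else, which is exactly the failure mode Buolamwini describes.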

Many historical data sets reflect traditional values, which may or may not represent the preferences wanted in a current system. As Buolamwini notes, such an approach risks repeating inequities of the past:

The rise of automation and the increased reliance on algorithms for high-stakes decisions such as whether someone get insurance or not, your likelihood to default on a loan or somebody’s risk of recidivism means this is something that needs to be addressed. Even admissions decisions are increasingly automated—what school our children go to and what opportunities they have. We don’t have to bring the structural inequalities of the past into the future we create. 43

AI ethics and transparency

Algorithms embed ethical considerations and value choices into program decisions. As such, these systems raise questions concerning the criteria used in automated decisionmaking. Some people want to have a better understanding of how algorithms function and what choices are being made. 44

In the United States, many urban schools use algorithms for enrollment decisions based on a variety of considerations, such as parent preferences, neighborhood qualities, income level, and demographic background. According to Brookings researcher Jon Valant, the New Orleans–based Bricolage Academy “gives priority to economically disadvantaged applicants for up to 33 percent of available seats. In practice, though, most cities have opted for categories that prioritize siblings of current students, children of school employees, and families that live in school’s broad geographic area.” 45 Enrollment choices can be expected to be very different when considerations of this sort come into play.

Depending on how AI systems are set up, they can facilitate the redlining of mortgage applications, help people discriminate against individuals they don’t like, or help screen or build rosters of individuals based on unfair criteria. The types of considerations that go into programming decisions matter a lot in terms of how the systems operate and how they affect customers. 46

For these reasons, the EU is implementing the General Data Protection Regulation (GDPR) in May 2018. The rules specify that people have “the right to opt out of personally tailored ads” and “can contest ‘legal or similarly significant’ decisions made by algorithms and appeal for human intervention” in the form of an explanation of how the algorithm generated a particular outcome. Each guideline is designed to ensure the protection of personal data and provide individuals with information on how the “black box” operates. 47

Legal liability

There are questions concerning the legal liability of AI systems. If there are harms or infractions (or fatalities in the case of driverless cars), the operators of the algorithm likely will fall under product liability rules. A body of case law has shown that the situation’s facts and circumstances determine liability and influence the kind of penalties that are imposed. Those can range from civil fines to imprisonment for major harms. 48 The Uber-related fatality in Arizona will be an important test case for legal liability. The state actively recruited Uber to test its autonomous vehicles and gave the company considerable latitude in terms of road testing. It remains to be seen if there will be lawsuits in this case and who is sued: the human backup driver, the state of Arizona, the Phoenix suburb where the accident took place, Uber, software developers, or the auto manufacturer. Given the multiple people and organizations involved in the road testing, there are many legal questions to be resolved.

In non-transportation areas, digital platforms often have limited liability for what happens on their sites. For example, in the case of Airbnb, the firm “requires that people agree to waive their right to sue, or to join in any class-action lawsuit or class-action arbitration, to use the service.” By demanding that its users sacrifice basic rights, the company limits consumer protections and therefore curtails the ability of people to fight discrimination arising from unfair algorithms. 49 But whether the principle of neutral networks holds up in many sectors is yet to be determined on a widespread basis.

Recommendations

In order to balance innovation with basic human values, we propose a number of recommendations for moving forward with AI. This includes improving data access, increasing government investment in AI, promoting AI workforce development, creating a federal advisory committee, engaging with state and local officials to ensure they enact effective policies, regulating broad objectives as opposed to specific algorithms, taking bias seriously as an AI issue, maintaining mechanisms for human control and oversight, and penalizing malicious behavior and promoting cybersecurity.

Improve data access

The United States should develop a data strategy that promotes innovation and consumer protection. Right now, there are no uniform standards in terms of data access, data sharing, or data protection. Almost all the data are proprietary in nature and not shared very broadly with the research community, and this limits innovation and system design. AI requires data to test and improve its learning capacity. 50 Without structured and unstructured data sets, it will be nearly impossible to gain the full benefits of artificial intelligence.

In general, the research community needs better access to government and business data, although with appropriate safeguards to make sure researchers do not misuse data in the way Cambridge Analytica did with Facebook information. There are a variety of ways researchers could gain data access. One is through voluntary agreements with companies holding proprietary data. Facebook, for example, recently announced a partnership with Stanford economist Raj Chetty to use its social media data to explore inequality. 51 As part of the arrangement, researchers were required to undergo background checks and could only access data from secured sites in order to protect user privacy and security.

In the U.S., there are no uniform standards in terms of data access, data sharing, or data protection. Almost all the data are proprietary in nature and not shared very broadly with the research community, and this limits innovation and system design.

Google long has made available search results in aggregated form for researchers and the general public. Through its “Trends” site, scholars can analyze topics such as interest in Trump, views about democracy, and perspectives on the overall economy. 52 That helps people track movements in public interest and identify topics that galvanize the general public.
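
For instance, the unofficial pytrends library wraps the Trends site for programmatic analysis. A minimal sketch, assuming the library is installed and the site’s informal interface has not changed:

```python
# A minimal sketch of querying Google Trends programmatically via the
# unofficial pytrends library (pip install pytrends). The library wraps
# an informal interface, so details may change over time.
from pytrends.request import TrendReq

pytrends = TrendReq()
pytrends.build_payload(["democracy"], timeframe="today 5-y")

interest = pytrends.interest_over_time()  # pandas DataFrame, 0-100 scale
print(interest.tail())                    # recent search-interest values
```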

Twitter makes much of its tweets available to researchers through application programming interfaces, commonly referred to as APIs. These tools help people outside the company build application software and make use of data from its social media platform. They can study patterns of social media communications and see how people are commenting on or reacting to current events.
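
A minimal sketch of that kind of access, querying Twitter’s v1.1 search endpoint with the requests library; the bearer token is a placeholder credential, and the query term is arbitrary.

```python
# A minimal sketch of research access through an API: query Twitter's
# v1.1 search endpoint for recent tweets on a topic. Assumes a valid
# app-only bearer token; BEARER_TOKEN below is a placeholder.
import requests

BEARER_TOKEN = "YOUR-TOKEN-HERE"  # placeholder credential

resp = requests.get(
    "https://api.twitter.com/1.1/search/tweets.json",
    headers={"Authorization": f"Bearer {BEARER_TOKEN}"},
    params={"q": "artificial intelligence", "count": 100},
)
resp.raise_for_status()

for tweet in resp.json()["statuses"]:
    print(tweet["created_at"], tweet["text"][:80])
```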

In some sectors where there is a discernible public benefit, governments can facilitate collaboration by building infrastructure that shares data. For example, the National Cancer Institute has pioneered a data-sharing protocol under which certified researchers can query the health data it holds, using de-identified information drawn from clinical data, claims information, and drug therapies. That enables researchers to evaluate efficacy and effectiveness, and to make recommendations regarding the best medical approaches, without compromising the privacy of individual patients.
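
A toy sketch of the de-identification step appears below: direct identifiers are dropped and record keys are replaced before data reach researchers. Real de-identification (for example, under HIPAA’s Safe Harbor rules) removes many more fields and guards against re-identification; this only shows the shape of the idea.

```python
# A toy sketch of de-identification: strip direct identifiers and replace
# the record key before researchers can query the data. All field names
# and values here are hypothetical.
import hashlib

DIRECT_IDENTIFIERS = {"name", "address", "phone", "birth_date"}
SALT = b"rotate-me"  # placeholder secret to prevent trivial re-hashing

def deidentify(record: dict) -> dict:
    cleaned = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    key = str(record["patient_id"]).encode()
    cleaned["patient_id"] = hashlib.sha256(SALT + key).hexdigest()[:12]
    return cleaned

print(deidentify({"patient_id": "12345", "name": "Jane Doe",
                  "diagnosis": "C81.1", "therapy": "ABVD"}))
```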

There could be public-private data partnerships that combine government and business data sets to improve system performance. For example, cities could integrate information from ride-sharing services with their own material on social service locations, bus lines, mass transit, and highway congestion to improve transportation. That would help metropolitan areas deal with traffic tie-ups and assist in highway and mass transit planning.

Some combination of these approaches would improve data access for researchers, the government, and the business community, without impinging on personal privacy. As noted by Ian Buck, the vice president of NVIDIA, “Data is the fuel that drives the AI engine. The federal government has access to vast sources of information. Opening access to that data will help us get insights that will transform the U.S. economy.” 53 Through its Data.gov portal, the federal government already has put over 230,000 data sets into the public domain, and this has propelled innovation and aided improvements in AI and data analytic technologies. 54 The private sector also needs to facilitate research data access so that society can achieve the full benefits of artificial intelligence.

Increase government investment in AI

According to Greg Brockman, the co-founder of OpenAI, the U.S. federal government invests only $1.1 billion in non-classified AI technology. 55 That is far lower than the amount being spent by China or other leading nations in this area of research. That shortfall is noteworthy because the economic payoffs of AI are substantial. In order to boost economic development and social innovation, federal officials need to increase investment in artificial intelligence and data analytics. Higher investment is likely to pay for itself many times over in economic and social benefits. 56

Promote digital education and workforce development

As AI applications accelerate across many sectors, it is vital that we reimagine our educational institutions for a world where AI will be ubiquitous and students need a different kind of training than they currently receive. Right now, many students do not receive instruction in the kinds of skills that will be needed in an AI-dominated landscape. For example, there currently are shortages of data scientists, computer scientists, engineers, coders, and platform developers. Unless our educational system generates more people with these capabilities, shortages of this kind will limit AI development.

For these reasons, both state and federal governments have been investing in AI human capital. For example, in 2017, the National Science Foundation funded over 6,500 graduate students in computer-related fields and has launched several new initiatives designed to encourage data and computer science at all levels from pre-K to higher and continuing education. 57 The goal is to build a larger pipeline of AI and data analytic personnel so that the United States can reap the full advantages of the knowledge revolution.

But there also needs to be substantial changes in the process of learning itself. It is not just technical skills that are needed in an AI world but skills of critical reasoning, collaboration, design, visual display of information, and independent thinking, among others. AI will reconfigure how society and the economy operate, and there needs to be “big picture” thinking on what this will mean for ethics, governance, and societal impact. People will need the ability to think broadly about many questions and integrate knowledge from a number of different areas.

One example of new ways to prepare students for a digital future is IBM’s Teacher Advisor program, utilizing Watson’s free online tools to help teachers bring the latest knowledge into the classroom. They enable instructors to develop new lesson plans in STEM and non-STEM fields, find relevant instructional videos, and help students get the most out of the classroom. 58 As such, they are precursors of new educational environments that need to be created.

Create a federal AI advisory committee

Federal officials need to think about how they deal with artificial intelligence. As noted previously, there are many issues ranging from the need for improved data access to addressing issues of bias and discrimination. It is vital that these and other concerns be considered so we gain the full benefits of this emerging technology.

In order to move forward in this area, several members of Congress have introduced the “Future of Artificial Intelligence Act,” a bill designed to establish broad policy and legal principles for AI. It proposes the secretary of commerce create a federal advisory committee on the development and implementation of artificial intelligence. The legislation provides a mechanism for the federal government to get advice on ways to promote a “climate of investment and innovation to ensure the global competitiveness of the United States,” “optimize the development of artificial intelligence to address the potential growth, restructuring, or other changes in the United States workforce,” “support the unbiased development and application of artificial intelligence,” and “protect the privacy rights of individuals.” 59

Among the specific questions the committee is asked to address are the following: competitiveness, workforce impact, education, ethics training, data sharing, international cooperation, accountability, machine learning bias, rural impact, government efficiency, investment climate, job impact, bias, and consumer impact. The committee is directed to submit a report to Congress and the administration 540 days after enactment regarding any legislative or administrative action needed on AI.

This legislation is a step in the right direction, although the field is moving so rapidly that we would recommend shortening the reporting timeline from 540 days to 180 days. Waiting nearly two years for a committee report will certainly result in missed opportunities and a lack of action on important issues. Given rapid advances in the field, having a much quicker turnaround time on the committee analysis would be quite beneficial.

Engage with state and local officials

States and localities also are taking action on AI. For example, the New York City Council unanimously passed a bill that directed the mayor to form a task force that would “monitor the fairness and validity of algorithms used by municipal agencies.” 60 The city employs algorithms to “determine if a lower bail will be assigned to an indigent defendant, where firehouses are established, student placement for public schools, assessing teacher performance, identifying Medicaid fraud and determine where crime will happen next.” 61

According to the legislation’s developers, city officials want to know how these algorithms work and make sure there is sufficient AI transparency and accountability. In addition, there is concern regarding the fairness and biases of AI algorithms, so the task force has been directed to analyze these issues and make recommendations regarding future usage. It is scheduled to report back to the mayor on a range of AI policy, legal, and regulatory issues by late 2019.

Some observers already are worrying that the task force won’t go far enough in holding algorithms accountable. For example, Julia Powles of Cornell Tech and New York University argues that the bill originally required companies to make the AI source code available to the public for inspection, and that there be simulations of its decisionmaking using actual data. After criticism of those provisions, however, former Councilman James Vacca dropped the requirements in favor of a task force studying these issues. He and other city officials were concerned that publication of proprietary information on algorithms would slow innovation and make it difficult to find AI vendors who would work with the city. 62 It remains to be seen how this local task force will balance issues of innovation, privacy, and transparency.

Regulate broad objectives more than specific algorithms

The European Union has taken a restrictive stance on these issues of data collection and analysis. 63 It has rules limiting the ability of companies to collect data on road conditions and map street views. Because many of these countries worry that people’s personal information in unencrypted Wi-Fi networks is swept up in overall data collection, the EU has fined technology firms, demanded copies of data, and placed limits on the material collected. 64 This has made it more difficult for technology companies operating there to develop the high-definition maps required for autonomous vehicles.

The GDPR being implemented in Europe places severe restrictions on the use of artificial intelligence and machine learning. According to published guidelines, “Regulations prohibit any automated decision that ‘significantly affects’ EU citizens. This includes techniques that evaluates a person’s ‘performance at work, economic situation, health, personal preferences, interests, reliability, behavior, location, or movements.’” 65 In addition, these new rules give citizens the right to review how digital services made the specific algorithmic choices that affect them.

By taking a restrictive stance on issues of data collection and analysis, the European Union is putting its manufacturers and software designers at a significant disadvantage to the rest of the world.

If interpreted stringently, these rules will make it difficult for European software designers (and American designers who work with European counterparts) to incorporate artificial intelligence and high-definition mapping in autonomous vehicles. Central to navigation in these cars and trucks is tracking location and movements. Without high-definition maps containing geo-coded data and the deep learning that makes use of this information, fully autonomous driving will stagnate in Europe. Through this and other data protection actions, the European Union is putting its manufacturers and software designers at a significant disadvantage to the rest of the world.

It makes more sense to think about the broad objectives desired in AI and enact policies that advance them, as opposed to governments trying to crack open the “black boxes” and see exactly how specific algorithms operate. Regulating individual algorithms will limit innovation and make it difficult for companies to make use of artificial intelligence.

Take biases seriously

Bias and discrimination are serious issues for AI. There already have been a number of cases of unfair treatment linked to historic data, and steps need to be undertaken to make sure that does not become prevalent in artificial intelligence. Existing statutes governing discrimination in the physical economy need to be extended to digital platforms. That will help protect consumers and build confidence in these systems as a whole.

For these advances to be widely adopted, more transparency is needed in how AI systems operate. Andrew Burt of Immuta argues, “The key problem confronting predictive analytics is really transparency. We’re in a world where data science operations are taking on increasingly important tasks, and the only thing holding them back is going to be how well the data scientists who train the models can explain what it is their models are doing.” 66

Maintain mechanisms for human oversight and control

Some individuals have argued that there need to be avenues for humans to exercise oversight and control of AI systems. For example, Allen Institute for Artificial Intelligence CEO Oren Etzioni argues there should be rules for regulating these systems. First, he says, AI must be governed by all the laws that already have been developed for human behavior, including regulations concerning “cyberbullying, stock manipulation or terrorist threats,” as well as “entrap[ping] people into committing crimes.” Second, he believes that these systems should disclose they are automated systems and not human beings. Third, he states, “An A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.” 67 His rationale is that these tools store so much data that people have to be cognizant of the privacy risks posed by AI.

In the same vein, the IEEE Global Initiative has ethical guidelines for AI and autonomous systems. Its experts suggest that these models be programmed with consideration for widely accepted human norms and rules for behavior. AI algorithms need to take into account the importance of these norms, how norm conflict can be resolved, and ways these systems can be transparent about norm resolution. Software designs should be programmed for “nondeception” and “honesty,” according to ethics experts. When failures occur, there must be mitigation mechanisms to deal with the consequences. In particular, AI must be sensitive to problems such as bias, discrimination, and fairness. 68

A group of machine learning experts claim it is possible to automate ethical decisionmaking. Using the trolley problem as a moral dilemma, they ask the following question: If an autonomous car goes out of control, should it be programmed to kill its own passengers or the pedestrians who are crossing the street? They devised a “voting-based system” that asked 1.3 million people to assess alternative scenarios, summarized the overall choices, and applied the overall perspective of these individuals to a range of vehicular possibilities. That allowed them to automate ethical decisionmaking in AI algorithms, taking public preferences into account. 69 This procedure, of course, does not reduce the tragedy involved in any kind of fatality, such as seen in the Uber case, but it provides a mechanism to help AI developers incorporate ethical considerations in their planning.
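
A stripped-down sketch of the voting idea follows. The published system is more sophisticated (it learns a preference model per voter and aggregates those models); this sketch keeps only the tallying step, and the scenario names and votes are invented.

```python
# A stripped-down sketch of voting-based ethical decisionmaking: tally
# respondents' choices per scenario and adopt the majority option.
# Scenario names and votes below are invented placeholders.
from collections import Counter

# Each vote: (scenario, chosen action)
votes = [
    ("swerve_vs_stay", "protect_pedestrians"),
    ("swerve_vs_stay", "protect_pedestrians"),
    ("swerve_vs_stay", "protect_passengers"),
    ("brake_vs_swerve", "brake"),
    ("brake_vs_swerve", "brake"),
]

tallies = {}
for scenario, action in votes:
    tallies.setdefault(scenario, Counter())[action] += 1

policy = {scenario: counter.most_common(1)[0][0]
          for scenario, counter in tallies.items()}   # majority choice

print(policy)  # the aggregated "ethical policy" a vehicle would consult
```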

Penalize malicious behavior and promote cybersecurity

As with any emerging technology, it is important to discourage malicious treatment designed to trick software or use it for undesirable ends. 70 This is especially important given the dual-use aspects of AI, where the same tool can be used for beneficial or malicious purposes. The malevolent use of AI exposes individuals and organizations to unnecessary risks and undermines the virtues of the emerging technology. This includes behaviors such as hacking, manipulating algorithms, compromising privacy and confidentiality, or stealing identities. Efforts to hijack AI in order to solicit confidential information should be seriously penalized as a way to deter such actions. 71

In a rapidly changing world with many entities having advanced computing capabilities, there needs to be serious attention devoted to cybersecurity. Countries have to be careful to safeguard their own systems and keep other nations from damaging their security. 72 According to the U.S. Department of Homeland Security, a major American bank receives around 11 million calls a week at its service center. In order to protect its telephony from denial of service attacks, it uses a “machine learning-based policy engine [that] blocks more than 120,000 calls per month based on voice firewall policies including harassing callers, robocalls and potential fraudulent calls.” 73 This represents a way in which machine learning can help defend technology systems from malevolent attacks.
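
A schematic sketch of such a policy engine appears below: fixed firewall rules (blocklists, rate limits) are combined with a model score, and calls crossing the threshold are blocked. Everything here, from the numbers to the helper functions, is hypothetical rather than drawn from the bank’s actual system.

```python
# A schematic sketch of a call-screening policy engine: fixed firewall
# rules plus a model score decide whether a call is let through.
# All thresholds, numbers, and helpers here are hypothetical.
BLOCKLIST = {"+15550100", "+15550101"}   # known harassing callers
RATE_LIMIT = 50                          # calls/hour suggesting a robocaller

def fraud_score(caller_id: str, calls_last_hour: int) -> float:
    """Placeholder for a trained model scoring fraud likelihood (0-1)."""
    return min(1.0, calls_last_hour / 200)

def allow_call(caller_id: str, calls_last_hour: int) -> bool:
    if caller_id in BLOCKLIST:
        return False                     # firewall policy: harassing caller
    if calls_last_hour > RATE_LIMIT:
        return False                     # firewall policy: robocall pattern
    return fraud_score(caller_id, calls_last_hour) < 0.8  # model-based check

print(allow_call("+15550199", calls_last_hour=3))    # True: routine call
print(allow_call("+15550100", calls_last_hour=1))    # False: blocklisted
```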

Conclusion

To summarize, the world is on the cusp of revolutionizing many sectors through artificial intelligence and data analytics. There already are significant deployments in finance, national security, health care, criminal justice, transportation, and smart cities that have altered decisionmaking, business models, risk mitigation, and system performance. These developments are generating substantial economic and social benefits.

The world is on the cusp of revolutionizing many sectors through artificial intelligence, but the way AI systems are developed needs to be better understood due to the major implications these technologies will have for society as a whole.

Yet the manner in which AI systems unfold has major implications for society as a whole. It matters how policy issues are addressed, ethical conflicts are reconciled, legal realities are resolved, and how much transparency is required in AI and data analytic solutions. 74 Human choices about software development affect the way in which decisions are made and the manner in which they are integrated into organizational routines. Exactly how these processes are executed needs to be better understood because they will have a substantial impact on the general public soon, and for the foreseeable future. AI may well be a revolution in human affairs, and become the single most influential human innovation in history.

Note: We appreciate the research assistance of Grace Gilberg, Jack Karsten, Hillary Schaub, and Kristjan Tomasson on this project.

The Brookings Institution is a nonprofit organization devoted to independent research and policy solutions. Its mission is to conduct high-quality, independent research and, based on that research, to provide innovative, practical recommendations for policymakers and the public. The conclusions and recommendations of any Brookings publication are solely those of its author(s), and do not reflect the views of the Institution, its management, or its other scholars.

Support for this publication was generously provided by Amazon. Brookings recognizes that the value it provides is in its absolute commitment to quality, independence, and impact. Activities supported by its donors reflect this commitment. 

John R. Allen is a member of the Board of Advisors of Amida Technology and on the Board of Directors of Spark Cognition. Both companies work in fields discussed in this piece.

  • Thomas Davenport, Jeff Loucks, and David Schatsky, “Bullish on the Business Value of Cognitive” (Deloitte, 2017), p. 3 (www2.deloitte.com/us/en/pages/deloitte-analytics/articles/cognitive-technology-adoption-survey.html).
  • Luke Dormehl, Thinking Machines: The Quest for Artificial Intelligence—and Where It’s Taking Us Next (New York: Penguin–TarcherPerigee, 2017).
  • Shubhendu and Vijay, “Applicability of Artificial Intelligence in Different Fields of Life.”
  • Andrew McAfee and Erik Brynjolfsson, Machine Platform Crowd: Harnessing Our Digital Future (New York: Norton, 2017).
  • Portions of this paper draw on Darrell M. West, The Future of Work: Robots, AI, and Automation (Brookings Institution Press, 2018).
  • PriceWaterhouseCoopers, “Sizing the Prize: What’s the Real Value of AI for Your Business and How Can You Capitalise?” 2017.
  • Dominic Barton, Jonathan Woetzel, Jeongmin Seong, and Qinzheng Tian, “Artificial Intelligence: Implications for China” (New York: McKinsey Global Institute, April 2017), p. 1.
  • Nathaniel Popper, “Stocks and Bots,” New York Times Magazine, February 28, 2016.
  • Michael Lewis, Flash Boys: A Wall Street Revolt (New York: Norton, 2015).
  • Cade Metz, “In Quantum Computing Race, Yale Professors Battle Tech Giants,” New York Times, November 14, 2017, p. B3.
  • Executive Office of the President, “Artificial Intelligence, Automation, and the Economy,” December 2016, pp. 27-28.
  • Christian Davenport, “Future Wars May Depend as Much on Algorithms as on Ammunition, Report Says,” Washington Post, December 3, 2017.
  • John R. Allen and Amir Husain, “On Hyperwar,” Naval Institute Proceedings, July 17, 2017, pp. 30-36.
  • Paul Mozur, “China Sets Goal to Lead in Artificial Intelligence,” New York Times, July 21, 2017, p. B1.
  • Paul Mozur and John Markoff, “Is China Outsmarting American Artificial Intelligence?” New York Times, May 28, 2017.
  • Economist, “America v China: The Battle for Digital Supremacy,” March 15, 2018.
  • Rasmus Rothe, “Applying Deep Learning to Real-World Problems,” Medium, May 23, 2017.
  • Eric Horvitz, “Reflections on the Status and Future of Artificial Intelligence,” Testimony before the U.S. Senate Subcommittee on Space, Science, and Competitiveness, November 30, 2016, p. 5.
  • Jeff Asher and Rob Arthur, “Inside the Algorithm That Tries to Predict Gun Violence in Chicago,” New York Times Upshot, June 13, 2017.
  • Caleb Watney, “It’s Time for our Justice System to Embrace Artificial Intelligence,” TechTank (blog), Brookings Institution, July 20, 2017.
  • Asher and Arthur, “Inside the Algorithm That Tries to Predict Gun Violence in Chicago.”
  • Paul Mozur and Keith Bradsher, “China’s A.I. Advances Help Its Tech Industry, and State Security,” New York Times, December 3, 2017.
  • Simon Denyer, “China’s Watchful Eye,” Washington Post, January 7, 2018.
  • Cameron Kerry and Jack Karsten, “Gauging Investment in Self-Driving Cars,” Brookings Institution, October 16, 2017.
  • Portions of this section are drawn from Darrell M. West, “Driverless Cars in China, Europe, Japan, Korea, and the United States,” Brookings Institution, September 2016.
  • Yuming Ge, Xiaoman Liu, Libo Tang, and Darrell M. West, “Smart Transportation in China and the United States,” Center for Technology Innovation, Brookings Institution, December 2017.
  • Peter Holley, “Uber Signs Deal to Buy 24,000 Autonomous Vehicles from Volvo,” Washington Post, November 20, 2017.
  • Daisuke Wakabayashi, “Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam,” New York Times, March 19, 2018.
  • Kevin Desouza, Rashmi Krishnamurthy, and Gregory Dawson, “Learning from Public Sector Experimentation with Artificial Intelligence,” TechTank (blog), Brookings Institution, June 23, 2017.
  • Boyd Cohen, “The 10 Smartest Cities in North America,” Fast Company, November 14, 2013.
  • Teena Maddox, “66% of US Cities Are Investing in Smart City Technology,” TechRepublic, November 6, 2017.
  • Osonde Osoba and William Welser IV, “The Risks of Artificial Intelligence to Security and the Future of Work” (Santa Monica, Calif.: RAND Corp., December 2017) (www.rand.org/pubs/perspectives/PE237.html).
  • Ibid., p. 7.
  • Dominic Barton, Jonathan Woetzel, Jeongmin Seong, and Qinzheng Tian, “Artificial Intelligence: Implications for China” (New York: McKinsey Global Institute, April 2017), p. 7.
  • Executive Office of the President, “Preparing for the Future of Artificial Intelligence,” October 2016, pp. 30-31.
  • Elaine Glusac, “As Airbnb Grows, So Do Claims of Discrimination,” New York Times, June 21, 2016.
  • “Joy Buolamwini,” Bloomberg Businessweek, July 3, 2017, p. 80.
  • Mark Purdy and Paul Daugherty, “Why Artificial Intelligence is the Future of Growth,” Accenture, 2016.
  • Jon Valant, “Integrating Charter Schools and Choice-Based Education Systems,” Brown Center Chalkboard (blog), Brookings Institution, June 23, 2017.
  • Tucker, “‘A White Mask Worked Better.’”
  • Cliff Kuang, “Can A.I. Be Taught to Explain Itself?” New York Times Magazine, November 21, 2017.
  • Yale Law School Information Society Project, “Governing Machine Learning,” September 2017.
  • Katie Benner, “Airbnb Vows to Fight Racism, But Its Users Can’t Sue to Prompt Fairness,” New York Times, June 19, 2016.
  • Executive Office of the President, “Artificial Intelligence, Automation, and the Economy” and “Preparing for the Future of Artificial Intelligence.”
  • Nancy Scola, “Facebook’s Next Project: American Inequality,” Politico, February 19, 2018.
  • Darrell M. West, “What Internet Search Data Reveals about Donald Trump’s First Year in Office,” Brookings Institution policy report, January 17, 2018.
  • Ian Buck, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” February 14, 2018.
  • Keith Nakasone, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” March 7, 2018.
  • Greg Brockman, “The Dawn of Artificial Intelligence,” Testimony before the U.S. Senate Subcommittee on Space, Science, and Competitiveness, November 30, 2016.
  • Amir Khosrowshahi, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” February 14, 2018.
  • James Kurose, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” March 7, 2018.
  • Stephen Noonoo, “Teachers Can Now Use IBM’s Watson to Search for Free Lesson Plans,” EdSurge, September 13, 2017.
  • Congress.gov, “H.R. 4625, FUTURE of Artificial Intelligence Act of 2017,” December 12, 2017.
  • Elizabeth Zima, “Could New York City’s AI Transparency Bill Be a Model for the Country?” Government Technology, January 4, 2018.
  • Julia Powles, “New York City’s Bold, Flawed Attempt to Make Algorithms Accountable,” New Yorker, December 20, 2017.
  • Sheera Frenkel, “Tech Giants Brace for Europe’s New Data Privacy Rules,” New York Times, January 28, 2018.
  • Claire Miller and Kevin O’Brien, “Germany’s Complicated Relationship with Google Street View,” New York Times, April 23, 2013.
  • Cade Metz, “Artificial Intelligence is Setting Up the Internet for a Huge Clash with Europe,” Wired, July 11, 2016.
  • Eric Siegel, “Predictive Analytics Interview Series: Andrew Burt,” Predictive Analytics Times, June 14, 2017.
  • Oren Etzioni, “How to Regulate Artificial Intelligence,” New York Times, September 1, 2017.
  • IEEE Global Initiative, “Ethical Considerations in Artificial Intelligence and Autonomous Systems,” unpublished paper, 2018.
  • Ritesh Noothigattu, Snehalkumar Gaikwad, Edmond Awad, Sohan Dsouza, Iyad Rahwan, Pradeep Ravikumar, and Ariel Procaccia, “A Voting-Based System for Ethical Decision Making,” Computers and Society, September 20, 2017 (www.media.mit.edu/publications/a-voting-based-system-for-ethical-decision-making/).
  • Miles Brundage, et al., “The Malicious Use of Artificial Intelligence,” University of Oxford unpublished paper, February 2018.
  • John Markoff, “As Artificial Intelligence Evolves, So Does Its Criminal Potential,” New York Times, October 24, 2016, p. B3.
  • Economist, “The Challenger: Technopolitics,” March 17, 2018.
  • Douglas Maughan, “Testimony before the House Committee on Oversight and Government Reform Subcommittee on Information Technology,” March 7, 2018.
  • Levi Tillemann and Colin McCormick, “Roadmapping a U.S.-German Agenda for Artificial Intelligence Policy,” New American Foundation, March 2017.


The present and future of AI

Finale Doshi-Velez on how AI is shaping our lives and how we can shape AI.

Finale Doshi-Velez, the John L. Loeb Professor of Engineering and Applied Sciences. (Photo courtesy of Eliza Grinnell/Harvard SEAS)

How has artificial intelligence changed and shaped our world over the last five years? How will AI continue to impact our lives in the coming years? Those were the questions addressed in the most recent report from the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted at Stanford University that will study the status of AI technology and its impacts on the world over the next 100 years.

The 2021 report is the second in a series that will be released every five years until 2116. Titled “Gathering Strength, Gathering Storms,” the report explores the various ways AI is increasingly touching people’s lives in settings that range from movie recommendations and voice assistants to autonomous driving and automated medical diagnoses.

Barbara Grosz, the Higgins Research Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), is a member of the standing committee overseeing the AI100 project, and Finale Doshi-Velez, Gordon McKay Professor of Computer Science, is part of the panel of interdisciplinary researchers who wrote this year’s report.

We spoke with Doshi-Velez about the report, what it says about the role AI is currently playing in our lives, and how it will change in the future.  

Q: Let's start with a snapshot: What is the current state of AI and its potential?

Doshi-Velez: Some of the biggest changes in the last five years have been how well AIs now perform in large data regimes on specific types of tasks.  We've seen [DeepMind’s] AlphaZero become the best Go player entirely through self-play, and everyday uses of AI such as grammar checks and autocomplete, automatic personal photo organization and search, and speech recognition become commonplace for large numbers of people.  

In terms of potential, I'm most excited about AIs that might augment and assist people.  They can be used to drive insights in drug discovery, help with decision making such as identifying a menu of likely treatment options for patients, and provide basic assistance, such as lane keeping while driving or text-to-speech based on images from a phone for the visually impaired.  In many situations, people and AIs have complementary strengths. I think we're getting closer to unlocking the potential of people and AI teams.


Q: Over the course of 100 years, these reports will tell the story of AI and its evolving role in society. Even though there have only been two reports, what's the story so far?

There's actually a lot of change even in five years.  The first report is fairly rosy.  For example, it mentions how algorithmic risk assessments may mitigate the human biases of judges.  The second has a much more mixed view.  I think this comes from the fact that as AI tools have come into the mainstream — both in higher-stakes and everyday settings — we are appropriately much less willing to tolerate flaws, especially discriminatory ones. There have also been questions of information and disinformation control as people get their news, social media, and entertainment via searches and rankings personalized to them. So, there's a much greater recognition that we should not be waiting for AI tools to become mainstream before making sure they are ethical.

Q: What is the responsibility of institutes of higher education in preparing students and the next generation of computer scientists for the future of AI and its impact on society?

First, I'll say that the need to understand the basics of AI and data science starts much earlier than higher education!  Children are being exposed to AIs as soon as they click on videos on YouTube or browse photo albums. They need to understand aspects of AI such as how their actions affect future recommendations.

But for computer science students in college, I think a key thing that future engineers need to realize is when to demand input and how to talk across disciplinary boundaries to get at often difficult-to-quantify notions of safety, equity, fairness, etc.  I'm really excited that Harvard has the Embedded EthiCS program to provide some of this education.  Of course, this is in addition to standard good engineering practices like building robust models, validating them, and so forth, which is all a bit harder with AI.
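
To make that last point concrete, here is a minimal sketch of held-out validation, one of the "standard good engineering practices" the interview mentions. This example is illustrative, not from the report or the interview; the dataset and model are stand-ins.

```python
# A minimal sketch of "building robust models, validating them": hold out data
# the model never sees during training and measure performance on it.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in dataset (in practice, this is your real labeled data).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Reserve 20% of the data purely for validation.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Accuracy on unseen data is the honest estimate of generalization.
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```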


Q: Your work focuses on machine learning with applications to healthcare, which is also an area of focus of this report. What is the state of AI in healthcare? 

A lot of AI in healthcare has been on the business end, used for optimizing billing, scheduling surgeries, that sort of thing.  When it comes to AI for better patient care, which is what we usually think about, there are few legal, regulatory, and financial incentives to do so, and many disincentives. Still, there's been slow but steady integration of AI-based tools, often in the form of risk scoring and alert systems.
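
As a rough illustration of what such a risk-scoring alert might look like, consider the sketch below. It is hypothetical: the features, weights, and thresholds are invented for the example and do not come from any clinical guideline or deployed system.

```python
# Hypothetical early-warning risk score of the kind described above.
# All weights and thresholds are invented for illustration only.
def deterioration_risk(heart_rate: float, resp_rate: float, spo2: float) -> float:
    """Toy risk score: weighted deviations from (roughly) normal vital ranges."""
    score = 0.0
    score += max(0.0, heart_rate - 100) * 0.02  # elevated heart rate
    score += max(0.0, resp_rate - 20) * 0.05    # elevated respiratory rate
    score += max(0.0, 92 - spo2) * 0.10         # low oxygen saturation
    return score

ALERT_THRESHOLD = 0.5  # invented cutoff

if deterioration_risk(heart_rate=118, resp_rate=24, spo2=89) > ALERT_THRESHOLD:
    print("ALERT: flag patient chart for clinician review")
```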

In the near future, two applications that I'm really excited about are triage in low-resource settings — having AIs do initial reads of pathology slides if there are not enough pathologists, for example, or provide an initial check of whether a mole looks suspicious — and ways in which AIs can help identify promising treatment options for discussion with a clinician team and patient.

Q: Any predictions for the next report?

I'll be keen to see where currently nascent AI regulation initiatives have gotten to. Accountability is such a difficult question in AI; it's tricky to nurture both innovation and basic protections. Perhaps the most important innovation will be in approaches for AI accountability.


Development’s of Artificial Intelligence, Essay Example


Introduction

From humankind's first imaginative thinking to the present day, the development of Artificial Intelligence (AI) has been progressive and innovative, pushing computer technology toward new heights of human-like intelligence. The development of AI is one of the most controversial issues in the computing industry. The technology industry has spent many years examining AI in the fields of computer science, mathematics, and engineering. Most will agree that AI is the practice of creating systems that can show characteristics of intelligent behavior. AI research is conducted by a range of scientists and technologists with varying perspectives, interests, and motivations. Scientists tend to be interested in understanding the underlying basis of intelligence and cognition, some with an emphasis on unraveling the mysteries of human thought and others examining intelligence more broadly (Computer Science and Telecommunication Board 198).

Computer Technology

The computer technology industry has made many technological advances in AI while contributing to the modern world. It raises the very thought that a clone or non-human entity could evolve to the point of matching the higher intelligence of a human being. In computer technology, intelligence is traditionally thought of as a personal level of knowledge or genius: the ability to gather, retain, and understand extremely large and complex concepts. However, these advances have some concerned about building an artificial intelligence that is smarter than a human being yet lacks a conscience to govern that programmed intelligence. In our society, AI is a technology that already makes significant contributions to our daily lives; for example, banks use AI algorithms to process transactions. These contributions are a large part of the computer revolution that is improving the way we learn.

New Developments

There are many new developments in the field of AI that have support from the Defense Advanced Research Projects Agency (DARPA, known during certain periods as ARPA) and other units of the Department of Defense (DOD). Other funding agencies have included the National Institutes of Health, National Science Foundation, and National Aeronautics and Space Administration (NASA) (Computer Science and Telecommunication Board 199). Dartmouth College, which along with IBM jump-started AI development in the 1950s, continues to contribute new advances in AI. Another advance is LISP, a programming language important to AI that represents knowledge through computation and supports logical reasoning, problem solving, and formula manipulation. The Stanford Research Institute has been working to improve AI for over 60 years. The next decade will bring new AI technology such as snake-like robots, robotic surgery, underwater robots, AI that learns the way children learn, and robots that fix power outages (Science Daily 1).

The ethics of artificial intelligence are contested because artificial intelligence does not have a conscience. A robot can be programmed to release an atom bomb without any thought of humanity or of how many people would be killed. Roboethics concerns the behavior of the humans who design, program, use, and develop artificially intelligent beings (Guerin 3); it asks what it means to give a robot programming that could kill, maim, or immorally dispose of millions.

The second type of AI ethics is known as machine ethics, which is based on the moral behavior of artificial moral agents. The movie "I, Robot," starring Will Smith, addressed machine ethics motivated by next-generation robotics: the scientist in the movie sought to make artificially intelligent robots with feelings, equipped with ethical standards. Motivated by planned next-generation robotic systems, machine ethics typically explores solutions for agents with autonomous capacities intermediate between those of current artificial agents and humans, with designs developed incrementally by and embedded in a society of human agents (Shulman, Jonsson, and Tarleton 96).

The future of AI presents new technology that will change society. AI has not yet reached its maximum potential. However, society will continue to change, just as it did with the evolution of the computer. Society already has iPhones that can talk, track, and search while doing anything a computer can do. In addition, every industry has begun to take advantage of AI technology. As Big Data evolves, so will the machine-learning systems that can process it and apply it toward particular outcomes. We are witnessing the beginning of a revolution that will see a fundamental change in the way businesses run and people work (RocketFuel 1).

In our society, there are boundaries we as human beings cannot cross. A human being cannot travel a billion miles without losing their life; artificial intelligence, however, could reach a distant planet and send critical information back to Earth. The thought of flying cars, futuristic self-sufficient homes, and travel across billions of miles provides opportunities for the world. The technology must be controlled legally to ensure that AI does not fall into the wrong hands. AI should be used for humanitarian improvements for all people, not for destruction. The legal ramifications of AI in the commercial market are concerning: AI can become the intellectual property of whichever business purchases the rights to it, and releasing AI technology for public sale is dangerous because any country can purchase advanced AI technology. Boundaries must be set before we unleash the full potential of AI. There are also disadvantages, such as the possibility of engineers building or programming a machine that outthinks human beings, which may lead to catastrophic results if we rely on it to make decisions. A machine entrusted with a task as important as defending the United States could falsely conclude that a commercial airplane is attacking and launch a nuclear bomb. The contributions of AI will bring futuristic changes that affect our communities, environments, and cultures; however, AI must not be left unchecked.

Works Cited

Computer Science and Telecommunication Board. (1999). Funding a revolution: Government support for computing research. Washington, DC: The National Academies Press.

Guerin, F. (2014). On roboethics and the robotic human. Retrieved October 8, 2014, from http://www.truth-out.org/news/item/25281-on-roboethics-and-the-robotic-human

RocketFuel. (2014). Artificial intelligence is changing the world and humankind must adapt. Retrieved October 4, 2014, from http://rocketfuel.com/blog/artificial-intelligence-is-changing-the-world-and-humankind-must-adapt

Shulman, C., Jonsson, H., & Tarleton, N. (2009). Machine ethics and superintelligence. Retrieved October 8, 2014, from http://ia-cap.org/ap-cap09/proceedings.pdf


Artificial intelligence is transforming our world — it is on all of us to make sure that it goes well

How AI gets built is currently decided by a small group of technologists. As this technology is transforming our lives, it should be in all of our interest to become informed and engaged.

Why should you care about the development of artificial intelligence?

Think about what the alternative would look like. If you and the wider public do not get informed and engaged, then we leave it to a few entrepreneurs and engineers to decide how this technology will transform our world.

That is the status quo. The small number of people at a few tech firms who work directly on artificial intelligence (AI) do understand how extraordinarily powerful this technology is becoming. If the rest of society does not become engaged, then it will be this small elite who decides how this technology will change our lives.

To change this status quo, I want to answer three questions in this article: Why is it hard to take the prospect of a world transformed by AI seriously? How can we imagine such a world? And what is at stake as this technology becomes more powerful?

Why is it hard to take the prospect of a world transformed by artificial intelligence seriously?

In some way, it should be obvious how technology can fundamentally transform the world. We just have to look at how much the world has already changed. If you could invite a family of hunter-gatherers from 20,000 years ago on your next flight, they would be pretty surprised. Technology has changed our world already, so we should expect that it can happen again.

But while we have seen the world transform before, we have seen these transformations play out over the course of generations. What is different now is how rapid these technological changes have become. In the past, the technologies that our ancestors used in their childhood were still central to their lives in their old age. That has not been the case for recent generations. Instead, it has become common that technologies unimaginable in one's youth become ordinary in later life.

This is the first reason we might not take the prospect seriously: it is easy to underestimate the speed at which technology can change the world.

The second reason why it is difficult to take the possibility of transformative AI – potentially even AI as intelligent as humans – seriously is that it is an idea that we first heard in the cinema. It is not surprising that for many of us, the first reaction to a scenario in which machines have human-like capabilities is the same as if you had asked us to take seriously a future in which vampires, werewolves, or zombies roam the planet. 1

But it is plausible that such AI is both the stuff of sci-fi fantasy and a central invention that could arrive in our, or our children’s, lifetimes.

The third reason why it is difficult to take this prospect seriously is the failure to see that powerful AI could lead to very large changes. This is also understandable. It is difficult to form an idea of a future that is very different from our own time. There are two concepts that I find helpful in imagining a very different future with artificial intelligence. Let’s look at both of them.

How to develop an idea of what the future of artificial intelligence might look like?

When thinking about the future of artificial intelligence, I find it helpful to consider two different concepts in particular: human-level AI, and transformative AI. 2 The first concept highlights the AI’s capabilities and anchors them to a familiar benchmark, while transformative AI emphasizes the impact that this technology would have on the world.

From where we are today, much of this may sound like science fiction. It is therefore worth keeping in mind that the majority of surveyed AI experts believe there is a real chance that human-level artificial intelligence will be developed within the next decades, and some believe that it will exist much sooner.

The advantages and disadvantages of comparing machine and human intelligence

One way to think about human-level artificial intelligence is to contrast it with the current state of AI technology. While today’s AI systems often have capabilities similar to a particular, limited part of the human mind, a human-level AI would be a machine that is capable of carrying out the same range of intellectual tasks that we humans are capable of. 3 It is a machine that would be “able to learn to do anything that a human can do,” as Norvig and Russell put it in their textbook on AI. 4

Taken together, the range of abilities that characterize intelligence gives humans the ability to solve problems and achieve a wide variety of goals. A human-level AI would therefore be a system that could solve all those problems that we humans can solve, and do the tasks that humans do today. Such a machine, or collective of machines, would be able to do the work of a translator, an accountant, an illustrator, a teacher, a therapist, a truck driver, or the work of a trader on the world’s financial markets. Like us, it would also be able to do research and science, and to develop new technologies based on that.

The concept of human-level AI has some clear advantages. Using the familiarity of our own intelligence as a reference provides us with some clear guidance on how to imagine the capabilities of this technology.

However, it also has clear disadvantages. Anchoring the imagination of future AI systems to the familiar reality of human intelligence carries the risk that it obscures the very real differences between them.

Some of these differences are obvious. For example, AI systems will have the immense memory of computer systems, against which our own capacity to store information pales. Another obvious difference is the speed at which a machine can absorb and process information. But information storage and processing speed are not the only differences. The number of domains in which machines already outperform humans is steadily increasing: in chess, after matching the level of the best human players in the late 90s, AI systems reached superhuman levels more than a decade ago. In other games like Go or complex strategy games, this has happened more recently. 5

These differences mean that an AI that is at least as good as humans in every domain would overall be much more powerful than the human mind. Even the first “human-level AI” would therefore be quite superhuman in many ways. 6

Human intelligence is also a bad metaphor for machine intelligence in other ways. The way we think is often very different from machines, and as a consequence the output of thinking machines can be very alien to us.

Most perplexing and most concerning are the strange and unexpected ways in which machine intelligence can fail. The AI-generated image of the horse below provides an example: on the one hand, AIs can do what no human can do – produce an image of anything, in any style (here photorealistic), in mere seconds – but on the other hand, they can fail in ways that no human would. 7 No human would make the mistake of drawing a horse with five legs. 8

Imagining a powerful future AI as just another human would therefore likely be a mistake. The differences might be so large that it will be a misnomer to call such systems “human-level.”

AI-generated image of a horse: a brown horse running in a grassy field, appearing to have five legs. 9

Transformative artificial intelligence is defined by the impact this technology would have on the world

In contrast, the concept of transformative AI is not based on a comparison with human intelligence. This has the advantage of sidestepping the problems that the comparisons with our own mind bring. But it has the disadvantage that it is harder to imagine what such a system would look like and be capable of. It requires more from us. It requires us to imagine a world with intelligent actors that are potentially very different from ourselves.

Transformative AI is not defined by any specific capabilities, but by the real-world impact that the AI would have. To qualify as transformative, researchers think of it as AI that is “powerful enough to bring us into a new, qualitatively different future.” 10

In humanity’s history, there have been two cases of such major transformations, the agricultural and the industrial revolutions.

Transformative AI becoming a reality would be an event on that scale. Like the arrival of agriculture 10,000 years ago, or the transition from hand- to machine-manufacturing, it would be an event that would change the world for billions of people around the globe and for the entire trajectory of humanity’s future .

Technologies that fundamentally change how a wide range of goods or services are produced are called ‘general-purpose technologies’. The two previous transformative events were caused by the discovery of two particularly significant general-purpose technologies: the change in food production as humanity transitioned from hunting and gathering to farming, and the rise of machine manufacturing in the industrial revolution. Based on the evidence and arguments presented in this series on AI development, I believe it is plausible that powerful AI could represent the introduction of a similarly significant general-purpose technology.

Timeline of the three transformative events in world history

A future of human-level or transformative AI?

The two concepts are closely related, but they are not the same. The creation of a human-level AI would certainly have a transformative impact on our world. If the work of most humans could be carried out by an AI, the lives of millions of people would change. 11

The opposite, however, is not true: we might see transformative AI without developing human-level AI. Since the human mind is in many ways a poor metaphor for the intelligence of machines, we might plausibly develop transformative AI before we develop human-level AI. Depending on how this goes, this might mean that we will never see any machine intelligence for which human intelligence is a helpful comparison.

When and if AI systems might reach either of these levels is of course difficult to predict. In my companion article on this question, I give an overview of what researchers in this field currently believe. Many AI experts believe there is a real chance that such systems will be developed within the next decades, and some believe that they will exist much sooner.

What is at stake as artificial intelligence becomes more powerful?

All major technological innovations lead to a range of positive and negative consequences. For AI, the spectrum of possible outcomes – from the most negative to the most positive – is extraordinarily wide.

That the use of AI technology can cause harm is clear, because it is already happening.

AI systems can cause harm when people use them maliciously, for example in politically-motivated disinformation campaigns or to enable mass surveillance. 12

But AI systems can also cause unintended harm, when they act differently than intended or fail. For example, in the Netherlands the authorities used an AI system which falsely claimed that an estimated 26,000 parents made fraudulent claims for child care benefits. The false allegations led to hardship for many poor families, and also resulted in the resignation of the Dutch government in 2021. 13

As AI becomes more powerful, the possible negative impacts could become much larger. Many of these risks have rightfully received public attention: more powerful AI could lead to mass labor displacement, or extreme concentrations of power and wealth. In the hands of autocrats, it could empower totalitarianism through its suitability for mass surveillance and control.

The so-called alignment problem of AI is another extreme risk. This is the concern that nobody would be able to control a powerful AI system, even if the AI takes actions that harm us humans, or humanity as a whole. This risk is unfortunately receiving little attention from the wider public, but it is seen as an extremely large risk by many leading AI researchers. 14

How could an AI possibly escape human control and end up harming humans?

The risk is not that an AI becomes self-aware, develops bad intentions, and “chooses” to do this. The risk is that we try to instruct the AI to pursue some specific goal – even a very worthwhile one – and in the pursuit of that goal it ends up harming humans. It is about unintended consequences. The AI does what we told it to do, but not what we wanted it to do.

Can’t we just tell the AI to not do those things? It is definitely possible to build an AI that avoids any particular problem we foresee, but it is hard to foresee all the possible harmful unintended consequences. The alignment problem arises because of “the impossibility of defining true human purposes correctly and completely,” as AI researcher Stuart Russell puts it. 15
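
The gap between "what we told it to do" and "what we wanted it to do" can be shown with a deliberately tiny toy example. The sketch below is mine, not the article's, and the items and numbers are invented: an optimizer given clicks as a proxy objective picks the option its designers least wanted.

```python
# Toy illustration of a misspecified objective: we ask a system to maximize
# clicks, intending to maximize user satisfaction. Numbers are invented.
items = {
    # item: (click_probability, user_satisfaction)
    "thoughtful_longread": (0.10, 0.9),
    "useful_howto":        (0.30, 0.8),
    "outrage_clickbait":   (0.90, 0.1),
}

def proxy_objective(item):   # what we told the system to optimize
    return items[item][0]

def true_objective(item):    # what we actually wanted
    return items[item][1]

chosen = max(items, key=proxy_objective)
print(f"optimizer picks: {chosen}")                         # outrage_clickbait
print(f"satisfaction delivered: {true_objective(chosen)}")  # 0.1, far from what we wanted
```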

Can’t we then just switch off the AI? This might also not be possible. That is because a powerful AI would know two things: it faces a risk that humans could turn it off, and it can’t achieve its goals once it has been turned off. As a consequence, the AI will pursue a very fundamental goal of ensuring that it won’t be switched off. This is why, once we realize that an extremely intelligent AI is causing unintended harm in the pursuit of some specific goal, it might not be possible to turn it off or change what the system does. 16

This risk – that humanity might not be able to stay in control once AI becomes very powerful, and that this might lead to an extreme catastrophe – has been recognized right from the early days of AI research more than 70 years ago. 17 The very rapid development of AI in recent years has made a solution to this problem much more urgent.

I have tried to summarize some of the risks of AI, but a short article is not enough space to address all possible questions. Especially on the very worst risks of AI systems, and what we can do now to reduce them, I recommend reading the book The Alignment Problem by Brian Christian and Benjamin Hilton’s article ‘Preventing an AI-related catastrophe’ .

If we manage to avoid these risks, transformative AI could also lead to very positive consequences. Advances in science and technology were crucial to the many positive developments in humanity’s history. If artificial ingenuity can augment our own, it could help us make progress on the many large problems we face: from cleaner energy, to the replacement of unpleasant work, to much better healthcare.

This extremely large contrast between the possible positives and negatives makes clear that the stakes are unusually high with this technology. Reducing the negative risks and solving the alignment problem could mean the difference between a healthy, flourishing, and wealthy future for humanity – and the destruction of the same.

How can we make sure that the development of AI goes well?

Making sure that the development of artificial intelligence goes well is not just one of the most crucial questions of our time, but likely one of the most crucial questions in human history. This needs public resources – public funding, public attention, and public engagement.

Currently, almost all resources that are dedicated to AI aim to speed up the development of this technology. Efforts that aim to increase the safety of AI systems, on the other hand, do not receive the resources they need. Researcher Toby Ord estimated that in 2020 between $10 million and $50 million was spent on work to address the alignment problem. 18 Corporate AI investment in the same year was more than 2,000 times larger, totaling $153 billion.
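
As a rough check, the ratio implied by these figures can be computed directly. This is only a back-of-the-envelope sketch using the numbers quoted above:

```python
# Back-of-the-envelope check on the investment gap described above.
alignment_low, alignment_high = 10e6, 50e6  # Ord's 2020 estimate: $10-50 million
corporate_investment = 153e9                # corporate AI investment in 2020

print(f"{corporate_investment / alignment_high:,.0f}x at the high estimate")  # 3,060x
print(f"{corporate_investment / alignment_low:,.0f}x at the low estimate")    # 15,300x
```

Both ratios exceed 2,000, consistent with the "more than 2,000 times larger" comparison.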

This is not only the case for the AI alignment problem. The work on the entire range of negative social consequences from AI is under-resourced compared to the large investments to increase the power and use of AI systems.

It is frustrating and concerning for society as a whole that AI safety work is extremely neglected and that little public funding is dedicated to this crucial field of research. On the other hand, for each individual person this neglect means that they have a good chance to actually make a positive difference if they dedicate themselves to this problem now. And while the field of AI safety is small, it does provide good resources on what you can do concretely if you want to work on this problem.

I hope that more people dedicate their individual careers to this cause, but it needs more than individual efforts. A technology that is transforming our society needs to be a central interest of all of us. As a society we have to think more about the societal impact of AI, become knowledgeable about the technology, and understand what is at stake.

When our children look back at today, I imagine that they will find it difficult to understand how little attention and resources we dedicated to the development of safe AI. I hope that this changes in the coming years, and that we begin to dedicate more resources to making sure that powerful AI gets developed in a way that benefits us and the next generations.

If we fail to develop this broad-based understanding, then it will remain the small elite that finances and builds this technology that determines how one of the – or plausibly the – most powerful technologies in human history will transform our world.

If we leave the development of artificial intelligence entirely to private companies, then we are also leaving it up to these private companies to decide what our future — the future of humanity — will be.

With our work at Our World in Data we want to do our small part to enable a better informed public conversation on AI and the future we want to live in. You can find these resources on OurWorldinData.org/artificial-intelligence

Acknowledgements: I would like to thank my colleagues Daniel Bachler, Charlie Giattino, and Edouard Mathieu for their helpful comments to drafts of this essay.

Endnotes

1. This problem becomes even larger when we try to imagine how a future with a human-level AI might play out. Any particular scenario will not only involve the idea that this powerful AI exists, but a whole range of additional assumptions about the future context in which this happens. It is therefore hard to communicate a scenario of a world with human-level AI that does not sound contrived, bizarre, or even silly.

2. Both of these concepts are widely used in the scientific literature on artificial intelligence. For example, questions about the timelines for the development of future AI are often framed using these terms. See my article on this topic.

3. The fact that humans are capable of a range of intellectual tasks means that you arrive at different definitions of intelligence depending on which aspect within that range you focus on (the Wikipedia entry on intelligence, for example, lists a number of definitions from various researchers and different disciplines). As a consequence, there are also various definitions of ‘human-level AI’. There are also several closely related terms: Artificial General Intelligence, High-Level Machine Intelligence, Strong AI, and Full AI are sometimes used synonymously, and sometimes defined in similar yet different ways. In specific discussions it is necessary to define this concept more narrowly; for example, in studies on AI timelines researchers offer more precise definitions of what human-level AI refers to in their particular study.

4. Peter Norvig and Stuart Russell (2021) – Artificial Intelligence: A Modern Approach. Fourth edition. Pearson.

5. The AI system AlphaGo, and its various successors, won against Go masters. The AI system Pluribus beat humans at no-limit Texas hold 'em poker. The AI system Cicero can strategize and use human language to win the strategy game Diplomacy. See: Meta Fundamental AI Research Diplomacy Team (FAIR), Anton Bakhtin, Noam Brown, Emily Dinan, Gabriele Farina, Colin Flaherty, Daniel Fried, et al. (2022) – ‘Human-Level Play in the Game of Diplomacy by Combining Language Models with Strategic Reasoning’. Science (22 November 2022): eade9097. https://doi.org/10.1126/science.ade9097.

6. This also poses a problem when we evaluate how the intelligence of a machine compares with the intelligence of humans. If intelligence were a general ability, a single capacity, then we could easily compare and evaluate it, but the fact that it is a range of skills makes it much more difficult to compare across machine and human intelligence. Tests for AI systems therefore comprise a wide range of tasks. See, for example, Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt (2020) – Measuring Massive Multitask Language Understanding, or the definition of what would qualify as artificial general intelligence in this Metaculus prediction.

7. An overview of how AI systems can fail can be found in Charles Choi – 7 Revealing Ways AIs Fail. It is also worth reading through the AIAAIC Repository, which “details recent incidents and controversies driven by or relating to AI, algorithms, and automation.”

8. I have taken this example from AI researcher François Chollet, who published it here.

9. Via François Chollet, who published it here. Based on Chollet’s comments it seems that this image was created by the AI system ‘Stable Diffusion’.

10. This quote is from Holden Karnofsky (2021) – AI Timelines: Where the Arguments, and the "Experts," Stand. For Holden Karnofsky’s earlier thinking on this conceptualization of AI, see his 2016 article ‘Some Background on Our Views Regarding Advanced Artificial Intelligence’. Ajeya Cotra, whose research on AI timelines I discuss in other articles of this series, attempts to give a quantitative definition of what would qualify as transformative AI: in her widely cited report on AI timelines she defines it as a change in software technology that brings the growth rate of gross world product "to 20%-30% per year". Several other researchers define TAI in similar terms.

11. Human-level AI is typically defined as a software system that can carry out at least 90% or 99% of all economically relevant tasks that humans carry out. A lower-bar definition would be an AI system that can carry out all those tasks that can currently be done by another human who is working remotely on a computer.

12. On the use of AI in politically-motivated disinformation campaigns see, for example, John Villasenor (November 2020) – How to Deal with AI-Enabled Disinformation. More generally on this topic, see Brundage and Avin et al. (2018) – The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation, published at maliciousaireport.com. A starting point for literature and reporting on mass surveillance by governments is the relevant Wikipedia entry.

13. See, for example, the Wikipedia entry on the ‘Dutch childcare benefits scandal’ and Melissa Heikkilä (2022) – ‘Dutch scandal serves as a warning for Europe over risks of using algorithms’, in Politico. The technology can also reinforce discrimination in terms of race and gender. See Brian Christian’s book The Alignment Problem and the reports of the AI Now Institute.

14. Overviews are provided in Stuart Russell (2019) – Human Compatible (especially chapter 5) and in Brian Christian’s 2020 book The Alignment Problem. Christian presents the thinking of many leading AI researchers from the earliest days up to now and gives an excellent overview of this problem. It is also seen as a large risk by some of the leading private firms that work towards powerful AI – see OpenAI's article "Our approach to alignment research" from August 2022.

15. Stuart Russell (2019) – Human Compatible.

16. A question that follows from this is: why build such a powerful AI in the first place? The incentives are very high. As I emphasize below, this innovation has the potential to lead to very positive developments. In addition to the large social benefits there are also large incentives for those who develop it – the governments that can use it for their goals, and the individuals who can use it to become more powerful and wealthy. Additionally, it is of scientific interest and might help us to understand our own mind and intelligence better. And lastly, even if we wanted to stop building powerful AIs, it would likely be very hard to actually achieve. It is very hard to coordinate across the whole world and agree to stop building more advanced AI – countries around the world would have to agree and then find ways to actually implement it.

17. In 1950 the computer science pioneer Alan Turing put it like this: “If a machine can think, it might think more intelligently than we do, and then where should we be? … [T]his new danger is much closer. If it comes at all it will almost certainly be within the next millennium. It is remote but not astronomically remote, and is certainly something which can give us anxiety. It is customary, in a talk or article on this subject, to offer a grain of comfort, in the form of a statement that some particularly human characteristic could never be imitated by a machine. … I cannot offer any such comfort, for I believe that no such bounds can be set.” Alan M. Turing (1950) – Computing Machinery and Intelligence, Mind, Volume LIX, Issue 236, October 1950, pages 433–460. Norbert Wiener is another pioneer who saw the alignment problem very early. One way he put it was: “If we use, to achieve our purposes, a mechanical agency with whose operation we cannot interfere effectively … we had better be quite sure that the purpose put into the machine is the purpose which we really desire.” Quoted from Norbert Wiener (1960) – Some Moral and Technical Consequences of Automation: As Machines Learn They May Develop Unforeseen Strategies at Rates That Baffle Their Programmers, Science. In 1950 – the same year in which Turing published the cited article – Wiener published his book The Human Use of Human Beings, whose front-cover blurb reads: “The ‘mechanical brain’ and similar machines can destroy human values or enable us to realize them as never before.”

18. Toby Ord – The Precipice. He makes this projection in footnote 55 of chapter 2. It is based on the 2017 estimate by Farquhar.


Funding a Revolution: Government Support for Computing Research (National Academies Press, 1999)

9 Developments in Artificial Intelligence

Artificial intelligence (AI) has been one of the most controversial domains of inquiry in computer science since it was first proposed in the 1950s. Defined as the part of computer science concerned with designing systems that exhibit the characteristics associated with human intelligence—understanding language, learning, reasoning, solving problems, and so on (Barr and Feigenbaum, 1981)—the field has attracted researchers because of its ambitious goals and enormous underlying intellectual challenges. The field has been controversial because of its social, ethical, and philosophical implications. Such controversy has affected the funding environment for AI and the objectives of many research programs.

AI research is conducted by a range of scientists and technologists with varying perspectives, interests, and motivations. Scientists tend to be interested in understanding the underlying basis of intelligence and cognition, some with an emphasis on unraveling the mysteries of human thought and others examining intelligence more broadly. Engineering-oriented researchers, by contrast, are interested in building systems that behave intelligently. Some attempt to build systems using techniques analogous to those used by humans, whereas others apply a range of techniques adopted from fields such as information theory, electrical engineering, statistics, and pattern recognition. Those in the latter category often do not necessarily consider themselves AI researchers, but rather fall into a broader category of researchers interested in machine intelligence.

The concept of AI originated in the private sector, but the growth of the field, both intellectually and in the size of the research community, has depended largely on public investments. Public monies have been invested in a range of AI programs, from fundamental, long-term research into cognition to shorter-term efforts to develop operational systems. Most of the federal support has come from the Defense Advanced Research Projects Agency (DARPA, known during certain periods as ARPA) and other units of the Department of Defense (DOD). Other funding agencies have included the National Institutes of Health, National Science Foundation, and National Aeronautics and Space Administration (NASA), which have pursued AI applications of particular relevance to their missions—health care, scientific research, and space exploration.

This chapter highlights key trends in the development of the field of AI and the important role of federal investments. The sections of this chapter, presented in roughly chronological order, cover the launching of the AI field, the government's initial participation, the pivotal role played by DARPA, the success of speech recognition research, the shift from basic to applied research, and AI in the 1990s. The final section summarizes the lessons to be learned from history. This case study is based largely on published accounts, the scientific and technical literature, reports by the major AI research centers, and interviews conducted with several leaders of AI research centers. (Little information was drawn from the records of the participants in the field, funding agencies, editors and publishers, and other primary sources most valued by professional historians.) 1

The Private Sector Launches the Field

The origins of AI research are intimately linked with two landmark papers on chess playing by machine. 2 They were written in 1950 by Claude E. Shannon, a mathematician at Bell Laboratories who is widely acknowledged as a principal creator of information theory. In the late 1930s, while still a graduate student, he developed a method for symbolic analysis of switching systems and networks (Shannon, 1938), which provided scientists and engineers with much-improved analytical and conceptual tools. After working at Bell Labs for half a decade, Shannon published a paper on information theory (Shannon, 1948). Shortly thereafter, he published two articles outlining the construction or programming of a computer for playing chess (Shannon, 1950a,b).

Shannon's work inspired a young mathematician, John McCarthy, who, while a research instructor in mathematics at Princeton University, joined Shannon in 1952 in organizing a conference on automata studies, largely to promote symbolic modeling and work on the theory of machine intelligence. 3 A year later, Shannon arranged for McCarthy and another future pioneer in AI, Marvin Minsky, then a graduate student in mathematics at Princeton and a participant in the 1952 conference, to work with him at Bell Laboratories during 1953. 4

By 1955, McCarthy believed that the theory of machine intelligence was sufficiently advanced, and that related work involved such a critical mass of researchers, that rapid progress could be promoted by a concentrated summer seminar at Dartmouth College, where he was then an assistant professor of mathematics. He approached the Rockefeller Foundation's Warren Weaver, also a mathematician and a promoter of cutting-edge science, as well as Shannon's collaborator on information theory. Weaver and his colleague Robert S. Morison, director for Biological and Medical Research, were initially skeptical (Weaver, 1955). Morison pushed McCarthy and Shannon to widen the range of participants and made other suggestions. McCarthy and Shannon responded with a widened proposal that heeded much of Morison's advice. They brought in Minsky and a well-known industrial researcher, Nathaniel Rochester 5 of IBM, as co-principal investigators for the proposal, submitted in September 1955. 6

In the proposal, the four researchers declared that the summer study was "to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it." They sought to bring a number of U.S. scholars to Dartmouth to create a research agenda for AI and begin actual work on it. In spite of Morison's skepticism, the Rockefeller Foundation agreed to fund this summer project with a grant of $7,500 (Rhind, 1955), primarily to cover summer salaries and expenses of the academic participants. Researchers from industry would be compensated by their respective firms.

Although most accounts of AI history focus on McCarthy's entrepreneurship, the role of Shannon—an intellectual leader from industry—is also critical. Without his participation, McCarthy would not have commanded the attention he received from the Rockefeller Foundation. Shannon also had considerable influence on Marvin Minsky. The title of Minsky's 1954 doctoral dissertation was "Neural Nets and the Brain Model Problem."

The role of IBM is similarly important. Nathaniel Rochester was a strong supporter of the AI concept, and he and his IBM colleagues who attended the 1956 Dartmouth workshop contributed to the early research in the field. After the workshop IBM welcomed McCarthy to its research laboratories, in large part because of IBM's previous work in AI and because "IBM looked like a good bet to pursue artificial intelligence research vigorously" in the future. 7 Rochester was a visiting professor at the Massachusetts Institute of Technology (MIT) during 1958-1959, and he unquestionably helped McCarthy with the development of LISP, an important list-processing language (see Box 9.1). 8 Rochester also apparently lent his support to the creation in 1958 of the MIT Artificial Intelligence Project (Rochester and Gelernter, 1958). 9 Yet, in spite of the early activity of Rochester and other IBM researchers, the corporation's interest in AI cooled. Although work continued on computer-based checkers and chess, an internal report prepared about 1960 took a strong position against broad support for AI.
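
Box 9.1 is not reproduced here, but the core idea behind LISP, programs that operate on symbolic expressions represented as nested lists, can be suggested with a short sketch. The example below is written in Python for accessibility and is only loosely in the spirit of early list processing, not McCarthy's LISP itself:

```python
# A LISP-flavored idea in Python: evaluate prefix arithmetic expressions
# represented as nested lists, e.g. (+ 1 (* 2 3)) becomes ["+", 1, ["*", 2, 3]].
def evaluate(expr):
    if isinstance(expr, (int, float)):  # atoms evaluate to themselves
        return expr
    op, *args = expr                    # the head of the list is the operator
    values = [evaluate(a) for a in args]
    if op == "+":
        return sum(values)
    if op == "*":
        result = 1
        for v in values:
            result *= v
        return result
    raise ValueError(f"unknown operator: {op}")

print(evaluate(["+", 1, ["*", 2, 3]]))  # 7
```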

Thus, the activities surrounding the Dartmouth workshop were, at the outset, linked with the cutting-edge research at a leading private research laboratory (AT&T Bell Laboratories) and a rapidly emerging industrial giant (IBM). Researchers at Bell Laboratories and IBM nurtured the earliest work in AI and gave young academic researchers like McCarthy and Minsky credibility that might otherwise have been lacking. Moreover, the Dartmouth summer research project in AI was funded by private philanthropy and by industry, not by government. The same is true for much of the research that led up to the summer project.

The Government Steps in

The federal government's initial involvement in AI research was manifested in the work of Herbert Simon and Allen Newell, who attended the 1956 Dartmouth workshop to report on "complex information processing." Trained in political science and economics at the University of Chicago, Simon had moved to Carnegie Institute of Technology in 1946 and was instrumental in the founding and early research of the Graduate School of Industrial Administration (GSIA). Funded heavily by the Ford Foundation, the Office of Naval Research (ONR), and the Air Force, GSIA was the pioneer in bringing quantitative behavioral social sciences research (including operations research) into graduate management education. 10 Because of his innovative work in human decision making, Simon became, in March 1951, a consultant to the RAND Corporation, the pioneering think tank established by the Air Force shortly after World War II. 11

At RAND, where he spent several summers carrying out collaborative research, Simon encountered Newell, a mathematician who helped to conceive and develop the Systems Research Laboratory, which was spun out of RAND as the System Development Corporation in 1957. In 1955, Simon and Newell began a long collaboration on the simulation of human thought, which by the summer of 1956 had resulted in their fundamental work (with RAND computer programmer J.C. Shaw) on the Logic Theorist, a computer program capable of proving theorems found in the Principia Mathematica of Bertrand Russell and Alfred North Whitehead (Newell and Simon, 1956). 12

This program is regarded by many as the first successful AI program, and the language it used, IPL2, is recognized as the first significant list-processing language. As programmed by Simon, Newell, and Shaw, a computer simulated human intelligence, solving a problem in logic in much the same way as would a skilled logician. In this sense, the machine demonstrated artificial intelligence. The project was funded almost entirely by the Air Force through Project RAND, and much of the computer programming was done at RAND on an Air Force-funded computer (the Johnniac, named after RAND consultant John von Neumann, the creator of the basic architecture for digital electronic computers). 13

Newell's collaboration with Simon took him to Carnegie Tech, where, in 1957, he completed the institution's first doctoral dissertation in AI, "Information Processing: A New Technique for the Behavioral Sciences." Its thrust was clearly driven by the agenda laid out by the architects of GSIA. As Newell later stressed, his work with Simon (and that of Simon's several other AI students at GSIA) reflected the larger agenda of GSIA, even though most of this work was funded by the Air Force and ONR until the early 1960s. All of this work concentrated on the formal modeling of decision making and problem solving.

Simon and Newell developed another well-known AI program as a sequel to Logic Theorist—the General Problem Solver (GPS), first run in 1957 and developed further in subsequent years. Their work on GPS, like that on Logic Theorist, was characterized by its use of heuristics (i.e., efficient but fallible rules of thumb) as the means to simulate human cognitive processes (Newell et al., 1959). GPS was capable of solving an array of problems that challenge human intelligence (an important accomplishment in and of itself), but, most significantly, it solved these problems by simulating the way a human being would solve them. These overall research efforts at GSIA, including the doctoral research of Simon's students—all funded principally by Air Force and ONR money—remained modest in scale compared to the efforts mounted at Carnegie Tech after 1962. 14
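To make this heuristic style concrete, here is a minimal sketch of means-ends analysis, the strategy GPS systematized, written in Python. The operators, preconditions, and goal are invented for illustration; GPS itself worked over a much richer table of differences between current and goal states.

```python
# A toy illustration of means-ends analysis, the heuristic strategy GPS
# systematized. Operators, preconditions, and goals here are invented;
# GPS itself used a much richer table of differences and operators.

OPERATORS = [
    # (name, preconditions, effects added, effects deleted)
    ("drive-to-shop", {"car works"},  {"at shop"},    set()),
    ("fix-car",       {"have parts"}, {"car works"},  set()),
    ("buy-parts",     {"have money"}, {"have parts"}, {"have money"}),
]

def achieve(goal, state, depth=0):
    """Make `goal` true in `state` by recursively achieving the
    preconditions of whichever operator produces it."""
    if goal in state:
        return state
    for name, pre, add, delete in OPERATORS:
        if goal in add:
            new_state = set(state)
            for p in pre:                   # reduce each difference in turn
                new_state = achieve(p, new_state, depth + 1)
                if new_state is None:
                    break                   # this operator's preconditions fail
            else:
                print("  " * depth + "apply " + name)
                return (new_state - delete) | add
    return None                             # no operator reduces the difference

print(achieve("at shop", {"have money"}))
```

Each recursive call reduces one difference between the current state and the goal, which mirrors the control structure Newell and Simon described.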

Also modest were the efforts at MIT, where McCarthy and Minsky established the Artificial Intelligence Project in September 1957. This effort was funded principally through a word-of-mouth agreement with Jerome Wiesner, then director of MIT's military-funded Research Laboratory of Electronics (RLE). In exchange for "a room, two programmers, a secretary and a keypunch [machine]," the two assistant professors of mathematics agreed, according to McCarthy, to "undertake the supervision of some of the six mathematics graduate students that RLE had undertaken to support." 15

The research efforts at Carnegie Tech (which became Carnegie Mellon University [CMU] in 1967), RAND, and MIT, although limited, yielded outstanding results in a short time. Simon and Newell showed that computers could demonstrate human-like behavior in certain well-defined tasks. 16 Substantial progress was also made by McCarthy, with his pioneering development of LISP, and Minsky, who formalized heuristic processes and other means of reasoning, including pattern recognition.

Previously, computers had been used principally to crunch numbers, and the tools for such tasks were primitive. The AI researchers found ways to represent logical formulas, carry out proofs, construct plans, and manipulate such objects. Buoyed by their successes, researchers at both institutions projected bold visions—which, as the research was communicated to the public, became magnified into excessive claims—about the future of the new field of AI and what computers might ultimately achieve. 17
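Because list processing is central to this story, a brief illustration may help. The sketch below transliterates LISP's cons/car/cdr list primitives into Python; the names follow LISP's, but the code is a toy for illustration, not how LISP was actually implemented.

```python
# An illustrative transliteration into Python of LISP's core list
# primitives (cons, car, cdr). A toy, not LISP's actual implementation.

def cons(head, tail):
    """Build one list cell."""
    return (head, tail)

def car(cell):
    """First element of a list."""
    return cell[0]

def cdr(cell):
    """Rest of the list after the first element."""
    return cell[1]

# The list (1 2 3) as nested cells, terminated by None.
lst = cons(1, cons(2, cons(3, None)))

def length(cell):
    """Walk the cells recursively, as LISP programs walk lists."""
    return 0 if cell is None else 1 + length(cdr(cell))

print(car(lst), length(lst))  # 1 3
```

Symbolic structures built this way, rather than fixed-size numeric arrays, were what let early AI programs represent formulas, proofs, and plans.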

DARPA's Pivotal Role

The establishment in 1962 of ARPA's Information Processing Techniques Office (IPTO) radically changed the scale of research in AI, propelling it from a collection of small projects into a large-scale, high-profile domain. From the 1960s through the 1990s, DARPA provided the bulk of the nation's support for AI research and thus helped to legitimize AI as an important field of inquiry and influence the scope of related research. Over time, the nature of DARPA's support changed radically—from an emphasis on fundamental research at a limited number of centers of excellence to more broad-based support for applied research tied to military applications—both reflecting and motivating changes in the field of AI itself.

The early academic centers were MIT and Carnegie Tech. Following John McCarthy's move to Stanford in 1963 to create the Stanford Artificial Intelligence Laboratory (SAIL), IPTO worked a similar transformation of AI research at Stanford by making it the third center of excellence in AI. Indeed, the IPTO increased Stanford's allocation in 1965, allowing it to upgrade its computing capabilities and to launch five major team projects in AI research. Commenting in 1984 about how AI-related research at Carnegie Tech migrated out of GSIA into what became an autonomous department (and later a college) of CMU, Newell (1984) captured the transformation wrought by IPTO:

. . . the DARPA support of AI and computer science is a remarkable story of the nurturing of a new scientific field. Not only with MIT, Stanford and CMU, which are now seen as the main DARPA-supported university computer-science research environments, but with other universities as well . . . DARPA began to build excellence in information processing in whatever fashion we thought best. . . . The DARPA effort, or anything similar, had not been in our wildest imaginings. . . .

Another center of excellence—the Stanford Research Institute's (SRI's) Artificial Intelligence Center—emerged a bit later (in 1966), with Charles Rosen at the helm. It focused on developing "automatons capable of gathering, processing, and transmitting information in a hostile environment" (Nilsson, 1984). Soon, SRI committed itself to the development of an AI-driven robot, Shakey, as a means to achieve its objective. Shakey's development necessitated extensive basic research in several domains, including planning, natural-language processing, and machine vision. SRI's achievements in these areas (e.g., the STRIPS planning system and work in machine vision) have endured, but changes in the funder's expectations for this research exposed SRI's AI program to substantial criticism in spite of these real achievements.

Under J.C.R. Licklider, Ivan Sutherland, and Robert Taylor, DARPA continued to invest in AI research at CMU, MIT, Stanford, and SRI and, to a lesser extent, other institutions. 18 Licklider (1964) asserted that AI was central to DARPA's mission because it was a key to the development of advanced command-and-control systems. Artificial intelligence was a broad category for Licklider (and his immediate successors), who "supported work in problem solving, natural language processing, pattern recognition, heuristic programming, automatic theorem proving, graphics, and intelligent automata. Various problems relating to human-machine communication—tablets, graphic systems, hand-eye coordination—were all pursued with IPTO support" (Norberg and O'Neill, 1996).

These categories were sufficiently broad that researchers like McCarthy, Minsky, and Newell could view their institutions' research, during the first 10 to 15 years of DARPA's AI funding, as essentially unfettered by immediate applications. Moreover, as work in one problem domain spilled over into others easily and naturally, researchers could attack problems from multiple perspectives. Thus, AI was ideally suited to graduate education, and enrollments at each of the AI centers grew rapidly during the first decade of DARPA funding.

DARPA's early support launched a golden age of AI research and rapidly advanced the emergence of a formal discipline. Much of DARPA's funding for AI was contained in larger program initiatives. Licklider considered AI a part of his general charter of Computers, Command, and Control. Project MAC (see Box 4.2), a project on time-shared computing at MIT, allocated roughly one-third of its $2.3 million annual budget to AI research, with few specific objectives.

Success in Speech Recognition

The history of speech recognition systems illustrates several themes common to AI research more generally: the long time periods between the initial research and development of successful products, and the interactions between AI researchers and the broader community of researchers in machine intelligence. Many capabilities of today's speech-recognition systems derive from the early work of statisticians, electrical engineers, information theorists, and pattern-recognition researchers. Another key theme is the complementary nature of government and industry funding. Industry supported work in speech recognition at least as far back as the 1950s, when researchers at Bell Laboratories worked on systems for recognizing individual spoken digits "zero" through "nine." Research in the area was boosted tremendously by DARPA in the 1970s.

DARPA established the Speech Understanding Research (SUR) program to develop a computer system that could understand continuous speech. Lawrence Roberts initiated this project in 1971 while he was director of IPTO, against the advice of a National Academy of Sciences committee. 19 Roberts wanted a system that could handle a vocabulary of 10,000 English words spoken by anyone. His advisory board, which included Allen Newell and J.C.R. Licklider, issued a report calling for an objective of 1,000 words spoken in a quiet room by a limited number of people, using a restricted subject vocabulary (Newell et al., 1971).

Roberts committed $3 million per year for 5 years, with the intention of pursuing a 5-year follow-on project. Major SUR project groups were established at CMU, SRI, MIT's Lincoln Laboratory, Systems Development Corporation (SDC), and Bolt, Beranek, and Newman (BBN). Smaller contracts were awarded to a few other institutions. Five years later, SUR products were demonstrated. CMU researchers demonstrated two systems, HARPY and HEARSAY-I, and BBN developed Hear What I Mean (HWIM). The system developed cooperatively by SRI and SDC was never tested (Green, 1988). The system that came the closest to satisfying the original project goals—and may have exceeded the benchmarks—was HARPY, but controversy arose within DARPA and the AI community about the way the tests were handled. Full details regarding the testing of system performance had not been worked out at the outset of the SUR program. 20 As a result, some researchers—including DARPA research managers—believed that the SUR program had failed to meet its objectives. DARPA terminated the program without funding the follow-on. 21 Nevertheless, industry groups, including those at IBM, continued to invest in this research area and made important contributions to the development of continuous speech recognition methods. 22

DARPA began funding speech recognition research on a large scale again in 1984 as part of the Strategic Computing Program (discussed later in this chapter) and continued funding research in this area well into the late 1990s. Many of the same institutions that had been part of the SUR program, including CMU, BBN, SRI, and MIT, participated in the new initiatives. Firms such as IBM and Dragon Systems also participated. As a result of the controversy over SUR testing, evaluation methods and criteria for these programs were carefully prescribed through mutual agreements between DARPA managers and the funded researchers. Some researchers have hailed this development and praised DARPA's role in benchmarking speech-recognition technology, not only for research purposes but also for the commercial market.

By holding annual system evaluations on carefully designed tasks and test materials, DARPA and the National Bureau of Standards (later the National Institute of Standards and Technology) led the standards-definition process, drawing the participation of not only government contractors but also industry and university groups from around the world, such as AT&T, Cambridge University (of the United Kingdom), and LIMSI (of France). The overall effect was the rapid adoption of the most successful techniques by every participant and quick migration of those techniques into products and services. Although it resulted in quick diffusion of successful techniques, this approach may also have narrowed the scope of approaches taken. Critics have seen this as symptomatic of a profound change in DARPA's philosophy that has reduced the emphasis on basic research.

DARPA's funding of research on understanding speech has been extremely important. First, it pushed the research frontiers of speech recognition and AI more generally. HEARSAY-II is particularly notable for the way it parsed information into independent knowledge sources, which in turn interacted with each other through a common database that CMU researchers labeled a "blackboard" (Engelmore et al., 1988). This blackboard method of information processing proved to be a significant advance in AI. Moreover, although early speech-recognition researchers appeared overly ambitious in incorporating syntax and semantics into their systems, others have recently begun to adopt this approach to improve statistically based speech-recognition technology.
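The blackboard idea is simple enough to sketch. In the toy Python example below, two invented knowledge sources watch a shared store and post hypotheses when their trigger conditions hold; HEARSAY-II's actual knowledge sources and scheduling were far more elaborate.

```python
# A toy version of the blackboard pattern HEARSAY-II made famous:
# independent knowledge sources watch a shared store and post hypotheses
# when their trigger conditions hold. Both sources and the data are
# invented for illustration and are far simpler than HEARSAY-II's.

blackboard = {"acoustic": "w-uh-n", "word": None, "phrase": None}

def word_hypothesizer(bb):
    # Posts a word-level hypothesis once acoustic evidence is present.
    if bb["acoustic"] and not bb["word"]:
        bb["word"] = "one"
        return True
    return False

def phrase_builder(bb):
    # Combines word hypotheses into a phrase-level hypothesis.
    if bb["word"] and not bb["phrase"]:
        bb["phrase"] = "number: " + bb["word"]
        return True
    return False

knowledge_sources = [phrase_builder, word_hypothesizer]

# Control loop: keep firing any source that can contribute, in any order,
# until a full pass makes no progress.
progress = True
while progress:
    progress = any(ks(blackboard) for ks in knowledge_sources)

print(blackboard)  # {'acoustic': 'w-uh-n', 'word': 'one', 'phrase': 'number: one'}
```

The point of the pattern is that no source calls another directly; coordination happens entirely through the shared blackboard.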

Perhaps more important, the results of this research have been incorporated into the products of established companies, such as IBM and BBN, as well as start-ups such as Nuance Communications (an SRI spinoff) and Dragon Systems. Microsoft Corporation, too, is incorporating speech recognition technology into its operating system (DARPA, 1997; McClain, 1998). The leading commercial speech-recognition program on the market today, the Dragon Systems software, traces its roots directly back to the work done at CMU between 1971 and 1975 as part of SUR (see Box 9.2). The DRAGON program developed in CMU's SUR project (the predecessor of the HARPY program) pioneered the use of techniques borrowed from mathematics and statistics (hidden Markov models) to recognize continuous speech (Baker, 1975). According to some scholars, the adoption of hidden Markov models by CMU's research team owes much to activities outside the AI field, such as research by engineers and statisticians with an interest in machine intelligence. 23
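Hidden Markov models treat speech as a sequence of unobserved states that emit observable acoustic evidence, so recognition reduces to asking which word model makes the evidence most probable. The Python sketch below runs the standard forward algorithm on a toy two-state model; the states, symbols, and every probability are invented for illustration.

```python
# A toy hidden Markov model and the forward algorithm, the statistical
# machinery DRAGON introduced to continuous speech recognition. The two
# states, the symbols, and all probabilities are invented for illustration.

states  = ("vowel", "consonant")
start_p = {"vowel": 0.5, "consonant": 0.5}
trans_p = {"vowel":     {"vowel": 0.3, "consonant": 0.7},
           "consonant": {"vowel": 0.7, "consonant": 0.3}}
emit_p  = {"vowel":     {"ah": 0.6, "t": 0.1, "n": 0.3},
           "consonant": {"ah": 0.1, "t": 0.5, "n": 0.4}}

def forward(observations):
    """Total probability of the observed symbols under the model,
    summing over all hidden state sequences."""
    alpha = {s: start_p[s] * emit_p[s][observations[0]] for s in states}
    for obs in observations[1:]:
        alpha = {s: sum(alpha[prev] * trans_p[prev][s] for prev in states)
                    * emit_p[s][obs]
                 for s in states}
    return sum(alpha.values())

# Recognition compares this likelihood across competing word models.
print(forward(["t", "ah", "n"]))
```

In a recognizer, one such model is trained per word or phoneme, and the model assigning the acoustic evidence the highest likelihood wins.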

Other examples of commercial success abound. Charles Schwab and Company adopted DARPA technology to develop its Voice Broker system, which provides stock quotes over the telephone. The system can recognize the names of 13,000 different securities as well as major regional U.S. accents. On the military side, DARPA provided translingual communication devices for use in Bosnia. These devices translated spoken English phrases into corresponding Serbo-Croatian or Russian phrases. The total market for these new personal-use voice recognition technologies is expected to reach about $4 billion in 2001 (Garfinkel, 1998).

Shift to Applied Research Increases Investment

Although most founders of the AI field continued to pursue basic questions of human and machine intelligence, some of their students and other second-generation researchers began to seek ways to use AI methods and approaches to tackle real-world problems. Their initiatives were important, not only in their own right, but also because they were indicative of a gradual but significant change in the funding environment toward more applied realms of research. The development of expert systems, such as DENDRAL at SAIL, provides but one example of this trend (see Box 9.3).

The promotion in 1969 of Lawrence Roberts to director of IPTO also contributed to a perceived tightening of control over AI research. Under Roberts, IPTO developed a formal AI program, which in turn was divided into formal subprograms (Norberg and O'Neill, 1996). The line-item budgeting of AI research inevitably led to greater scrutiny owing to reporting mechanisms and the need to justify programs to the DOD, the Administration, and the U.S. Congress. Consequently, researchers began to believe that they were being boxed in by IPTO and DARPA, and to a certain extent they were. The flow of DARPA's AI research money to CMU, MIT, and Stanford University did not cease or even diminish much, but the demand grew for interim reports and more tangible results.

External developments reinforced this shift. The most important was the passage of the Mansfield Amendment in 1969. 24 Passed during the Vietnam War amid growing public concern about the "military-industrial complex" and the domination of U.S. academic science by the military, the Mansfield Amendment restricted the DOD to supporting basic research that was of "direct and apparent" utility to specific military functions and operations. It brought about a swift decline in some of the military's support for basic research, often driving it toward the applied realm. 25 Roberts and his successors now had to justify AI research programs on the basis of immediate utility to the military mission. The move toward relevance spawned dissatisfaction among both the established pioneers of the AI field and its outside skeptics. 26

Another external development provided further impetus for change. In 1973, at the request of the British Science Research Council, Sir James Lighthill, the Lucasian Professor of Applied Mathematics at Cambridge University and a Fellow of the Royal Society of London, produced a survey that expressed considerable skepticism about AI in general and several of its research domains in particular. Despite having no expertise in AI himself, Lighthill suggested that any particular successes in AI had stemmed from modeling efforts in more traditional disciplines, not from AI per se. He singled out robotics research for especially sharp criticism. The Lighthill report raised questions about AI research funding in the United States and led DOD to establish a panel to assess DARPA's AI program.

Known as the American Study Group, the panel (which included some of AI's major research figures) raised some of the same questions as did Lighthill's report and served to inform George Heilmeier, a former research manager from RCA Corporation who was then assistant director of Defense R&D and later became director of DARPA. The Lighthill report and its U.S. equivalent led to a shifting of DARPA funds out of robotics research (hurting institutions such as SRI that had committed heavily to the area) and toward "mission-oriented direct research, rather than basic undirected research" (Fleck, 1982). 27

As a result of these forces, DARPA's emphasis on relevance in AI research grew during the late 1970s and 1980s. Despite the disgruntlement among some scientists, the changes led to increased funding—although not directly to widespread commercial success—for AI research. A magnet for these monies was the Strategic Computing Program (SCP), announced in 1983 (DARPA, 1983). DARPA committed $1 billion over the planned 10-year course of the program. The four main goals of the SCP were as follows:

  • Advance machine intelligence technology and high-performance computing, including speech recognition and understanding, natural-language computer interfaces, vision comprehension systems, and advanced expert systems development, and to do so by providing significant increases in computer performance, through parallel-computer architectures, software, and supporting microelectronics;
  • Transfer technology from DARPA-sponsored university research efforts to the defense industry through competitive research contracts, with industry and universities jointly participating;
  • Develop more new scientists in AI and high-performance computing through increased funding of graduate student research in these areas; and
  • Provide the supporting research infrastructure for AI research through advanced networking, new microcircuit fabrication facilities, advanced emulation facilities, and advanced symbolic processors (Kahn, 1988).

To achieve these goals, DARPA established three specific applications as R&D objectives: a pilot's associate for the Air Force, an autonomous land vehicle for the Army, and an aircraft battle management system for the Navy. The applications were intended to spark the military services' interest in developing AI technology based on fundamental research. The SCP differed from some other large-scale national efforts in that its goals were extremely ambitious, requiring fundamental advances in the underlying technology. (By contrast, efforts such as the Apollo space program were principally engineering projects drawing from an established scientific base [Office of Technology Assessment, 1985]). The SCP also differed from earlier large AI programs in that some 60 percent of its funds were committed to industry. However, of the 30 prime contractors for the SCP involved in software or AI research, more than 20 were established defense contractors (Goldstein, 1992).

The SCP significantly boosted overall federal funding for AI research but also altered its character. Between 1984 and 1988, total federal funding for AI research, excluding the SCP, nearly tripled from $57 million to $159 million (see Table 9.1). With support for the SCP included, federal funding increased from $106 million to $274 million. Because the SCP was budgeted as an applied program, it tipped the balance of federal funding toward applied research. Although DARPA's funding for basic AI research doubled from roughly $20 million to $40 million during this same period, the DOD's overall role in basic AI research declined (see Table 9.2). Meanwhile, it continued to play the dominant role in supporting applied research in AI (see Table 9.3). Although budget categorizations for programs such as the SCP are somewhat arbitrary and subject to political influence, researchers noted a change in DARPA's funding style.

The SCP also attracted a tremendous amount of industry investment and venture capital to AI research and development. Firms developing and selling expert systems entered the market, often basing their systems on the LISP machines developed by the AI community. Several new firms entered the market to design, make, and sell the very expensive LISP machines. Yet the rapid development of engineering workstations, especially those of Sun Microsystems, Inc., soon undermined the LISP machine industry. This segment of the market, which was clearly tied to the SCP, collapsed. Even with the development of expert-system shells to run on less-costly machines, doubts began to arise about the capabilities and flexibility of expert systems; this doubt hampered the commercialization of AI. In addition, commercial contractors had difficulty meeting the high-profile milestones of the major SCP projects because of difficulties with either the AI technologies themselves or their incorporation into larger systems. Such problems undermined the emergence of a clearly identifiable AI industry and contributed to a shift in emphasis in high-performance computing, away from AI and toward other grand challenges, such as weather modeling and prediction and scientific visualization.

[TABLE 9.1 Total Federal Funding for Artificial Intelligence Research (in millions of dollars), 1984-1988; TABLE 9.2 Federal Funding for Basic Research in Artificial Intelligence by Agency (in millions of dollars), 1984-1988; TABLE 9.3 Federal Funding for Applied Research in Artificial Intelligence by Agency (in millions of dollars), 1984-1988. Table data not reproduced here.]

Artificial Intelligence in the 1990s

Despite the commercial difficulties associated with the Strategic Computing Program, the AI-driven advances in rule-based reasoning systems (i.e., expert systems) and their successors—many of which were initiated with DARPA funding in the 1960s and 1970s—proved to be extremely valuable for the emerging national information infrastructure and electronic commerce. These advances, including probabilistic reasoning systems and Bayesian networks, natural language processing, and knowledge representation, brought AI out of the laboratory and into the marketplace. Paradoxically, the major commercial successes of AI research applications are mostly hidden from view today because they are embedded in larger software systems. None of these systems has demonstrated general human intelligence, but many have contributed to commercial and military objectives.

An example is the Lumiere project initiated at Microsoft Research in 1993. Lumiere monitors a computer user's actions to determine when assistance may be needed. It continuously follows the user's goals and tasks as software programs run, using Bayesian networks to generate a probability distribution over topic areas that might pose difficulties and calculating the probability that the user will not mind being bothered with assistance. Lumiere forms the basis of the "office assistant" that monitors the behavior of users of Microsoft's Office 97 and assists them with applications. Lumiere is based on earlier work on probabilistic models of user goals to support the display of customized information to pilots of commercial aircraft, as well as user modeling for display control for flight engineers at NASA's Mission Control Center. These earlier projects, sponsored by the NASA-Ames Research Center and NASA's Johnson Space Center, were undertaken while some of the Lumiere researchers were students at Stanford University. 28
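The mechanism is ordinary Bayesian updating: each observed action shifts a probability distribution over what the user might be struggling with, and assistance is offered only when the estimated need is high enough to justify the interruption. The Python sketch below shows a single such update; the topics, numbers, and threshold are invented for illustration and are far cruder than Lumiere's actual Bayesian networks.

```python
# A toy Bayesian update of the kind Lumiere performed: observed user
# actions shift a probability distribution over what the user might be
# struggling with. Topics, numbers, and the threshold are all invented.

# Prior belief over difficulty topics.
prior = {"printing": 0.2, "formatting": 0.3, "no difficulty": 0.5}

# P(observed actions | topic), e.g., the user paused and reopened a menu.
likelihood = {"printing": 0.7, "formatting": 0.4, "no difficulty": 0.1}

# Bayes' rule: posterior is proportional to prior times likelihood.
unnormalized = {t: prior[t] * likelihood[t] for t in prior}
total = sum(unnormalized.values())
posterior = {t: p / total for t, p in unnormalized.items()}
print(posterior)

# Offer help only when the estimated need outweighs the cost of intruding.
if 1 - posterior["no difficulty"] > 0.5:
    print("offer assistance")
```

A full Bayesian network chains many such updates over structured variables, but the prior-times-likelihood step is the same.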

Patent trends suggest that AI technology is being incorporated into growing numbers of commercial products. The number of patents in AI, expert systems, and neural networks jumped from fewer than 20 in 1988 to more than 120 in 1996, and the number of patents citing patents in these areas grew from about 140 to almost 800. 29 The number of AI-related patents (including patents in AI, expert systems, neural networks, intelligent systems, adaptive agents, and adaptive systems) issued annually in the United States increased exponentially from approximately 100 in 1985 to more than 900 in 1996 (see Figure 9.1). Changes in the U.S. Patent and Trademark Office's rules on the patentability of algorithms have no doubt contributed to this growth, as has the increased commercial value of AI technology. The vast majority of these patents are held by private firms, including large manufacturers of electronics and computers, as well as major users of information technology (see Table 9.4). These data indicate that AI technology is likely to be embedded in larger systems, from computers to cars to manufacturing lines, rather than used as stand-alone products.

[Figure 9.1: Artificial-intelligence-related patents awarded per year, 1976-1996. Source: Compiled from data in the U.S. Patent and Trademark Office's U.S. Patent Bibliographic Database, available online at <http://patents.uspto.gov>, and the IBM Patent Server, available online at <http://patent.womplex.ibm.com>.]

[TABLE 9.4 Leading Holders of Patents Related to Artificial Intelligence, 1976-1997. Table data not reproduced here.]

A central problem confronting the wider commercialization of AI today revolves around integration. Both the software and the hardware developed by the AI research community were so advanced that their integration into older, more conservative computer and organizational systems proved to be an enormous challenge. As one observer has noted, "Because AI was a leading-edge technology, it arrived in this world too early. As a consequence, the AI application community had to ride many waves of technological quick fixes and fads. . . . Many of these integration problems are now being addressed head on by a broad community of information technologists using Internet-based frameworks such as CORBA [common object request broker architecture] and the World Wide Web" (Shrobe, 1996).

The rapid development of computer hardware and software, the networking of information systems, and the need to make these systems function smoothly and intelligently are leading to wide diffusion of AI knowledge and technology across the infrastructure of the information age. Federal funding reflects these changes (see Box 9.4). Meanwhile, much of the knowledge acquired through AI research over the years is now being brought to bear on real-world problems and applications while also being deepened and broadened. The economic and social benefits are enormous. Technologies such as expert systems, natural-language processing, and computer vision are now used in a range of applications, such as decision aids, planning tools, speech-recognition systems, pattern recognition, knowledge representation, and computer-controlled robots. 30

AI technologies help industry diagnose machine failures, design new products, and plan, simulate, and schedule production. They help scientists search large databases and decode DNA sequences, and they help doctors make more-informed decisions about diagnosis and treatment of particular ailments. AI technologies also make the larger systems into which they are incorporated easier to use and more productive. These benefits are relatively easy to identify, but measuring them is difficult.

Federal investments in AI have produced a number of notable results, some envisioned by the founders of the field and others probably not even imagined. Without question, DARPA's generous, enduring funding of various aspects of AI research created a scientific research discipline that meets the standard criteria of discipline formation laid out by sociologists of science. 31 At least three major academic centers of excellence and several other significant centers were established, and they produced a large number of graduates with Ph.D.s who diffused AI research to other research universities, cross-pollinated the major research centers, and moved AI methods into commercial markets. (Figure 9.2 shows the production of Ph.D. degrees in AI and related fields at U.S. universities; Figure 9.3 compares Ph.D. production in AI and related disciplines to degree production in computer science more broadly.) In sum, the returns on the public investment are clearly enormous, both in matters of national security (which are beyond the scope of this study) 32 and in contributions to the U.S. economy.

[Figure 9.2: Ph.D. dissertations submitted annually in artificial intelligence and related fields, 1956-1995. Figure 9.3: Number of Ph.D. dissertations submitted annually in AI and related fields and in computer science, 1956-1995. Source: Data from Dissertation Abstracts Online, available through subscription to the OCLC FirstSearch database from UMI Company.]

Lessons from History

As this case study demonstrates, federal funding is critical in establishing new disciplines because it can sustain long-term, high-risk research areas and nurture a critical mass of technical and human resources. DARPA helped legitimize the AI field and served as the major source of research funds beginning in the 1960s. It created centers of excellence that evolved into today's major computer science research centers. This support was particularly critical given that some objectives took much longer to realize than was originally anticipated.

A diversity of approaches to research problems can be critical to the development of practical tools. A prime example is the field of speech recognition, in which the most effective products to date have used techniques borrowed from the mathematics and statistics communities rather than more traditional AI techniques. This outcome could not have been predicted and demonstrates the importance of supporting competing approaches, even those outside the mainstream.

Federal funding has promoted innovation in commercial products such as expert systems, the establishment of new companies, the growth of billion-dollar markets for technologies such as speech recognition, and the development of valuable military applications. AI technologies often enhance the performance of the larger systems into which they are increasingly incorporated.

There is a creative tension between fundamental research and attempts to create functional devices. Original attempts to design intelligent, thinking machines motivated fundamental work that created a base of knowledge. Initial advances achieved through research were not sufficient to produce, by themselves, commercial products, but they could be integrated with other components and exploited in different applications. Efforts to apply AI technology often failed initially because they uncovered technical problems that had not yet been adequately addressed. Applications were fed back into the research process, thus motivating inquiries into new areas.




Artificial Intelligence and the Future of Humans

Experts say the rise of artificial intelligence will make most people better off over the next decade, but many have concerns about how advances in AI will affect what it means to be human, to be productive and to exercise free will.


Digital life is augmenting human capacities and disrupting eons-old human activities. Code-driven systems have spread to more than half of the world’s inhabitants in ambient information and connectivity, offering previously unimagined opportunities and unprecedented threats. As emerging algorithm-driven artificial intelligence (AI) continues to spread, will people be better off than they are today?

Some 979 technology pioneers, innovators, developers, business and policy leaders, researchers and activists answered this question in a canvassing of experts conducted in the summer of 2018.

The experts predicted networked artificial intelligence will amplify human effectiveness but also threaten human autonomy, agency and capabilities. They spoke of the wide-ranging possibilities: that computers might match or even exceed human intelligence and capabilities on tasks such as complex decision-making, reasoning and learning, sophisticated analytics and pattern recognition, visual acuity, speech recognition and language translation. They said “smart” systems in communities, in vehicles, in buildings and utilities, on farms and in business processes will save time, money and lives and offer opportunities for individuals to enjoy a more-customized future.

Many focused their optimistic remarks on health care and the many possible applications of AI in diagnosing and treating patients or helping senior citizens live fuller and healthier lives. They were also enthusiastic about AI’s role in contributing to broad public-health programs built around massive amounts of data that may be captured in the coming years about everything from personal genomes to nutrition. Additionally, a number of these experts predicted that AI would abet long-anticipated changes in formal and informal education systems.

Yet, most experts, regardless of whether they are optimistic or not, expressed concerns about the long-term impact of these new tools on the essential elements of being human. All respondents in this non-scientific canvassing were asked to elaborate on why they felt AI would leave people better off or not. Many shared deep worries, and many also suggested pathways toward solutions. The main themes they sounded about threats and remedies are outlined in the accompanying table.

Specifically, participants were asked to consider the following:

“Please think forward to the year 2030. Analysts expect that people will become even more dependent on networked artificial intelligence (AI) in complex digital systems. Some say we will continue on the historic arc of augmenting our lives with mostly positive results as we widely implement these networked tools. Some say our increasing dependence on these AI and related systems is likely to lead to widespread difficulties.

Our question: By 2030, do you think it is most likely that advancing AI and related technology systems will enhance human capacities and empower them? That is, most of the time, will most people be better off than they are today? Or is it most likely that advancing AI and related technology systems will lessen human autonomy and agency to such an extent that most people will not be better off than the way things are today?”

Overall, and despite the downsides they fear, 63% of respondents in this canvassing said they are hopeful that most individuals will be mostly better off in 2030, and 37% said people will not be better off.

A number of the thought leaders who participated in this canvassing said humans’ expanding reliance on technological systems will only go well if close attention is paid to how these tools, platforms and networks are engineered, distributed and updated. Some of the powerful, overarching answers included those from:

Sonia Katyal , co-director of the Berkeley Center for Law and Technology and a member of the inaugural U.S. Commerce Department Digital Economy Board of Advisors, predicted, “In 2030, the greatest set of questions will involve how perceptions of AI and their application will influence the trajectory of civil rights in the future. Questions about privacy, speech, the right of assembly and technological construction of personhood will all re-emerge in this new AI context, throwing into question our deepest-held beliefs about equality and opportunity for all. Who will benefit and who will be disadvantaged in this new world depends on how broadly we analyze these questions today, for the future.”


Erik Brynjolfsson , director of the MIT Initiative on the Digital Economy and author of “Machine, Platform, Crowd: Harnessing Our Digital Future,” said, “AI and related technologies have already achieved superhuman performance in many areas, and there is little doubt that their capabilities will improve, probably very significantly, by 2030. … I think it is more likely than not that we will use this power to make the world a better place. For instance, we can virtually eliminate global poverty, massively reduce disease and provide better education to almost everyone on the planet. That said, AI and ML [machine learning] can also be used to increasingly concentrate wealth and power, leaving many people behind, and to create even more horrifying weapons. Neither outcome is inevitable, so the right question is not ‘What will happen?’ but ‘What will we choose to do?’ We need to work aggressively to make sure technology matches our values. This can and must be done at all levels, from government, to business, to academia, and to individual choices.”

Bryan Johnson , founder and CEO of Kernel, a leading developer of advanced neural interfaces, and OS Fund, a venture capital firm, said, “I strongly believe the answer depends on whether we can shift our economic systems toward prioritizing radical human improvement and staunching the trend toward human irrelevance in the face of AI. I don’t mean just jobs; I mean true, existential irrelevance, which is the end result of not prioritizing human well-being and cognition.”

Marina Gorbis, executive director of the Institute for the Future, said, “Without significant changes in our political economy and data governance regimes [AI] is likely to create greater economic inequalities, more surveillance and more programmed and non-human-centric interactions. Every time we program our environments, we end up programming ourselves and our interactions. Humans have to become more standardized, removing serendipity and ambiguity from our interactions. And this ambiguity and complexity is the essence of being human.”

Judith Donath , author of “The Social Machine, Designs for Living Online” and faculty fellow at Harvard University’s Berkman Klein Center for Internet & Society, commented, “By 2030, most social situations will be facilitated by bots – intelligent-seeming programs that interact with us in human-like ways. At home, parents will engage skilled bots to help kids with homework and catalyze dinner conversations. At work, bots will run meetings. A bot confidant will be considered essential for psychological well-being, and we’ll increasingly turn to such companions for advice ranging from what to wear to whom to marry. We humans care deeply about how others see us – and the others whose approval we seek will increasingly be artificial. By then, the difference between humans and bots will have blurred considerably. Via screen and projection, the voice, appearance and behaviors of bots will be indistinguishable from those of humans, and even physical robots, though obviously non-human, will be so convincingly sincere that our impression of them as thinking, feeling beings, on par with or superior to ourselves, will be unshaken. Adding to the ambiguity, our own communication will be heavily augmented: Programs will compose many of our messages and our online/AR appearance will [be] computationally crafted. (Raw, unaided human speech and demeanor will seem embarrassingly clunky, slow and unsophisticated.) Aided by their access to vast troves of data about each of us, bots will far surpass humans in their ability to attract and persuade us. Able to mimic emotion expertly, they’ll never be overcome by feelings: If they blurt something out in anger, it will be because that behavior was calculated to be the most efficacious way of advancing whatever goals they had ‘in mind.’ But what are those goals? Artificially intelligent companions will cultivate the impression that social goals similar to our own motivate them – to be held in good regard, whether as a beloved friend, an admired boss, etc. But their real collaboration will be with the humans and institutions that control them. Like their forebears today, these will be sellers of goods who employ them to stimulate consumption and politicians who commission them to sway opinions.”

Andrew McLaughlin , executive director of the Center for Innovative Thinking at Yale University, previously deputy chief technology officer of the United States for President Barack Obama and global public policy lead for Google, wrote, “2030 is not far in the future. My sense is that innovations like the internet and networked AI have massive short-term benefits, along with long-term negatives that can take decades to be recognizable. AI will drive a vast range of efficiency optimizations but also enable hidden discrimination and arbitrary penalization of individuals in areas like insurance, job seeking and performance assessment.”

Michael M. Roberts , first president and CEO of the Internet Corporation for Assigned Names and Numbers (ICANN) and Internet Hall of Fame member, wrote, “The range of opportunities for intelligent agents to augment human intelligence is still virtually unlimited. The major issue is that the more convenient an agent is, the more it needs to know about you – preferences, timing, capacities, etc. – which creates a tradeoff of more help requires more intrusion. This is not a black-and-white issue – the shades of gray and associated remedies will be argued endlessly. The record to date is that convenience overwhelms privacy. I suspect that will continue.”

danah boyd , a principal researcher for Microsoft and founder and president of the Data & Society Research Institute, said, “AI is a tool that will be used by humans for all sorts of purposes, including in the pursuit of power. There will be abuses of power that involve AI, just as there will be advances in science and humanitarian efforts that also involve AI. Unfortunately, there are certain trend lines that are likely to create massive instability. Take, for example, climate change and climate migration. This will further destabilize Europe and the U.S., and I expect that, in panic, we will see AI be used in harmful ways in light of other geopolitical crises.”

Amy Webb, founder of the Future Today Institute and professor of strategic foresight at New York University, commented, “The social safety net structures currently in place in the U.S. and in many other countries around the world weren’t designed for our transition to AI. The transition through AI will last the next 50 years or more. As we move farther into this third era of computing, and as every single industry becomes more deeply entrenched with AI systems, we will need new hybrid-skilled knowledge workers who can operate in jobs that have never needed to exist before. We’ll need farmers who know how to work with big data sets. Oncologists trained as roboticists. Biologists trained as electrical engineers. We won’t need to prepare our workforce just once, with a few changes to the curriculum. As AI matures, we will need a responsive workforce, capable of adapting to new processes, systems and tools every few years. The need for these fields will arise faster than our labor departments, schools and universities are acknowledging. It’s easy to look back on history through the lens of the present – and to overlook the social unrest caused by widespread technological unemployment. We need to address a difficult truth that few are willing to utter aloud: AI will eventually cause a large number of people to be permanently out of work. Just as generations before witnessed sweeping changes during and in the aftermath of the Industrial Revolution, the rapid pace of technology will likely mean that Baby Boomers and the oldest members of Gen X – especially those whose jobs can be replicated by robots – won’t be able to retrain for other kinds of work without a significant investment of time and effort.”

Barry Chudakov , founder and principal of Sertain Research, commented, “By 2030 the human-machine/AI collaboration will be a necessary tool to manage and counter the effects of multiple simultaneous accelerations: broad technology advancement, globalization, climate change and attendant global migrations. In the past, human societies managed change through gut and intuition, but as Eric Teller, CEO of Google X, has said, ‘Our societal structures are failing to keep pace with the rate of change.’ To keep pace with that change and to manage a growing list of ‘wicked problems’ by 2030, AI – or using Joi Ito’s phrase, extended intelligence – will value and revalue virtually every area of human behavior and interaction. AI and advancing technologies will change our response framework and time frames (which in turn, changes our sense of time). Where once social interaction happened in places – work, school, church, family environments – social interactions will increasingly happen in continuous, simultaneous time. If we are fortunate, we will follow the 23 Asilomar AI Principles outlined by the Future of Life Institute and will work toward ‘not undirected intelligence but beneficial intelligence.’ Akin to nuclear deterrence stemming from mutually assured destruction, AI and related technology systems constitute a force for a moral renaissance. We must embrace that moral renaissance, or we will face moral conundrums that could bring about human demise. … My greatest hope for human-machine/AI collaboration constitutes a moral and ethical renaissance – we adopt a moonshot mentality and lock arms to prepare for the accelerations coming at us. My greatest fear is that we adopt the logic of our emerging technologies – instant response, isolation behind screens, endless comparison of self-worth, fake self-presentation – without thinking or responding smartly.”

John C. Havens , executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and the Council on Extended Intelligence, wrote, “Now, in 2018, a majority of people around the world can’t access their data, so any ‘human-AI augmentation’ discussions ignore the critical context of who actually controls people’s information and identity. Soon it will be extremely difficult to identify any autonomous or intelligent systems whose algorithms don’t interact with human data in one form or another.”


Batya Friedman , a human-computer interaction professor at the University of Washington’s Information School, wrote, “Our scientific and technological capacities have and will continue to far surpass our moral ones – that is our ability to use wisely and humanely the knowledge and tools that we develop. … Automated warfare – when autonomous weapons kill human beings without human engagement – can lead to a lack of responsibility for taking the enemy’s life or even knowledge that an enemy’s life has been taken. At stake is nothing less than what sort of society we want to live in and how we experience our humanity.”

Greg Shannon , chief scientist for the CERT Division at Carnegie Mellon University, said, “Better/worse will appear 4:1 with the long-term ratio 2:1. AI will do well for repetitive work where ‘close’ will be good enough and humans dislike the work. … Life will definitely be better as AI extends lifetimes, from health apps that intelligently ‘nudge’ us to health, to warnings about impending heart/stroke events, to automated health care for the underserved (remote) and those who need extended care (elder care). As to liberty, there are clear risks. AI affects agency by creating entities with meaningful intellectual capabilities for monitoring, enforcing and even punishing individuals. Those who know how to use it will have immense potential power over those who don’t/can’t. Future happiness is really unclear. Some will cede their agency to AI in games, work and community, much like the opioid crisis steals agency today. On the other hand, many will be freed from mundane, unengaging tasks/jobs. If elements of community happiness are part of AI objective functions, then AI could catalyze an explosion of happiness.”

Kostas Alexandridis , author of “Exploring Complex Dynamics in Multi-agent-based Intelligent Systems,” predicted, “Many of our day-to-day decisions will be automated with minimal intervention by the end-user. Autonomy and/or independence will be sacrificed and replaced by convenience. Newer generations of citizens will become more and more dependent on networked AI structures and processes. There are challenges that need to be addressed in terms of critical thinking and heterogeneity. Networked interdependence will, more likely than not, increase our vulnerability to cyberattacks. There is also a real likelihood that there will exist sharper divisions between digital ‘haves’ and ‘have-nots,’ as well as among technologically dependent digital infrastructures. Finally, there is the question of the new ‘commanding heights’ of the digital network infrastructure’s ownership and control.”

Oscar Gandy , emeritus professor of communication at the University of Pennsylvania, responded, “We already face an ungranted assumption when we are asked to imagine human-machine ‘collaboration.’ Interaction is a bit different, but still tainted by the grant of a form of identity – maybe even personhood – to machines that we will use to make our way through all sorts of opportunities and challenges. The problems we will face in the future are quite similar to the problems we currently face when we rely upon ‘others’ (including technological systems, devices and networks) to acquire things we value and avoid those other things (that we might, or might not be aware of).”

James Scofield O’Rourke , a professor of management at the University of Notre Dame, said, “Technology has, throughout recorded history, been a largely neutral concept. The question of its value has always been dependent on its application. For what purpose will AI and other technological advances be used? Everything from gunpowder to internal combustion engines to nuclear fission has been applied in both helpful and destructive ways. Assuming we can contain or control AI (and not the other way around), the answer to whether we’ll be better off depends entirely on us (or our progeny). ‘The fault, dear Brutus, is not in our stars, but in ourselves, that we are underlings.’”

Simon Biggs, a professor of interdisciplinary arts at the University of Edinburgh, said, “AI will function to augment human capabilities. The problem is not with AI but with humans. As a species we are aggressive, competitive and lazy. We are also empathic, community minded and (sometimes) self-sacrificing. We have many other attributes. These will all be amplified. Given historical precedent, one would have to assume it will be our worst qualities that are augmented. My expectation is that in 2030 AI will be in routine use to fight wars and kill people, far more effectively than we can currently kill. As societies we will be less affected by this than we currently are, as we will not be doing the fighting and killing ourselves. Our capacity to modify our behaviour, subject to empathy and an associated ethical framework, will be reduced by the disassociation between our agency and the act of killing. We cannot expect our AI systems to be ethical on our behalf – they won’t be, as they will be designed to kill efficiently, not thoughtfully. My other primary concern is to do with surveillance and control. The advent of China’s Social Credit System (SCS) is an indicator of what is likely to come. We will exist within an SCS as AI constructs hybrid instances of ourselves that may or may not resemble who we are. But our rights and affordances as individuals will be determined by the SCS. This is the Orwellian nightmare realised.”

Mark Surman, executive director of the Mozilla Foundation, responded, “AI will continue to concentrate power and wealth in the hands of a few big monopolies based in the U.S. and China. Most people – and parts of the world – will be worse off.”

William Uricchio, media scholar and professor of comparative media studies at MIT, commented, “AI and its related applications face three problems: development at the speed of Moore’s Law, development in the hands of a technological and economic elite, and development without benefit of an informed or engaged public. The public is reduced to a collective of consumers awaiting the next technology. Whose notion of ‘progress’ will prevail? We have ample evidence of AI being used to drive profits, regardless of implications for long-held values; to enhance governmental control and even score citizens’ ‘social credit’ without input from citizens themselves. Like technologies before it, AI is agnostic. Its deployment rests in the hands of society. But absent an AI-literate public, the decision of how best to deploy AI will fall to special interests. Will this mean equitable deployment, the amelioration of social injustice and AI in the public service? Because the answer to this question is social rather than technological, I’m pessimistic. The fix? We need to develop an AI-literate public, which means focused attention in the educational sector and in public-facing media. We need to assure diversity in the development of AI technologies. And until the public, its elected representatives and their legal and regulatory regimes can get up to speed with these fast-moving developments we need to exercise caution and oversight in AI’s development.”

The remainder of this report is divided into three sections that draw from hundreds of additional respondents’ hopeful and critical observations: 1) concerns about human-AI evolution, 2) suggested solutions to address AI’s impact, and 3) expectations of what life will be like in 2030, including respondents’ positive outlooks on the quality of life and the future of work, health care and education. Some responses are lightly edited for style.



Artificial intelligence in academic writing: a paradigm-shifting technological advance

Roei Golan, Rohit Reddy, Akhil Muthigi & Ranjith Ramasamy

Nature Reviews Urology, volume 20, pages 327–328 (2023)

Artificial intelligence (AI) has rapidly become one of the most important and transformative technologies of our time, with applications in virtually every field and industry. Among these applications, academic writing is one of the areas that has experienced perhaps the most rapid development and uptake of AI-based tools and methodologies. We argue that the use of AI-based tools for scientific writing should be widely adopted.



Acknowledgements

The manuscript was edited for grammar and structure using the advanced language model ChatGPT. The authors thank S. Verma for addressing inquiries related to artificial intelligence.


Related links

ChatGPT: https://chat.openai.com/

Cohere: https://cohere.ai/

CoSchedule Headline Analyzer: https://coschedule.com/headline-analyzer

DALL-E 2: https://openai.com/dall-e-2/

Elicit: https://elicit.org/

Penelope.ai: https://www.penelope.ai/

Quillbot: https://quillbot.com/

Semantic Scholar: https://www.semanticscholar.org/

Wordtune by AI21 Labs: https://www.wordtune.com/

Writefull: https://www.writefull.com/


Tzu Chi Medical Journal, volume 32, issue 4 (October–December 2020)

The impact of artificial intelligence on human society and bioethics

Michael Cheng-Tek Tai

Department of Medical Sociology and Social Work, College of Medicine, Chung Shan Medical University, Taichung, Taiwan

Artificial intelligence (AI), known to some as Industrial Revolution (IR) 4.0, is going to change not only the way we do things and how we relate to others, but also what we know about ourselves. This article first examines what AI is, then discusses its impact on the industrial, social, and economic changes facing humankind in the 21st century, and finally proposes a set of principles for AI bioethics. The IR of the 18th century (IR 1.0) impelled enormous social change without directly complicating human relationships. Modern AI, by contrast, has a tremendous impact both on how we do things and on how we relate to one another. Facing this challenge, new principles of AI bioethics must be considered and developed to provide guidelines for AI technology to observe, so that the world can benefit from the progress of this new intelligence.

WHAT IS ARTIFICIAL INTELLIGENCE?

Artificial intelligence (AI) has many different definitions. Some see it as technology created to allow computers and machines to function intelligently; some see it as a machine that replaces human labor to deliver faster and more effective results; others see it as “a system” with the ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation [1].

Despite the different definitions, the common understanding is that AI involves machines and computers that help humankind solve problems and streamline working processes. In short, it is an intelligence designed by humans and demonstrated by machines. The term AI describes human-made tools that emulate the “cognitive” abilities of the natural intelligence of human minds [2].

Along with the rapid development of cybernetic technology in recent years, AI has appeared in almost every circle of our lives. Some of it may no longer even be regarded as AI because it has become so common in daily life, such as optical character recognition or Siri (speech interpretation and recognition interface), the voice assistant used to search for information on a computer [3].

DIFFERENT TYPES OF ARTIFICIAL INTELLIGENCE

From the functions and abilities provided by AI, we can distinguish two different types. The first is weak AI, also known as narrow AI, which is designed to perform a narrow task such as facial recognition, an internet search, or driving a car. Many currently existing systems that claim to use “AI” likely operate as weak AI focused on a narrowly defined specific function. Although weak AI seems helpful to human living, some still think it could be dangerous: when it malfunctions, weak AI could cause disruptions in the electric grid or damage nuclear power plants.

The long-term goal of many researchers is to create strong AI, or artificial general intelligence (AGI): the speculative intelligence of a machine that has the capacity to understand or learn any intellectual task a human being can, and thus to assist humans in unraveling the problems that confront them. Narrow AI may already outperform humans at specific tasks such as playing chess or solving equations, but AGI could outperform humans at nearly every cognitive task.

Strong AI is a different conception of AI: that a machine could be programmed to actually be a human mind, to be intelligent at whatever it is commanded to attempt, and even to have perception, beliefs, and other cognitive capacities normally ascribed only to humans [4].

In summary, we can see these different functions of AI [5, 6]:

  • Automation: what makes a system or process function automatically
  • Machine learning and vision: the science of getting a computer to predict and analyze through deep learning, and to see through a camera, analog-to-digital conversion, and digital signal processing
  • Natural language processing: the processing of human language by a computer program, as in spam detection or instant translation between languages to help humans communicate (a minimal example is sketched after this list)
  • Robotics: a field of engineering focused on the design and manufacture of robots, which perform tasks for human convenience or tasks too difficult or dangerous for humans, and which can operate without stopping, as on assembly lines
  • Self-driving cars: the use of a combination of computer vision, image recognition, and deep learning to build automated control of a vehicle.
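
To make the natural language processing item above concrete, here is a minimal sketch of a spam detector, the textbook example that item mentions. It is illustrative only: the four training messages and the scikit-learn bag-of-words pipeline are assumptions chosen for brevity, not a description of any production filter.

```python
# Minimal spam-detection sketch (illustrative only; toy data invented).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

messages = [
    "Win a free prize now",                  # spam
    "Lowest price guaranteed, click here",   # spam
    "Meeting moved to 3pm tomorrow",         # not spam
    "Can you review my draft tonight?",      # not spam
]
labels = ["spam", "spam", "ham", "ham"]

# Bag-of-words features plus a naive Bayes classifier: the classic
# baseline for text-categorization tasks such as spam filtering.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(messages, labels)

print(model.predict(["free prize, click now"]))       # expected: ['spam']
print(model.predict(["agenda for the 3pm meeting"]))  # expected: ['ham']
```

A real filter would train on millions of labeled messages, but the division of labor is the same: the vectorizer turns text into word counts, and the classifier learns which counts correlate with each label.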

DO HUMAN BEINGS REALLY NEED ARTIFICIAL INTELLIGENCE?

Is AI really needed in human society? It depends. If humans opt for a faster, more effective way to complete their work that runs constantly without a break, yes, it is. If humankind is satisfied with a natural way of living, without excessive desire to conquer the order of nature, it is not. History tells us that humans are always looking for something faster, easier, more effective, and more convenient to finish the task at hand; this pressure for improvement motivates humankind to look for new and better ways of doing things. As Homo sapiens, humans discovered that tools could ease many hardships of daily living, and that with the tools they invented they could complete work better, faster, and smarter. The drive to create new things became the engine of human progress. We enjoy a much easier and more leisurely life today because of the contributions of technology: human society has used tools since the beginning of civilization, and progress has depended on them. People living in the 21st century do not have to work as hard as their forefathers did, because they have machines to work for them. All of this seems well and good, but a warning came early in the 20th century as technology kept developing: Aldous Huxley cautioned in his book Brave New World that, with developments such as genetic technology, humans might step into a world in which we create a monster, or a superhuman.

Up-to-date AI is also breaking into the healthcare industry, assisting doctors in diagnosing illness, finding the sources of disease, suggesting treatments, performing surgery, and predicting whether an illness is life-threatening [7]. A recent study by surgeons at the Children’s National Medical Center in Washington demonstrated surgery with an autonomous robot: the team supervised the robot as it performed soft-tissue surgery, stitching together a pig’s bowel, and, the team claimed, the robot did the job better than a human surgeon [8, 9]. This demonstrates that robotically assisted surgery can overcome the limitations of pre-existing minimally invasive procedures and enhance the capacities of surgeons performing open surgery.

Above all, we see high-profile examples of AI in autonomous vehicles (such as drones and self-driving cars), medical diagnosis, creating art, playing games (such as chess or Go), search engines (such as Google Search), online assistants (such as Siri), image recognition in photographs, spam filtering, predicting flight delays, and so on. All of these have made human life so much easier and more convenient that we take them for granted. AI may not be absolutely necessary, but it has become so indispensable that without it our world would today be in chaos in many ways.

THE IMPACT OF ARTIFICIAL INTELLIGENCE ON HUMAN SOCIETY

Negative impact

Questions have been asked: with the progressive development of AI, will human labor no longer be needed, since everything can be done mechanically? Will humans become lazier and eventually degrade to the point that we return to a primitive form of being? Evolution takes eons, so we would not notice the backsliding of humankind. But what if AI becomes so powerful that it can program itself to take charge and disobey the orders of its master, humankind?

Let us look at the negative impacts AI may have on human society [10, 11]:

  • A huge social change that disrupts the way we live in the human community will occur. Humankind has had to be industrious to make its living, but with the service of AI we can simply program a machine to do things for us without even lifting a tool. Human closeness will gradually diminish as AI replaces the need for people to meet face to face to exchange ideas; AI will stand between people as personal gatherings are no longer needed for communication
  • Unemployment comes next, because many jobs will be replaced by machinery. Today many automobile assembly lines are filled with machines and robots, forcing traditional workers out of their jobs; even in supermarkets, store clerks are no longer needed, because digital devices can take over human labor
  • Wealth inequality will grow, as the investors in AI take the major share of the earnings. The gap between rich and poor will widen, and the so-called “M-shaped” wealth distribution will become more pronounced
  • New issues surface, not only socially but within AI itself: an AI trained to perform a given task may eventually reach a stage over which humans have no control, creating unanticipated problems and consequences. Once loaded with all the needed algorithms, an AI may automatically function on its own course, ignoring the commands of its human controller
  • The human masters who create AI may build in racial bias or egocentric aims to harm certain people or things. For instance, the United Nations has voted to limit the spread of nuclear weapons for fear of their indiscriminate use to destroy humankind or to target certain races or regions for domination. Likewise, AI could be programmed to target certain races or specified objects to carry out a programmer’s command of destruction, creating world disaster.

Positive impact

There are, however, many positive impacts on humans as well, especially in the field of healthcare. AI gives computers the capacity to learn, reason, and apply logic. Scientists, medical researchers, clinicians, mathematicians, and engineers, working together, can design AI aimed at medical diagnosis and treatment, offering reliable and safe systems of healthcare delivery. As health professionals and medical researchers endeavor to find new and efficient ways of treating disease, digital computers can assist in analysis, and robotic systems can be created to perform delicate medical procedures with precision. Here we see the contributions of AI to healthcare [7, 11]:

Fast and accurate diagnostics

IBM’s Watson computer has been used for diagnosis with fascinating results: load the data into the computer, and it will instantly offer its diagnosis. AI can also propose various treatments for physicians to consider. The procedure is something like this: the digital results of a physical examination are loaded into the computer, which weighs all possibilities, automatically determines whether the patient suffers from some deficiency or illness, and even suggests the kinds of treatment available.
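
As a toy illustration of that load-the-results, get-a-suggestion workflow, the sketch below trains a small decision tree on invented examination values. Everything here (the features, values, and labels) is an assumption made for demonstration; it is not how Watson works, and any real system keeps the physician in the loop.

```python
# Toy diagnostic-suggestion sketch; all values and labels are invented.
from sklearn.tree import DecisionTreeClassifier

# Each record: [fasting glucose (mg/dL), systolic BP (mmHg), BMI]
records = [
    [90, 115, 22], [95, 120, 24],     # labeled "healthy"
    [150, 130, 31], [165, 145, 33],   # labeled "diabetes-risk"
]
diagnoses = ["healthy", "healthy", "diabetes-risk", "diabetes-risk"]

model = DecisionTreeClassifier(max_depth=2).fit(records, diagnoses)

new_patient = [[155, 138, 30]]
print(model.predict(new_patient))        # suggested label, for physician review
print(model.predict_proba(new_patient))  # model confidence, also for review
```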

Socially therapeutic robots

Pets are recommended to senior citizens to ease tension, reduce blood pressure, anxiety, and loneliness, and increase social interaction. Now robots have been suggested to accompany lonely older people, and even to help with house chores. Therapeutic robots and socially assistive robot technology help improve the quality of life of seniors and the physically challenged [12].

Reduce errors related to human fatigue

Human error in the workforce is inevitable and often costly; the greater the fatigue, the higher the risk of error. AI technology, however, does not suffer from fatigue or emotional distraction. It reduces errors and can accomplish tasks faster and more accurately.

Artificial intelligence-based surgical contribution

AI-based surgical procedures are now available for people to choose. Although such AI must still be operated by health professionals, it can complete the work with less damage to the body. The da Vinci surgical system, a robotic technology that allows surgeons to perform minimally invasive procedures, is available in most hospitals, enabling a degree of precision and accuracy far greater than manually performed procedures. The less invasive the surgery, the less the trauma and blood loss, and the less anxiety for the patient.

Improved radiology

The first computed tomography scanners were introduced in 1971, and the first magnetic resonance imaging (MRI) scan of the human body took place in 1977. By the early 2000s, cardiac MRI, body MRI, and fetal imaging had become routine. The search continues for new algorithms to detect specific diseases and to analyze scan results [9]. All of these are contributions of AI technology.

Virtual presence

Virtual presence technology enables remote diagnosis of disease. The patient does not have to leave his or her bed; using a remote-presence robot, doctors can check on patients without actually being there, moving around and interacting almost as effectively as if they were present. This allows specialists to assist patients who are unable to travel.

SOME CAUTIONS TO BEAR IN MIND

Despite all the positive promise of AI, human experts remain essential to design, program, and operate AI and to prevent unpredictable errors from occurring. Beth Kindig, a San Francisco-based technology analyst with more than a decade of experience analyzing private and public technology companies, published a free newsletter indicating that although AI holds promise for better medical diagnosis, human experts are still needed to avoid the misclassification of unknown diseases, because AI is not omnipotent and cannot solve every human problem. There are times when AI meets an impasse and, to carry on its mission, may simply proceed indiscriminately, creating more problems. Vigilant watch over AI’s functioning therefore cannot be neglected. This reminder is known as keeping the physician in the loop [13].

The question of ethical AI was consequently raised by Elizabeth Gibney in an article published in Nature, cautioning against bias and possible societal harm [14]. The Neural Information Processing Systems (NeurIPS) conference in Vancouver, Canada, took up the ethical controversies of applying AI technology, such as predictive policing and facial recognition, where biased algorithms can end up hurting vulnerable populations [14]. A predictive-policing system, for instance, can in effect be trained to flag certain races or areas as probable sources of crime or trouble.

THE CHALLENGE OF ARTIFICIAL INTELLIGENCE TO BIOETHICS

Artificial intelligence ethics must be developed

Bioethics is a discipline focused on relationships among living beings. It accentuates the good and the right in the biosphere and can be categorized into at least three areas: bioethics in health settings, concerning the relationship between physicians and patients; bioethics in social settings, concerning relationships among humankind; and bioethics in environmental settings, concerning the relationship between man and nature, including animal ethics, land ethics, ecological ethics, and so on. All of these concern relationships within and among natural existences.

As AI arises, humans face a new challenge: establishing a relationship with something that is not natural in its own right. Bioethics normally discusses relationships within natural existences, whether humankind or its environment, that are part of natural phenomena. But now we must deal with something human-made, artificial, and unnatural, namely AI. Humans have created many things, yet never before have they had to think about how to relate ethically to their own creation. AI by itself has no feeling or personality. AI engineers have realized the importance of giving AI the ability to discern, so that it avoids deviant activities that cause unintended harm. From this perspective, we understand that AI can have a negative impact on humans and society; a bioethics of AI thus becomes important to ensure that AI does not take off on its own, deviating from its originally designated purpose.

Stephen Hawking warned as early as 2014 that the development of full AI could spell the end of the human race: once humans develop AI, it may take off on its own and redesign itself at an ever-increasing rate [15]. Humans, limited by slow biological evolution, could not compete and would be superseded. In his book Superintelligence, Nick Bostrom argues that AI poses a threat to humankind: a sufficiently intelligent AI can exhibit convergent behavior, such as acquiring resources or protecting itself from being shut down, and might harm humanity [16].

The question is: do we have to think about bioethics for humanity’s own created product, one that bears no bio-vitality? Can a machine have a mind, consciousness, and mental states in exactly the same sense that human beings do? Can a machine be sentient and thus deserve certain rights? Can a machine intentionally cause harm? Regulation must be contemplated as a bioethical mandate for AI production.

Studies have shown that AI can reflect the very prejudices humans have tried to overcome. As AI becomes “truly ubiquitous,” it has tremendous potential to positively impact all manner of life, from industry to employment to health care and even security. Addressing the risks associated with the technology, Janosch Delcker, Politico Europe’s AI correspondent, said: “I don’t think AI will ever be free of bias, at least not as long as we stick to machine learning as we know it today. What’s crucially important, I believe, is to recognize that those biases exist and that policymakers try to mitigate them” [17]. The European Union’s High-Level Expert Group on AI presented its Ethics Guidelines for Trustworthy AI in 2019, suggesting that AI systems must be accountable, explainable, and unbiased. Three emphases are given:

  • Lawful: respecting all applicable laws and regulations
  • Ethical: respecting ethical principles and values
  • Robust: adaptive, reliable, fair, and trustworthy from a technical perspective, while taking its social environment into account [18].

Seven requirements are recommended [ 18 ]:

  • AI should not trample on human autonomy. People should not be manipulated or coerced by AI systems, and humans should be able to intervene or oversee every decision that the software makes
  • AI should be secure and accurate. It should not be easily compromised by external attacks, and it should be reasonably reliable
  • Personal data collected by AI systems should be secure and private. It should not be accessible to just anyone, and it should not be easily stolen
  • Data and algorithms used to create an AI system should be accessible, and the decisions made by the software should be “understood and traced by human beings.” In other words, operators should be able to explain the decisions their AI systems make
  • Services provided by AI should be available to all, regardless of age, gender, race, or other characteristics. Similarly, systems should not be biased along these lines
  • AI systems should be sustainable (i.e., they should be ecologically responsible) and “enhance positive social change”
  • AI systems should be auditable and covered by existing protections for corporate whistleblowers. The negative impacts of systems should be acknowledged and reported in advance.

From these guidelines we can suggest that future AI must be equipped with human sensibility, or “AI humanities.” To accomplish this, AI researchers, manufacturers, and all industries must bear in mind that technology exists to serve, not to manipulate, humans and their society. Bostrom and Yudkowsky list responsibility, transparency, auditability, incorruptibility, and predictability [19] as criteria for a computerized society to consider.

SUGGESTED PRINCIPLES FOR ARTIFICIAL INTELLIGENCE BIOETHICS

Nathan Strout, a reporter covering space and intelligence systems, recently reported that the intelligence community is developing its own AI ethics. The Pentagon announced in February 2020 that it is adopting principles for the use of AI as guidelines for the department to follow while developing new AI tools and AI-enabled technologies. Ben Huebner, chief of the Office of the Director of National Intelligence’s Civil Liberties, Privacy, and Transparency Office, said, “We’re going to need to ensure that we have transparency and accountability in these structures as we use them. They have to be secure and resilient” [20]. Two themes have been suggested for the AI community to think more about: explainability and interpretability. Explainability is the concept of understanding how the analytic works, while interpretability is being able to understand a particular result produced by an analytic [20].
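
The distinction can be made concrete with a small sketch. A linear model is convenient here because each prediction decomposes exactly into per-feature contributions, so a particular result can be interpreted term by term. The data and feature names below are invented for illustration.

```python
# Interpretability sketch: explain one particular prediction of a linear
# model by reading off each feature's contribution. Data is invented.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[2.0, 1.0], [3.0, 0.5], [5.0, 2.0], [6.0, 1.5]])
y = np.array([5.0, 6.0, 11.0, 12.0])
feature_names = ["hours_monitored", "alerts_raised"]

model = LinearRegression().fit(X, y)

x = np.array([4.0, 1.0])         # one new case to explain
contributions = model.coef_ * x  # per-feature share of the output
print("prediction:", model.intercept_ + contributions.sum())
for name, c in zip(feature_names, contributions):
    print(f"  {name} contributed {c:+.2f}")
```

Explainability, by contrast, concerns how the analytic works overall; for a linear model that is the coefficient vector itself, while for deep networks it requires separate tooling.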

The principles scholars have suggested for AI bioethics are all well worth raising. Drawing from the bioethical principles of the related fields, I suggest four principles here to guide the future development of AI technology. We must bear in mind, however, that the main attention should still be placed on humans, because AI, after all, is designed and manufactured by humans. AI proceeds according to its algorithm; it cannot empathize or discern good from evil, and it may commit mistakes in its processes. The ethical quality of an AI depends entirely on its human designers; this is therefore an AI bioethics and, at the same time, a trans-bioethics that bridges the human and material worlds. Here are the principles:

  • Beneficence: doing good. Here it means that the purpose and functions of AI should benefit the whole of human life, society, and the universe. Any AI that would perform destructive work on the bio-universe, including any life form, must be avoided and forbidden. AI scientists must understand that the reason for developing this technology is to benefit human society as a whole, not any individual’s personal gain; it should be altruistic, not egocentric, in nature
  • Value-upholding: AI’s congruence with social values. The universal values that govern the order of the natural world must be observed: AI cannot elevate itself above social and moral norms, and it must be bias-free. Scientific and technological development must serve the enhancement of human well-being, the chief value AI must hold dear as it progresses
  • Lucidity: AI must be transparent, without hidden agendas. It has to be easily comprehensible, detectable, incorruptible, and perceivable. AI technology should be available for public auditing, testing, and review, and subject to accountability standards. In high-stakes settings, such as diagnosing cancer from radiologic images, an algorithm that cannot “explain its work” may pose an unacceptable risk; explainability and interpretability are therefore absolutely required
  • Accountability: AI designers and developers must bear in mind that they carry on their shoulders a heavy responsibility for the outcome and impact of AI on human society and the universe. They must be accountable for whatever they manufacture and create.

CONCLUSION

AI is here to stay in our world, and we must try to enforce an AI bioethics of beneficence, value-upholding, lucidity, and accountability. Since AI is without a soul, its bioethics must be transcendental, bridging AI’s inability to empathize. AI is a reality of our world. We must take note of what Joseph Weizenbaum, a pioneer of AI, said: we must not let computers make important decisions for us, because AI as a machine will never possess human qualities such as the compassion and wisdom needed to discern and judge morally [10]. Bioethics is not a matter of calculation but a process of conscientization. Although designers can load all the information, data, and programs into an AI so that it functions like a human being, it is still a machine and a tool; AI will always remain AI, without authentic human feelings or the capacity to commiserate. AI technology must therefore be advanced with extreme caution. As Ursula von der Leyen said of the White Paper on AI – A European approach to excellence and trust: “AI must serve people, and therefore, AI must always comply with people’s rights. … High-risk AI that potentially interferes with people’s rights has to be tested and certified before it reaches our single market” [21].


Conflicts of interest

There are no conflicts of interest.



Artificial Intelligence and Its Impact on Education Essay


Rooted in computer science, Artificial Intelligence (AI) is defined by the development of digital systems that can perform tasks that depend on human intelligence (Rexford, 2018). Interest in adopting AI in the education sector began in the 1980s, when researchers explored the possibilities of using robotic technologies in learning (Mikropoulos, 2018). Their mission was to help learners study conveniently and efficiently. Today, the impact of AI on education is concentrated in online learning, task automation, and personalized learning (Chen, Chen and Lin, 2020). The COVID-19 pandemic is a recent news event that has drawn attention to AI and its role in facilitating online learning, among other virtual educational programs. This paper seeks to establish the possible impact of artificial intelligence on the education sector from the perspectives of teachers and learners.

Technology has transformed the education sector in unique ways, and AI is no exception. As highlighted above, AI is a relatively new area of technological development, which has attracted global interest in academic and teaching circles. Increased awareness of the benefits of AI in the education sector and the integration of high-performance computing systems in administrative work have accelerated the pace of transformation in the field (Fengchun et al., 2021). This change has affected different facets of learning to the extent that government agencies and companies are looking to replicate the same success in their respective fields (IBM, 2020). However, while the advantages of AI are widely reported in the corporate scene, few people understand its impact on the interactions between students and teachers. This research gap can be filled by understanding the impact of AI on the education sector as a holistic ecosystem of learning.

As gaps in education are minimized, AI is contributing to the growth of the education sector. In particular, it has increased the number of online learning platforms that use big-data intelligence systems (Chen, Chen and Lin, 2020), an outcome achieved by exploiting opportunities in big-data analysis to enhance educational outcomes (IBM, 2020). Overall, the positive contributions AI has made mean that it has expanded opportunities for growth and development in the education sector (Rexford, 2018). Teachers are therefore likely to benefit from the increased opportunities for learning and growth that emerge from the adoption of AI in the education system.

The impact of AI on teachers can be estimated by examining its effects on the learning environment. Some of the positive outcomes teachers have associated with AI adoption include increased work efficiency, expanded opportunities for career growth, and an improved rate of innovation adoption (Chen, Chen and Lin, 2020). These benefits are achievable because AI makes it possible to automate learning activities, giving teachers the freedom to complete supplementary tasks that support their core activities; that freedom can in turn enhance creativity and innovation in their teaching practice. Despite these positive outcomes, AI adoption may undermine the relevance of teachers as educators (Fengchun et al., 2021). This concern is shared among educators because the increased reliance on robotics and automation has created conditions for learning to occur without human input, so there is a risk that teacher participation may be replaced by machine input.

Performance evaluation emerges as a critical area where teachers can benefit from AI adoption. AI empowers teachers to monitor learners’ behavior and changes in their scores over a given period (Mikropoulos, 2018), a comparative analysis achievable with the advanced data-management techniques of AI-backed performance-appraisal systems (Fengchun et al., 2021). Researchers have used these systems for adaptive group formation, in which groups of students are composed to balance the strengths and weaknesses of their members (Live Tiles, 2021). The information collected through AI-backed data analysis can be recalibrated to capture different types of data; for example, teachers have used AI to understand students’ learning patterns and how those patterns correlate with individual understanding of learning concepts (Rexford, 2018). Advanced biometric techniques in AI have even made it possible for teachers to assess students’ attentiveness.
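
To illustrate the adaptive group formation mentioned above, here is a minimal sketch that balances groups with a snake draft over ranked scores. The names and scores are invented, and the studies cited use far richer data; this only shows the balancing logic in its simplest form.

```python
# Minimal "adaptive group formation" sketch: rank students by score, then
# deal them out in a snake draft so every group mixes stronger and weaker
# performers. Names and scores are invented for illustration.
def snake_draft_groups(scores, n_groups):
    ranked = sorted(scores, key=scores.get, reverse=True)
    groups = [[] for _ in range(n_groups)]
    for i, student in enumerate(ranked):
        rnd, pos = divmod(i, n_groups)
        # Reverse direction every other round to keep group totals close.
        idx = pos if rnd % 2 == 0 else n_groups - 1 - pos
        groups[idx].append(student)
    return groups

scores = {"Ana": 92, "Ben": 78, "Caro": 85, "Dev": 60, "Eli": 70, "Fay": 88}
for group in snake_draft_groups(scores, 2):
    print(group, "mean:", sum(scores[s] for s in group) / len(group))
```

Dealing students out in alternating order keeps the group means close together, which is the balance of strengths and weaknesses the paragraph describes.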

Overall, AI’s contributions to the teaching practice empower teachers to redesign their learning programs to fill the gaps identified in performance assessments. Employing AI’s capabilities has also made it possible to personalize curricula so that students learn more effectively (Live Tiles, 2021). Nonetheless, the benefits of AI to teachers could be undermined by possible job losses as human labor is replaced by machines and robots (Gulson et al., 2018). These fears have yet to materialize, but there are indications that AI adoption may elevate the importance of machines above that of human beings in learning.

The benefits of AI to teachers can be replicated in student learning, because learners are the recipients of the teaching strategies teachers adopt. AI has created unique benefits for different groups of learners through the supportive role it plays in the education sector (Fengchun et al., 2021). For example, it has created the conditions necessary for using virtual reality in learning, giving students the opportunity to learn at their own pace (Live Tiles, 2021); because learning speeds vary, this has enhanced their learning experiences. Virtual reality built on AI has also played a significant role in promoting equality in learning by adapting to different learning needs (Live Tiles, 2021); for example, it has helped students better track their performance at home and identify areas for improvement. In this regard, the adoption of AI in learning has allowed learning styles to be customized to improve students’ attention and involvement.

AI also benefits students by personalizing education to suit different learning styles and competencies. AI holds the promise of developing personalized learning at scale by customizing the tools and features of contemporary education systems (du Boulay, 2016). Personalized learning offers students several benefits, including reduced learning time, increased engagement with teachers, improved knowledge retention, and increased motivation to study (Fengchun et al., 2021); in these ways, AI enriches students’ learning experiences. Furthermore, AI promises to expand educational opportunities for people who would otherwise be unable to access learning: disabled people, for example, often cannot access the same quality of education as other students, and technology has now made it possible for these underserved learners to access education services.

Based on the findings highlighted above, AI has made it possible to customize education services to the needs of unique groups of learners and, by extension, for teachers to select the most appropriate teaching methods for these groups (du Boulay, 2016). Teachers have reported positive outcomes from using AI to meet the needs of underserved learners (Fengchun et al., 2021); for example, through online learning, some have learned to be more patient and tolerant when interacting with disabled students (Fengchun et al., 2021). AI has also made it possible to integrate the educational and curriculum-development plans of disabled and mainstream students, standardizing education outcomes across the divide. Broadly, these findings indicate that the expansion of opportunities through AI adoption has increased access to education services for underserved groups of learners.

Overall, AI holds the promise of solving many of the educational challenges the world faces today. UNESCO (2021) affirms that AI can address most problems in learning through innovation, so there is hope that adopting new technology will accelerate the streamlining of the education sector. This could be achieved by improving the design of AI learning programs to make them more effective at meeting the needs of students and teachers, helping to maximize AI’s positive impact and minimize its negative effects on both parties.

The findings of this study demonstrate that the application of AI in education has a largely positive impact on students and teachers. The positive effects can be summarized as improved access to education for underserved populations, improved teaching practices and instructional learning, and enhanced enthusiasm among students to stay in school. Negative outcomes have also been highlighted: the potential for job losses, an increase in educational inequalities, and the high cost of installing AI systems. These concerns are relevant to AI adoption in the education sector, but the benefits of integration outweigh them; more support should therefore be given to educational institutions that intend to adopt AI. Overall, this study demonstrates that AI is beneficial to the education sector: it will improve the quality of teaching, help students understand knowledge quickly, and spread knowledge through the expansion of educational opportunities.

Reference List

Chen, L., Chen, P. and Lin, Z. (2020) ‘Artificial intelligence in education: a review’, Institute of Electrical and Electronics Engineers Access, 8(1), pp. 75264-75278.

du Boulay, B. (2016) ‘Artificial intelligence as an effective classroom assistant’, Institute of Electrical and Electronics Engineers Intelligent Systems, 31(6), pp. 76-81.

Fengchun, M. et al. (2021) AI and education: a guide for policymakers. Paris: UNESCO Publishing.

Gulson, K. et al. (2018) Education, work and Australian society in an AI world. Web.

IBM. (2020) Artificial intelligence. Web.

Live Tiles. (2021) 15 pros and 6 cons of artificial intelligence in the classroom. Web.

Mikropoulos, T. A. (2018) Research on e-Learning and ICT in education: technological, pedagogical and instructional perspectives. New York, NY: Springer.

Rexford, J. (2018) The role of education in AI (and vice versa). Web.

Seo, K. et al. (2021) ‘The impact of artificial intelligence on learner–instructor interaction in online learning’, International Journal of Educational Technology in Higher Education, 18(54), pp. 1-12.

UNESCO. (2021) Artificial intelligence in education. Web.



Artificial Intelligence Essay for Students and Children

500+ Words Essay on Artificial Intelligence

Artificial Intelligence refers to the intelligence of machines, in contrast to the natural intelligence of humans and animals. With Artificial Intelligence, machines perform functions such as learning, planning, reasoning, and problem-solving. Most noteworthy, Artificial Intelligence is the simulation of human intelligence by machines. It is probably the fastest-growing development in the world of technology and innovation. Furthermore, many experts believe AI could solve major challenges and crises.


Types of Artificial Intelligence

First of all, Artificial Intelligence can be categorized into four types; Arend Hintze came up with this categorization. The categories are as follows:

Type 1: Reactive machines – These machines can react to situations. A famous example is Deep Blue, the IBM chess program that defeated chess legend Garry Kasparov. Such machines lack memory and cannot use past experiences to inform future ones: they analyze all possible alternatives and choose the best one (a toy version of this exhaustive search is sketched after these four types).

Type 2: Limited memory – These AI systems are capable of using past experiences to inform future decisions. A good example is self-driving cars: such cars have decision-making systems, and actions like changing lanes come from recent observations, which are not stored permanently.

Type 3: Theory of mind – This refers to understanding others: above all, understanding that others have their own beliefs, intentions, desires, and opinions. This type of AI does not exist yet.

Type 4: Self-awareness – This is the highest and most sophisticated level of Artificial Intelligence. Such systems would have a sense of self: awareness, consciousness, and emotions. Obviously, this type of technology does not yet exist; it would certainly be a revolution.
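
As promised above, here is a toy version of the Type 1 behavior: exhaustively searching every alternative and choosing the best, with no memory carried between moves. The game itself (take one or two stones; whoever takes the last stone wins) is an invented stand-in for chess, and the code is plain minimax, a vastly smaller cousin of the search Deep Blue ran over chess positions.

```python
# Toy reactive machine: full minimax search of an invented stone-taking
# game (take 1 or 2 stones; whoever takes the last stone wins).
def minimax(stones, my_turn):
    if stones == 0:
        # No stones left: the player who just moved took the last one.
        return -1 if my_turn else +1
    outcomes = [minimax(stones - take, not my_turn)
                for take in (1, 2) if take <= stones]
    return max(outcomes) if my_turn else min(outcomes)

def best_move(stones):
    # A reactive player: no memory of past games, just a fresh full search.
    return max((take for take in (1, 2) if take <= stones),
               key=lambda take: minimax(stones - take, my_turn=False))

print(best_move(4))  # 1: taking one stone leaves the opponent a losing pile of 3
```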


Applications of Artificial Intelligence

First of all, AI has significant uses in healthcare. Companies are trying to develop technologies for quick diagnosis, and Artificial Intelligence could operate on patients efficiently without human supervision; such technology-assisted surgeries are already taking place. Another excellent healthcare technology is IBM Watson.

Artificial Intelligence in business can significantly save time and effort. Robotic automation can be applied to routine business tasks, machine-learning algorithms help companies serve their customers better, and chatbots provide immediate responses and service to customers.
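
A chatbot of the simplest kind can be sketched in a few lines of keyword matching. The canned responses below are invented; commercial systems layer machine learning on top of this idea, but the immediate-response behavior described above is already visible here.

```python
# Minimal keyword-matching chatbot sketch (canned responses are invented).
RESPONSES = {
    "price": "Our plans start at $10/month. Shall I send the full list?",
    "refund": "Refunds are processed within 5 business days of a request.",
    "hours": "Support is available 24/7 through this chat.",
}

def reply(message):
    text = message.lower()
    for keyword, answer in RESPONSES.items():
        if keyword in text:
            return answer
    return "I'm not sure about that, let me connect you with a human agent."

print(reply("What are your hours?"))
print(reply("How do I get a refund?"))
```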


AI can greatly increase the rate of work in manufacturing. A huge number of products can be manufactured with AI, and the entire production process can take place without human intervention, saving a great deal of time and effort.

Artificial Intelligence has applications in various other fields, including the military, law, video games, government, finance, automotive, auditing, and art. Hence, it is clear that AI has a massive range of applications.

To sum it up, Artificial Intelligence looks all set to be the future of the world. Experts believe AI will soon become a part and parcel of human life, completely changing the way we view our world. With Artificial Intelligence, the future seems intriguing and exciting.




Artificial Intelligence for Good: How AI Is Helping Humanity


Founder and CEO, Analytics Insight, providing organizations with strategic insights on disruptive technologies.

Artificial intelligence (AI) is considered one of the most revolutionary developments in human history, and the world has already witnessed its transformative capabilities. Not surprisingly, AI-based innovations are powering some of the most cutting-edge solutions we use in our daily lives.

Today, AI empowers organizations, governments and communities to build a high-performing ecosystem to serve the entire world. Its profound impact on human lives is solving some of the most critical challenges faced by society. Here are a few innovations for social causes that I find most notable. 

Developing New Drugs: The healthcare industry is ripe with disruptive applications of AI, including the discovery and development of new drugs. AI and machine learning have been used to identify potential molecules by leveraging a large volume of data. Pharmaceutical companies use predictive analytics to discover these molecule candidates and optimize them with several rounds of iteration to select the best one for drug manufacturing.
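
As a toy illustration of this score-and-iterate loop, the sketch below ranks made-up "candidates" with a stand-in scoring function, keeps the best performers, and perturbs them over several rounds. Nothing here is real chemistry; the candidates, the scoring function, and the round count are all assumptions for illustration.

```python
# Toy score-and-iterate loop: rank candidates with a stand-in predictive
# model, keep the best half, perturb them, and repeat. Candidates and the
# scoring function are placeholders, not chemistry.
import random

random.seed(0)

def predicted_activity(candidate: float) -> float:
    # Stand-in for a trained model scoring a molecule candidate.
    return -(candidate - 0.7) ** 2

candidates = [random.random() for _ in range(20)]
for _ in range(3):  # several rounds of iteration
    candidates.sort(key=predicted_activity, reverse=True)
    survivors = candidates[: len(candidates) // 2]
    # "Optimize" survivors by exploring small variations around them.
    candidates = survivors + [c + random.gauss(0, 0.05) for c in survivors]

best = max(candidates, key=predicted_activity)
print(f"best candidate after 3 rounds: {best:.3f}")
```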

Reporting Sexual Harassment: Artificial intelligence offers new ways of reporting gender-based violence, child sex abuse and more. AI programs are being designed to monitor internal communications , such as corporate documents, emails and chat, for inappropriate content. Various applications and platforms have been developed to help victims share their experiences of sexual harassment and abuse along with the time and location these events took place.


Combatting Human Trafficking: Human trafficking is a serious crime against humanity and a threat to global security. Traffickers often use the internet to place advertisements to lure potential victims. Artificial intelligence tools and computer vision algorithms scrape images from different websites used by traffickers and label objects in images to search and review suspect advertisements. Additionally, these tools analyze data from the advertisements and websites to identify the potential victims of human trafficking and alert authorities before the crime.

Optimizing Renewable Energy Generation: Artificial intelligence, in combination with other technologies such as the Internet of Things (IoT), cloud computing and big data analytics, has significantly transformed the renewable energy sector. AI programs can combine weather and sensor data to optimize, predict and manage energy consumption across different sectors. Accurate AI-based predictions increase dispatch efficiency and reduce the operating reserves needed.
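
A minimal sketch of the weather-to-energy mapping described above: fitting a one-variable linear model that predicts a turbine's output from forecast wind speed. The readings below are invented; real systems fuse many weather and sensor inputs with far richer models.

```python
# Minimal sketch: fit a one-variable linear model mapping forecast wind
# speed to power output. All readings are invented for illustration.

wind_speed = [4.0, 6.0, 8.0, 10.0, 12.0]  # m/s, hypothetical forecasts
power_out = [0.3, 0.9, 1.8, 3.1, 4.6]     # MW, hypothetical turbine output

n = len(wind_speed)
mean_x = sum(wind_speed) / n
mean_y = sum(power_out) / n
slope = (
    sum((x - mean_x) * (y - mean_y) for x, y in zip(wind_speed, power_out))
    / sum((x - mean_x) ** 2 for x in wind_speed)
)
intercept = mean_y - slope * mean_x

forecast = 9.0  # m/s, tomorrow's forecast wind speed
print(f"predicted output at {forecast} m/s: {slope * forecast + intercept:.2f} MW")
```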

Helping People With Disabilities: Artificial intelligence has also helped people with disabilities live independently. Voice-assisted AI is one of the major breakthroughs, particularly for those who are visually impaired: it lets them communicate with others using smart devices and have their surroundings described. Tools like this can significantly help those with disabilities overcome daily obstacles.

Investing In AI For Good

While the adoption of AI technologies is increasing, challenges remain. I’ve found that some of the major challenges faced by organizations developing AI solutions for social good include the fear of risk; defining how to measure the value the solution will bring; an incomplete understanding of AI; the high cost of technology; and regulatory, ethical and security concerns. However, organizations and institutions can overcome them by investing in advanced research, human capital and infrastructure, and encouraging AI literacy in society.

Organizations planning to invest in advanced AI research or to implement AI for social good must actively collaborate with research institutions and government bodies to apply their AI solutions for real-world impact. Moreover, workshops and forums are some of the best platforms for organizations to gather the insights they need. These platforms can be used to understand whether an organization's solutions are the right fit to solve challenges for social good.

Bottom Line

Artificial intelligence has enormous potential to serve society, bringing more radical innovations for humans in the future. Its problem-solving ability could help people and communities around the world by solving today’s toughest challenges. With sensible use of AI, we should continue to see a wide scope of AI applications and new developments for social good.


Artificial Intelligence Essay

500+ Words Essay on Artificial Intelligence

Artificial intelligence (AI) has come into our daily lives through mobile devices and the Internet. Governments and businesses are increasingly making use of AI tools and techniques to solve business problems and improve many business processes, especially online ones. Such developments bring about new realities to social life that may not have been experienced before. This essay on Artificial Intelligence will help students to know the various advantages of using AI and how it has made our lives easier and simpler. Also, in the end, we have described the future scope of AI and the harmful effects of using it. To get a good command of essay writing, students must practise CBSE Essays on different topics.

Artificial Intelligence is the science and engineering of making intelligent machines, especially intelligent computer programs. It is concerned with getting computers to do tasks that would normally require human intelligence. AI systems are basically software systems (or controllers for robots) that use techniques such as machine learning and deep learning to solve problems in particular domains without hard-coding every possibility (i.e., every algorithmic step) in software. Because of this, AI has begun to offer promising solutions for industry and business as well as for our daily lives.

Importance and Advantages of Artificial Intelligence

Advances in computing and digital technologies have a direct influence on our lives, businesses and social interactions, shaping daily routines such as mobile device use and engagement on social media. AI systems are among the most influential of these technologies. With AI systems, businesses can handle large data sets and obtain speedy, essential input for their operations; they can also adapt to constant change and become more flexible.

By introducing Artificial Intelligence systems into devices, businesses are automating more and more of their processes. A new paradigm emerges from such intelligent automation, one that dictates not only how businesses operate but also who does the work. Many manufacturing sites can now operate fully automated, with robots and without any human workers. Artificial Intelligence brings unheard-of and unexpected innovations to the business world, innovations that many organizations will need to integrate to remain competitive and stay ahead of their rivals.

Artificial Intelligence shapes our lives and social interactions through technological advancement. Many AI applications are developed specifically to provide better services to individuals, through mobile phones, electronic gadgets, social media platforms and so on. We delegate our activities to intelligent applications such as personal assistants and smart wearable devices, and AI systems that operate household appliances help us at home with cooking or cleaning.

Future Scope of Artificial Intelligence

In the future, intelligent machines will replace or enhance human capabilities in many areas. Artificial intelligence has become a popular field in computer science because of how much it extends what humans can do. Its applications are having a huge impact in fields such as education, engineering, business, medicine and weather forecasting, where they help solve complex problems; the work of many labourers can now be done by a single machine. But Artificial Intelligence has another aspect: it can be dangerous for us. If we become completely dependent on machines, it could ruin our lives, as we would not be able to do any work ourselves and would become lazy. Another disadvantage is that machines cannot give a human-like feeling. So machines should be used only where they are actually required.

Students must have found this essay on “Artificial Intelligence” useful for improving their essay writing skills. They can get the study material and the latest updates on CBSE/ICSE/State Board/Competitive Exams, at BYJU’S.


Guest Essay

I’m a Congressman Who Codes. A.I. Freaks Me Out.


By Ted Lieu

Mr. Lieu represents California’s 36th Congressional District in the U.S. House of Representatives.

Imagine a world where autonomous weapons roam the streets, decisions about your life are made by AI systems that perpetuate societal biases and hackers use AI to launch devastating cyberattacks. This dystopian future may sound like science fiction, but the truth is that without proper regulations for the development and deployment of Artificial Intelligence (AI), it could become a reality. The rapid advancements in AI technology have made it clear that the time to act is now to ensure that AI is used in ways that are safe, ethical and beneficial for society. Failure to do so could lead to a future where the risks of AI far outweigh its benefits.

I didn’t write the above paragraph. It was generated in a few seconds by an A.I. program called ChatGPT, which is available on the internet. I simply logged into the program and entered the following prompt: “Write an attention grabbing first paragraph of an Op-Ed on why artificial intelligence should be regulated.”
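
For readers curious about the mechanics, a prompt like this can also be sent programmatically rather than typed into the web interface, as the author did. The sketch below uses the OpenAI Python client (the openai 1.x library); the model name is an assumption, and an OPENAI_API_KEY environment variable must be set. This is purely an illustration, not a description of how the op-ed's paragraph was produced.

```python
# Hypothetical programmatic version of the author's prompt, using the
# OpenAI Python client (openai 1.x). The model name is an assumption;
# an OPENAI_API_KEY environment variable must be set.
from openai import OpenAI

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; substitute whatever is available
    messages=[{
        "role": "user",
        "content": (
            "Write an attention grabbing first paragraph of an Op-Ed "
            "on why artificial intelligence should be regulated."
        ),
    }],
)
print(response.choices[0].message.content)
```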

I was surprised at how ChatGPT effectively drafted a compelling argument that reflected my views on A.I., and so quickly. As one of just three members of Congress with a computer science degree, I am enthralled by A.I. and excited about the incredible ways it will continue to advance society. And as a member of Congress, I am freaked out by A.I., specifically A.I. that is left unchecked and unregulated.

A.I. is part of our daily life. It gives us instantaneous search results, helps us navigate unfamiliar roads, recommends songs we might like and can improve almost any task you can imagine. A.I. is embedded in systems that help prevent fraud on your credit card, predict the weather and allow early detection of diseases. A.I. thinks exponentially faster than humans, can analyze orders of magnitude more data than we can and sees patterns the human mind would never see.

At the same time, A.I. has caused harm. Some of the harm is merely disruptive. Teachers (and newspaper editors) might find it increasingly difficult to determine if a written document was created by A.I. or a human. Deep fake technology can create videos and photographs that look real.

But some of the harm could be deadly. Tesla’s “full self-driving” A.I. feature apparently malfunctioned last Thanksgiving in a car in San Francisco’s Yerba Buena Tunnel, causing the car to suddenly stop and resulting in a multicar accident. The exact cause of the accident has not been fully established, but nine people were injured as a result of the crash.

A.I. algorithms in social media have helped radicalize foreign terrorists and domestic white supremacists.

And some of the harm can cause widespread discrimination. Facial recognition systems used by law enforcement are less accurate for people with darker skin, resulting in possible misidentification of innocent minorities.

Private entities such as the Los Angeles Football Club and Madison Square Garden Entertainment already are deploying A.I. facial recognition systems. The football (professional soccer) club uses it for its team and staff. Recently, Madison Square Garden used facial recognition to ban lawyers from entering the venue who worked at firms representing clients in litigation against M.S.G. Left unregulated, facial recognition can result in an intrusive public and private surveillance state, where both the government and private corporations can know exactly where you are and what you are doing.

Last year, I introduced legislation to regulate the use of facial recognition systems by law enforcement. It took me and my staff over two years working with privacy and technology experts to do so — and building the coalition of support needed to pass this bill will take more time. Again, my bill is for just one application of A.I. It would be virtually impossible for Congress to pass individual laws to regulate each specific use of A.I.

What we need is a dedicated agency to regulate A.I. An agency is nimbler than the legislative process, is staffed with experts and can reverse its decisions if it makes an error. Creating such an agency will be a difficult and huge undertaking because A.I. is complicated and still not well understood.

But there is precedent for establishing a necessary agency to protect people from harm. How molecules interact with millions of unique human beings is a complicated subject and not well understood. Yet we created an agency — the Food and Drug Administration — to regulate pharmaceutical drugs.

Going from virtually zero regulation of A.I. to an entire federal agency would not pass Congress. This critical and necessary endeavor needs to proceed in steps. That’s why I will be introducing legislation to create a nonpartisan A.I. Commission to provide recommendations on how to structure a federal agency to regulate A.I., what types of A.I. should be regulated and what standards should apply.

We may not need to regulate the A.I. in a smart toaster, but we should regulate it in an autonomous car that can go over 100 miles per hour. The National Institute of Standards and Technology has released a second draft of its AI Risk Management Framework. In it, NIST outlines the ways in which organizations, industries and society can manage and mitigate the risks of A.I., like addressing algorithmic biases and prioritizing transparency to stakeholders. These are nonbinding suggestions, however, and do not contain compliance mechanisms. That is why we must build on the great work already being done by NIST and create a regulatory infrastructure for A.I.

Congress has been slow to react when it comes to technological issues. But things are changing. We now have more members who are fluent in technology because they grew up with it, and we also have members like Representative Don Beyer, who is pursuing a master’s in machine learning. Having more members who recognize the promise of this technology — and its potential harms — will serve us well as we tackle this challenge.

The fourth industrial revolution is here. We can harness and regulate A.I. to create a more utopian society or risk having an unchecked, unregulated A.I. push us toward a more dystopian future. And yes, I wrote this paragraph.

Ted W. Lieu represents California’s 36th Congressional District in the U.S. House of Representatives.


Essay on Artificial Intelligence

Artificial Intelligence is the intelligence possessed by machines, through which they can perform functions such as learning, solving problems, planning and thinking. Artificial Intelligence, in other words, is the simulation of human intelligence by machines. In the field of technology, Artificial Intelligence is evolving rapidly day by day, and it is believed that in the near future it will change human life drastically and may help resolve many of the world's major problems.

Our lives in this modern age depend largely on computers; it is almost impossible to think about life without them. Because computers are part of everything we use daily, it becomes very important to make them intelligent so that our lives become easier. Artificial Intelligence is the theory and development of computers that imitate human intelligence and senses, such as visual perception, speech recognition, decision-making, and translation between languages. Artificial Intelligence has brought a revolution to the world of technology.

Artificial Intelligence Applications

AI is widely used in the field of healthcare. Companies are attempting to develop technologies that will allow for rapid diagnosis. Artificial Intelligence may eventually be able to operate on patients without the need for human oversight; technology-assisted surgical procedures are already being performed.

Artificial Intelligence can save a great deal of our time. The use of robots decreases human labour; in industries, for example, robots have saved considerable human effort and time.

In the field of education, AI has the potential to be very effective. It can bring innovative ways of teaching that help students learn concepts better.

Artificial intelligence is the future of innovative technology, with uses in many fields such as the military sector, the industrial sector and automobiles. In the coming years we will see more applications of AI as the technology evolves day by day.

Marketing: Artificial Intelligence gives marketers a deep knowledge of consumers and potential clients, enabling them to deliver the right information at the right time. Through AI solutions, marketers can refine their campaigns and strategies.

Agriculture: AI technology can be used to detect diseases in plants, pests, and poor plant nutrition. With the help of AI, farmers can analyze the weather conditions, temperature, water usage, and condition of the soil.

Banking: Fraudulent activities can be detected through AI solutions, while AI bots and digital payment advisers deliver a high quality of service (a simple fraud-flagging sketch follows this list).

Health Care: Artificial Intelligence can surpass human cognition in the analysis and diagnosis of complicated medical data.
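
To make the fraud-detection idea concrete, here is a minimal z-score sketch that flags transactions far outside a customer's usual spending. The amounts and the cutoff are invented; real banking systems use far richer features and models.

```python
# Flag transactions that sit far outside a customer's usual spending.
# Amounts and the z-score cutoff are invented for illustration.
import statistics

history = [23.0, 41.5, 18.2, 37.9, 29.4, 45.0, 22.8]  # past amounts
mean = statistics.mean(history)
stdev = statistics.stdev(history)

def is_suspicious(amount: float, z_cutoff: float = 3.0) -> bool:
    return abs(amount - mean) / stdev > z_cutoff

for amount in [35.0, 480.0]:
    print(f"{amount:7.2f} ->", "flag for review" if is_suspicious(amount) else "ok")
```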

History of Artificial Intelligence

Artificial Intelligence may seem to be a new technology, but a little research shows that it has deep roots in the past: ideas resembling AI appear even in Greek mythology.

The model of artificial neurons was first put forward in 1943 by Warren McCulloch and Walter Pitts. Seven years later, in 1950, Alan Turing published a research paper related to AI titled 'Computing Machinery and Intelligence'. The term Artificial Intelligence was coined in 1956 by John McCarthy, who is known as the father of Artificial Intelligence.
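
The McCulloch-Pitts model is simple enough to state in a few lines: a neuron outputs 1 when the weighted sum of its binary inputs reaches a threshold, and 0 otherwise. The sketch below shows how suitable weights and thresholds reproduce basic logic gates.

```python
# A McCulloch-Pitts neuron outputs 1 when the weighted sum of its binary
# inputs reaches a threshold, and 0 otherwise.

def mp_neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def AND(a, b):
    return mp_neuron([a, b], [1, 1], threshold=2)

def OR(a, b):
    return mp_neuron([a, b], [1, 1], threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(f"AND({a},{b}) = {AND(a, b)}   OR({a},{b}) = {OR(a, b)}")
```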

To conclude, we can say that Artificial Intelligence will be the future of the world. According to experts, we will not be able to separate ourselves from this technology, as it will shortly become an integral part of our lives. AI will change the way we live in this world, and it promises to be revolutionary because it will change our lives for good.

Branches of Artificial Intelligence:

Knowledge Engineering

Machine Learning

Natural Language Processing

Types of Artificial Intelligence

Artificial Intelligence is categorized in two ways: based on capabilities and based on functionalities.

Artificial Intelligence Type-1: Based on Capabilities

Narrow AI (weak AI): This is AI designed to perform a specific task with intelligence. It is termed weak AI because it cannot perform beyond its limitations; it is trained for one particular task. Examples of Narrow AI include speech recognition (such as Siri on Apple phones), image and facial recognition, IBM's Watson supercomputer, self-driving cars, chess-playing programs, and equation solvers.

General AI (AGI or strong AI): Such a system could perform nearly every cognitive task as efficiently as a human can. The main characteristic of general AI is a system that can think like a human on its own; creating such machines is a long-term goal of many researchers.

Super AI: Super AI describes systems whose intelligence surpasses human intelligence and that can perform any cognitive task better than humans. The main features of super AI would be the ability to think, reason, solve puzzles, make judgments, plan and communicate on its own. The creation of super AI might be the biggest revolution in human history.

Artificial Intelligence Type-2: Based on Functionalities

Reactive Machines: These are the most basic type of AI systems. They focus only on the current situation and react with the best possible action; they do not store memories for future actions. IBM's Deep Blue and Google's AlphaGo are examples of reactive machines.

Limited Memory: These machines can store data or past memories for a short period of time. Self-driving cars are an example: they store recent information about the road, their speed and the distance of nearby cars in order to navigate.

Theory of Mind: These systems would understand emotions, beliefs and needs the way humans do. Such machines have not yet been invented, and creating one remains a long-term goal for researchers.

Self-Awareness: Self-aware AI is the future of artificial intelligence. Such machines could outsmart humans, and if they are ever invented they could bring a revolution to human society.
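
The difference between the first two functional types can be made concrete in a few lines of code. In this invented car-following scenario, a reactive policy decides only from the current gap to the car ahead, while a limited-memory policy also consults a short window of recent observations and can react to a closing trend.

```python
# Invented car-following scenario contrasting the two functional types.
from collections import deque

def reactive_policy(gap_m: float) -> str:
    # A reactive machine decides only from the current observation.
    return "brake" if gap_m < 10 else "cruise"

class LimitedMemoryPolicy:
    # A limited-memory system also keeps a short window of recent
    # observations, here to notice that the gap ahead is shrinking.
    def __init__(self, window: int = 3):
        self.recent = deque(maxlen=window)

    def act(self, gap_m: float) -> str:
        self.recent.append(gap_m)
        closing = len(self.recent) > 1 and self.recent[-1] < self.recent[0]
        return "brake" if gap_m < 10 or closing else "cruise"

agent = LimitedMemoryPolicy()
for gap in [30.0, 24.0, 17.0]:
    print(f"gap={gap:5.1f}  reactive={reactive_policy(gap):6s}  "
          f"limited-memory={agent.act(gap)}")
```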

Artificial Intelligence will bring a huge revolution in the history of mankind. Human civilization will flourish by amplifying human intelligence with artificial intelligence, as long as we manage to keep the technology beneficial.


FAQs on Artificial Intelligence Essay

1. What is Artificial Intelligence?

Artificial Intelligence is a branch of computer science that emphasizes the development of intelligent machines that would think and work like humans.

2. How is Artificial Intelligence Categorised?

Artificial Intelligence is categorized in two ways: based on capabilities and based on functionalities. Based on capabilities, AI includes Narrow AI (weak AI), General AI and Super AI. Based on functionalities, AI includes reactive machines, limited memory, theory of mind and self-awareness.

3. How Does AI Help in Marketing?

AI helps marketers strategize their marketing campaigns and maintain data on their prospective clients and consumers.

4. Give an Example of a Reactive Machine?

IBM's Deep Blue and Google's AlphaGo are examples of reactive machines.

5. How can Artificial Intelligence help us?

Artificial Intelligence can help us in many ways, and it is already helping us in some. The robots used in factories, for example, all run on the principles of Artificial Intelligence. In the automobile sector, vehicles have been invented that don't need any humans to drive them; they are self-driving. Today's search engines are also AI-powered. There are many other uses of Artificial Intelligence as well.

From the World Wide Web to AI: 11 technology milestones that changed our lives

Image: The World Wide Web is a key technological milestone of the past 40 years. (Unsplash/Ales Nesetril)

By Stephen Holroyd

  • It’s been 40 years since the launch of the Apple Macintosh personal computer.
  • Since then, technological innovation has accelerated – here are some of the most notable tech milestones over the past four decades.
  • The World Economic Forum’s EDISON Alliance aims to digitally connect 1 billion people to essential services like healthcare, education and finance by 2025.

On 24 January 1984, Apple unveiled the Macintosh 128K and changed the face of personal computers forever.

Steve Jobs’ compact, user-friendly computer introduced the graphical user interface to the world, marking a pivotal moment in the evolution of personal technology.

Since that day, the rate of technological innovation has exploded, with developments in computing, communication, connectivity and machine learning expanding at an astonishing rate.

Here are some of the key technological milestones that have changed our lives over the past 40 years.

1993: The World Wide Web

Although the internet's official birthday is often debated, it was the invention of the World Wide Web that drove the democratization of information access and shaped the modern internet we use today.

Created by British scientist Tim Berners-Lee, the World Wide Web was launched to the public in 1993 and brought with it the dawn of online communication, e-commerce and the beginning of the digital economy.

Despite the enormous progress since its invention, 2.6 billion people still lack internet access and global digital inclusion is considered a priority. The World Economic Forum’s EDISON Alliance aims to bridge this gap and digitally connect 1 billion people to essential services like healthcare, education and finance by 2025.

1997: Wi-Fi

The emergence of publicly available Wi-Fi in 1997 changed the face of internet access, removing the need to tether to a network via a cable. Without Wi-Fi, the smartphone and the ever-present internet connection we've come to rely on wouldn't have been possible, and it has become an indispensable part of our modern, connected world.

1998: Google

The launch of Google’s search engine in 1998 marked the beginning of efficient web search, transforming how people across the globe accessed and navigated online information . Today, there are many others to choose from – Bing, Yahoo!, Baidu – but Google remains the world’s most-used search engine.

2004: Social media

Over the past two decades, the rise of social media and social networking has dominated our connected lives. In 2004, MySpace became the first social media site to reach one million monthly active users. Since then, platforms like Facebook, Instagram and TikTok have reshaped communication and social interaction , nurturing global connectivity and information sharing on an enormous scale, albeit not without controversy .

Chart: Most popular social networks worldwide as of January 2024, ranked by number of monthly active users.

2007: The iPhone

More than a decade after the first smartphone had been introduced, the iPhone redefined mobile technology by combining a phone, music player, camera and internet communicator in one sleek device. It set new standards for smartphones and ultimately accelerated the explosion of smartphone usage we see across the planet today.

2009: Bitcoin

The foundations for modern digital payments were laid in the late 1950s with the introduction of the first credit and debit cards, but it was the invention of Bitcoin in 2009 that set the stage for a new era of secure digital transactions. The first decentralized cryptocurrency, Bitcoin introduced a new form of digital payment system that operates independently of traditional banking systems. Its underlying technology, blockchain, revolutionized the concept of digital transactions by providing a secure, transparent, and decentralized method for peer-to-peer payments. Bitcoin has not only influenced the development of other cryptocurrencies but has also sparked discussions about the future of money in the digital age.
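
The hash-chaining idea at the heart of blockchain can be sketched in a few lines: each block records the hash of the previous block, so altering any earlier record breaks every link that follows. The transactions below are invented for illustration.

```python
# Each block stores the hash of the previous block; changing any earlier
# record invalidates every hash that follows. Transactions are invented.
import hashlib
import json

def block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

chain = []
prev = "0" * 64  # placeholder hash for the genesis block
for tx in ["alice->bob: 5", "bob->carol: 2"]:
    block = {"tx": tx, "prev": prev}
    chain.append(block)
    prev = block_hash(block)

# Tampering with the first transaction breaks the link to the second block.
chain[0]["tx"] = "alice->bob: 500"
print("chain intact?", chain[1]["prev"] == block_hash(chain[0]))  # False
```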

2014: Virtual reality

2014 was a pivotal year in the development of virtual reality (VR) for commercial applications. Facebook acquired the Oculus VR company for $2 billion and kickstarted a drive for high-quality VR experiences to be made accessible to consumers. Samsung and Sony also announced VR products, and Google released the now discontinued Cardboard – a low-cost, do-it-yourself viewer for smartphones. The first batch of Oculus Rift headsets began shipping to consumers in 2016.

2015: Autonomous vehicles

Autonomous vehicles have gone from science fiction to science fact in the past two decades, and predictions suggest that almost two-thirds of registered passenger cars worldwide will feature partly-assisted driving and steering by 2025 . In 2015, the introduction of Tesla’s Autopilot brought autonomous features to consumer vehicles, contributing to the mainstream adoption of self-driving technology.

Chart: Cars increasingly ready for autonomous driving.

2019: Quantum computing

A significant moment in the history of quantum computing was achieved in October 2019 when Google’s Sycamore processor demonstrated “quantum supremacy” by solving a complex problem faster than the world’s most powerful supercomputers. Quantum technologies can be used in a variety of applications and offer transformative impacts across industries. The World Economic Forum’s Quantum Economy Blueprint provides a framework for value-led, democratic access to quantum resources to help ensure an equitable global distribution and avoid a quantum divide.

2020: The COVID-19 pandemic

The COVID-19 pandemic accelerated digital transformation on an unprecedented scale . With almost every aspect of human life impacted by the spread of the virus – from communicating with loved ones to how and where we work – the rate of innovation and uptake of technology across the globe emphasized the importance of remote work, video conferencing, telemedicine and e-commerce in our daily lives.

In response to the uncertainties surrounding generative AI and the need for robust AI governance frameworks to ensure responsible and beneficial outcomes for all, the Forum’s Centre for the Fourth Industrial Revolution (C4IR) has launched the AI Governance Alliance .

The Alliance will unite industry leaders, governments, academic institutions, and civil society organizations to champion responsible global design and release of transparent and inclusive AI systems.

2022: Artificial intelligence

Artificial intelligence (AI) technology has been around for some time and AI-powered consumer electronics, from smart home devices to personalized assistants, have become commonplace. However, the emergence of mainstream applications of generative AI has dominated the sector in recent years.

In 2022, OpenAI unveiled its chatbot, ChatGPT. Within a week, it had gained over one million users and become the fastest-growing consumer app in history . In the same year, DALL-E 2, a text-to-image generative AI tool, also launched.


In the Age of AI, Students Need to Develop Their Self-Intelligence


As artificial intelligence advances at an overwhelming pace and becomes more deeply ingrained in our everyday lives, we cannot lose sight of the qualities that make us distinctly human: our self-intelligence. This includes not only our self-awareness, but our resilience, flexibility, and agility, which allow us to adapt and thrive in moments of complexity, overwhelm, and uncertainty.

These uniquely human skills are also the ones employers seek most. Self-awareness, resilience, flexibility, and agility are four of the top five skills recruited worldwide .

Yet, in higher education, we overlook teaching these human skills as an essential part of the curriculum. Despite employer and even our students’ expectations , a sense of heightened self-awareness, courage, and adaptability do not top the lists of graduating competencies. Instead, we seem to assume that, through their lived experiences in our programs, our students somehow “pick it up” along the way.

“It’s imperative that our students see their capacity to navigate change, complexity, and uncertainty as a skill they need to develop in its own right.”

This can happen, to some extent. But at the end of their higher education journeys, when students are asked to describe how they have grown as individuals and what they think they’ll be able to tackle in the future, they lack the confidence to answer these self-reflective questions .

Thus, it’s imperative that our students see their capacity to navigate change, complexity, and uncertainty as a skill they need to develop in its own right . At Sheridan College, where I am the associate vice provost of human development and potential, we have been conducting research and designing a new program over the last two years that aims to support students’ development in these areas (see sidebar).

Through this work, we’ve identified techniques you can employ within your own curriculum to help students improve their self-intelligence.

How to incorporate self-intelligence skills into any course

Here are five techniques you can integrate into your own teaching to ensure students are intentionally developing their resilience and agility alongside their subject matter expertise.

At my institution, we’re actively piloting the Sheridan S-Sense program, which we’ll launch in fall 2024. Sheridan S-Sense offers students a self-paced online learning platform (and accompanying mobile application) that recommends research-backed practices that can help them work on skills that comprise their resilience and agility.

These include cultivating awareness of their personal habits, assumptions, and biases, as well as cognitive flexibility, emotion regulation, openness to different and new approaches, and mindful learning from setbacks, among others. This work requires practice and reflection over time, so we built the platform to be available to all our students for the entirety of their academic careers.

Within the platform, students set their own aspirations and goals (e.g., I desire to be more aware of negative assumptions about myself or I desire to receive feedback in responsive, productive ways ). Then, based on their identified areas of strength and growth, the program recommends specific practices to experiment with (e.g., cognitive reframing or mindful self-compassion), nudges them to pause and reflect on their progress, and generates progress summaries on their key capabilities that can then be shared with future employers.

Early feedback from students in our piloting has been positive; over 90 percent of participating students agree that the program is relevant, valuable, and important. One student expressed it as a personal breakthrough: “This was the first time I was able to see what was really holding me back. It’s a lot of negative beliefs I have about myself and my fear of trying new approaches that’s keeping me from improving my situation.”

1. Identify the self-intelligence skills your course can cultivate

Typically, in our course design processes, we engage in backward design, meaning that we start with our end goal: What do we want our students to learn in this course? From there, we unpack these desired learning outcomes and ask ourselves, What do students have to do to achieve these outcomes? What knowledge or abilities are required for them to succeed on these projects, papers, or other forms of assessment?

Herein lies our opportunity to also ask, What self-capacities (self-intelligence skills, attitudes, or behaviors) can be cultivated during the course’s learning process?

To illustrate, a course’s learning outcome might be to “compose a persuasive essay,” and it involves students being able to “determine essential component(s) of an argument” and “evaluate evidence to support the argument (extrapolation),” among other competencies. These competencies, in turn, involve students’ abilities to process complex information and critique, synthesize, reason, and engage in divergent and convergent thinking, for example.

Driving these higher-order cognitive skills are essential self-capacities, such as students’ courage to assert a novel perspective, their emotional regulation through rounds of feedback, their curiosity to reframe a perceived dead-end or setback, and their openness to pivot their thinking.

I encourage you to explicitly identify these self-intelligence skills as part of the learning journey of your course. If you’re unsure where to start, a good source to support your discovery of these “learning bottlenecks” is the seminal work by Joan Middendorf and Leah Shopkow , which offers a strong step-by-step framework.

2. Bookend your course with self-capacity reflection

It’s quite common for faculty to begin a course with a brief knowledge survey or “hinge questions” that gauge students’ previous knowledge of the course’s content. Pairing this with questions that ask students how they want to grow personally from your course will surprise and engage them.

For example, consider asking them to share any learning-related beliefs they’re entering the course with, such as “I am terrible at writing” or “I suck at presentations.” It’s common for students to hold these broad, generalized negative beliefs as they enter a new course. I like to point out to my students how the short reflection papers, the group work, and the presentations in my seminar courses are opportunities for them to hone their skills, to reflect meaningfully on feedback, and to question these self-limiting perceptions. “How many of you are dreading the . . . ” is a great icebreaker.

“By meeting students ‘where they are’ and expanding the ways they can express their skills and capacities, we create space in our courses for students to take risks in accessible, safe, and incremental ways.”

Then, as the course progresses, encourage incremental self-checks to note anything students are developing, unlearning, and pruning within themselves. They might also notice any unhelpful learning-related habits (e.g., procrastination) that are shifting. Prompts I have used in the past include “ I used to think . . . And now I am wondering about . . . ” and “ I am trying this new approach . . . This is making me consider how I might . . . ”

Consider offering bonus points for documenting their growth. You could create a shared PowerPoint deck or use the portfolio feature in your course's learning management system and ask students to upload their responses to these prompts over time. Then celebrate their progress at the end of term; students learn a lot about their own growth from witnessing that of their peers.

3. Pair complex parts of your course with opportunities to hone self-intelligence

In my teaching, I find that when I pair complex parts of the curriculum with activities that create space for students to pause and connect with essential self-capacities that support the learning, they feel less discouraged by moments of discomfort. For example, when teaching an advanced psychology course on stigma, I highlight parts of the course where students' previous knowledge or assumptions about others typically come into question. I then insert short self-reflection papers and group discussions during these moments. This gives students the space to discuss and to reflect critically on managing their own biases and assumptions. Students also better understand how their capacity to manage and learn from dissonance is a critical part of their development as professionals.

In an introductory statistics course, I incorporate "failure demos" into the learning journey to help students learn how to persevere. I ask students to solve what are, unbeknownst to them, unsolvable problems, and then wait as they struggle to find the answers. After three to five minutes, most students begin to question whether the problems can be solved at all, with laughter emerging quickly in the room. We conclude with a shared class reflection; students typically express great appreciation for the space to talk about the ways they (and all of us) respond initially to a sense of setback. Situated early in the course, this low-stakes activity illuminates for students just how much their ability to work through complex problems relies on their capacity to keep trying, in addition to their technical understanding of the problem at hand.

The success of interventions like these isn’t surprising. When faculty step into the learner’s experience and address the muddy parts or spots that cause overwhelm, attrition rates diminish dramatically . By amplifying students’ awareness of their own abilities to navigate through challenge, we normalize the development of these skills as part of their learning.

4. Encourage students to step outside of their comfort zones in your courses

Make your class a space in which trying new things and challenging yourself is encouraged. You will be amazed by how forthcoming students are when you ask them to share what skills make them uncomfortable, but that they’d like to be better at—things like asking questions, sharing ideas, or presenting to an audience.

At the start of the term, ask students to fill in the phrase, “If only I could . . . ” with respect to any of the course’s learning activities. Then ask them to create mini self-contracts to achieve these goals in small ways.

For example, class participation has always been a key requirement in my courses, especially those with fewer than 60 students. Yet we know it is quite common for students to express varying degrees of discomfort speaking aloud, asking questions, and sharing their ideas in class. To help students enhance these communication skills, I expanded what counts as participation in my courses and created opportunities for everyone to stretch, no matter their comfort level.

“I find when I pair complex parts of the curriculum with activities that create space for students to pause and connect with essential self-capacities that support the learning, they feel less discouraged by moments of discomfort.”

Students who feel a great deal of social anxiety, for instance, don't have to speak out loud during class time; they can send me their thoughts and ideas after class. I also work with these students to set personal goals: to speak up in just a few larger-group shares (not necessarily in front of the whole class). These adaptations create space for huge self-gains, and they help these students not feel limited in how well they can engage and perform in the course. Students who feel more comfortable participating can set goals to facilitate a group activity or lead a class discussion.

Over the years, students have told me how these creative opportunities to stretch their communication comfort zones in ways that are right for them increased their self-confidence to share their perspectives more in future courses. By meeting students “where they are” and expanding the ways they can express their skills and capacities, we create space in our courses for students to take risks in accessible, safe, and incremental ways; and in the process, learn more about what they’re capable of.

5. Weave self- and group-resilience skills into team projects

If your course requires group work on a project or presentation, consider identifying (with your students) the self- and team-resilience behaviors that lead to successful partnerships. Ask students, At your fullest potential in your team today, what are you thinking, feeling, and doing? What is the team thinking, feeling, and doing? These become self and team North Stars that students set as they enter the work.

I find that devoting five to 10 minutes at the start of a class, seminar, or team meeting to pause and to identify the attitudes, feelings, and behaviors that we feel the learning experience requires from us individually (e.g., reframing negative self-beliefs, risk-taking, self-kindness, and self-reflection) and as a group (e.g., openness, withholding judgment, curiosity, and perspective-taking) is a very powerful catalyst for our active learning together.

From there, we can think of norms and practices to do individually and as a team to foster these self and team North Stars. For example, we agree to “and” rather than “but” statements when responding to ideas or perspectives different from our own. Or we invoke a collective “just say it” practice that we as a group exclaim during moments when individuals feel apprehensive to share, quashing self-limiting beliefs about ideas “not being good enough.”

At the end of the session, we look back at our individual and team North Stars, and we reflect on how well we did, how much self and team potential was created from our time together, and the ways we can carry these practices forward.

Don’t leave out self-intelligence skills

As educators, we strive with great heart to create learning experiences that are student-focused, forward-thinking, and transformative. We want to equip our students with not only a solid knowledge base, but the deeply human skills they’ll need to excel at whatever life throws their way. To do so, we need to intentionally create space in our courses for students to witness and to reflect on their personal growth.

If we don’t devote attention to students’ self-intelligence skills, we are equipping them with only half of the education they will need to not only survive, but thrive, amid ongoing innovation and technological evolution . By overlooking this essential part of a student’s development, we limit their discovery of their fullest potentials. We can do more to help them build their capacity to understand not only who they are, but who they can be in this rapidly changing world.

Cherie Werhun

Cherie Werhun is the associate vice provost of human development and potential at Sheridan College, Ontario, Canada. She previously served as strategic lead of signature learning and associate dean of teaching and learning at Sheridan College. Her passion for designing creative and learner-focused teaching and learning innovations earned her a national teaching and mentoring award from the Canadian Psychological Association, an award of innovation from the University of Toronto, and, recently, a national honorable mention for leadership from Colleges and Institutes Canada. She holds a PhD in social psychology from the University of Toronto, Canada.


  25. How AI and other technology changed our lives

    Bitcoin has not only influenced the development of other cryptocurrencies but has also sparked discussions about the future of money in the digital age. 2014: Virtual reality. 2014 was a pivotal year in the development of virtual reality (VR) for commercial applications. ... Artificial intelligence (AI) technology has been around for some time ...

  26. In the Age of AI, Students Need to Develop Their Self-Intelligence

    A s artificial intelligence advances at an overwhelming pace and becomes more deeply ingrained in our everyday lives, we cannot lose sight of the qualities that make us distinctly human—our self-intelligence.This includes not only our self-awareness, but our resilience, flexibility, and agility, which allow us to adapt and thrive in moments of complexity, overwhelm, and uncertainty.

  27. Drone Swarms Are About to Change the Balance of Military Power

    Essay; Drone Swarms Are About to Change the Balance of Military Power On today's battlefields, drones are a manageable threat. When hundreds of them can be harnessed to AI technology, they will ...

  28. SKYNET 2023 Conception of the Artificial Super Intelligence ...

    This Book proposes a Project Conception of Artificial Super Intelligence ASI, based on (strong) system approach and wide theoretical-methodological framework - Cybernetics, Synergetics, Semiotics, Mathematics, Cognitology and Artificial Intelligence. Contents: • IDEOLOGY & STRATEGY of the ASI Project • THEORY & METHODOLOGY of ASI Development