
How to Write the “Why Computer Science?” Essay

What’s covered:

  • What Is the Purpose of the “Why Computer Science?” Essay?
  • Elements of a Good Computer Science Essay
  • Computer Science Essay Example
  • Where to Get Your Essay Edited

You will encounter many essay prompts as you start applying to schools, but if you are intent on majoring in computer science or a related field, you will come across the “Why Computer Science?” essay archetype. It’s important to understand the purpose behind this prompt and what makes a strong response so that your essay stands out.

For more information on writing essays, check out CollegeVine’s extensive essay guides that include everything from general tips, to essay examples, to essay breakdowns that will help you write the essays for over 100 schools.

Colleges ask you to write a “Why Computer Science?” essay so you can communicate your passion for computer science and demonstrate how it aligns with your personal and professional goals. Admissions committees want to see that you have a deep interest in and commitment to the field, and that you have a vision for how a degree in computer science will propel your future aspirations.

The essay provides an opportunity to distinguish yourself from other applicants. It’s your chance to showcase your understanding of the discipline, your experiences that sparked or deepened your interest in the field, and your ambitions for future study and career. You can detail how a computer science degree will equip you with the skills and knowledge you need to make a meaningful contribution in this rapidly evolving field.

A well-crafted “ Why Computer Science? ” essay not only convinces the admissions committee of your enthusiasm and commitment to computer science, but also provides a glimpse of your ability to think critically, solve problems, and communicate effectively—essential skills for a  computer scientist.

The essay also gives you an opportunity to demonstrate your understanding of the specific computer science program at the college or university you are applying to. You can discuss how the program’s resources, faculty, curriculum, and culture align with your academic interests and career goals. A strong “ Why Computer Science? ” essay shows that you have done your research, and that you are applying to the program not just because you want to study computer science, but because you believe that this particular program is the best fit for you.

Writing an effective “Why Computer Science?” essay often requires a blend of two popular college essay archetypes: “Why This Major?” and “Why This College?”.

Explain “Why This Major?”

The “ Why This Major? ” essay is an opportunity for you to dig deep into your motivations and passions for studying Computer Science. It’s about sharing your ‘origin story’ of how your interest in Computer Science took root and blossomed. This part of your essay could recount an early experience with coding, a compelling Computer Science class you took, or a personal project that sparked your fascination.

What was the journey that led you to this major? Was it a particular incident, or did your interest evolve over time? Did you participate in related activities, like coding clubs, online courses, hackathons, or internships?

Importantly, this essay should also shed light on your future aspirations. How does your interest in Computer Science connect to your career goals? What kind of problems do you hope to solve with your degree?

The key to a strong “Why This Major?” essay is making the reader understand your connection to the subject by explaining your fascination with and love for computer science. What emotions do you feel when you are coding? How does it make you feel when you figure out the solution after hours of trying? What aspects of your personality shine when you are coding?

By addressing these questions, you can effectively demonstrate a deep, personal, and genuine connection with the major.

Emphasize “Why This College?”

The “ Why This College? ” component of the essay demonstrates your understanding of the specific university and its Computer Science program. This is where you show that you’ve done your homework about the college, and you know what resources it has to support your academic journey.

What unique opportunities does the university offer for Computer Science students? Are there particular courses, professors, research opportunities, or clubs that align with your interests? Perhaps there’s a study abroad program or an industry partnership that could give you a unique learning experience. Maybe the university has a particular teaching methodology that resonates with you.

Also, think about the larger university community. What aspects of the campus culture, community, location, or extracurricular opportunities enhance your interest in this college? Remember, this is not about general praise but about specific features that align with your goals. How will these resources and opportunities help you explore your interests further and achieve your career goals? How does the university’s vision and mission resonate with your own values and career aspirations?

It’s important when discussing the school’s resources that you always draw a connection between the opportunity and yourself. For example, don’t just tell us you want to work with Professor X because of their work pioneering generative AI. Go a step further: because of your goal to develop AI surgeons for remote communities, learning how to strengthen AI feedback loops from Professor X would bring you one step closer to achieving your dream.

By articulating your thoughts on these aspects, you demonstrate a strong alignment between the college and your academic goals, enhancing your appeal as a prospective student.

Demonstrate a Deep Understanding of Computer Science

As with a traditional “ Why This Major? ” essay, you must exhibit a deep and clear understanding of computer science. Discuss specific areas within the field that pique your interest and why. This could range from artificial intelligence to software development, or from data science to cybersecurity. 

What’s important is not just to boast and say “I have a strong grasp of cybersecurity,” but to use your knowledge to show your readers your passion: “After being bombarded with cyber attack after cyber attack, I explained to my grandparents the concept of end-to-end encryption and how phishing was not the same as a peaceful afternoon on a lake.”

Make it Fun!

Students often make the mistake of thinking their college essays have to be serious and hyper-professional. While you shouldn’t throw around slang and should present yourself in a positive light, that doesn’t mean you aren’t allowed to have fun with your essay. Let your personality shine and crack a few jokes.

You can, and should, also get creative with your essay. A great way to do this in a computer science essay is to incorporate lines of code or write the essay like you are writing out code. 
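For instance, here is a purely illustrative sketch (the names and details are invented, not taken from any real application) of how an applicant might frame part of their story as a short program:

```python
# A hypothetical, tongue-in-cheek essay opening written as runnable Python.
experiences = [
    "debugging my first game at 2 a.m.",
    "teaching my grandparents what phishing actually is",
]

def why_computer_science(moments):
    # Each small moment compiles into one larger motivation.
    return ("I want to study CS because " + " and ".join(moments) +
            " showed me that code is how I care for people.")

print(why_computer_science(experiences))
```

Used sparingly, a device like this can show comfort with the craft while still letting your voice come through.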

Now we will go over a real “ Why Computer Science? ” essay a student submitted and explore what the essay did well, and where there is room for improvement.

Please note: Looking at examples of real essays students have submitted to colleges can be very beneficial to get inspiration for your essays. You should never copy or plagiarize from these examples when writing your own essays. Colleges can tell when an essay isn’t genuine and will not view students favorably if they plagiarized.

I held my breath and hit RUN. Yes! A plump white cat jumped out and began to catch the falling pizzas. Although my Fat Cat project seems simple now, it was the beginning of an enthusiastic passion for computer science. Four years and thousands of hours of programming later, that passion has grown into an intense desire to explore how computer science can serve society. Every day, surrounded by technology that can recognize my face and recommend scarily-specific ads, I’m reminded of Uncle Ben’s advice to a young Spiderman: “with great power comes great responsibility”. Likewise, the need to ensure digital equality has skyrocketed with AI’s far-reaching presence in society; and I believe that digital fairness starts with equality in education.

The unique use of threads at the College of Computing perfectly matches my interests in AI and its potential use in education; the path of combined threads on Intelligence and People gives me the rare opportunity to delve deep into both areas. I’m particularly intrigued by the rich sets of both knowledge-based and data-driven intelligence courses, as I believe AI should not only show correlation of events, but also provide insight for why they occur.

In my four years as an enthusiastic online English tutor, I’ve worked hard to help students overcome both financial and technological obstacles in hopes of bringing quality education to people from diverse backgrounds. For this reason, I’m extremely excited by the many courses in the People thread that focus on education and human-centered technology. I’d love to explore how to integrate AI technology into the teaching process to make education more available, affordable, and effective for people everywhere. And with the innumerable opportunities that Georgia Tech has to offer, I know that I will be able to go further here than anywhere else.

What the Essay Did Well 

This essay perfectly accomplishes the two key parts of a “ Why Computer Science? ” essay: answering “ Why This Major? ” and “ Why This College? ”. Not to mention, we get a lot of insight into this student and what they care about beyond computer science, and a fun hook at the beginning.

Starting with the “Why This Major?” aspect of the response, this essay demonstrates what got the student into computer science, why they are passionate about the subject, and what their goals are. They show us their introduction to the world of CS with an engaging hook: “I held my breath and hit RUN. Yes! A plump white cat jumped out and began to catch the falling pizzas.” We then see this is a core passion because they spent “four years and thousands of hours” coding.

The student shows us why they care about AI with the sentence, “Every day, surrounded by technology that can recognize my face and recommend scarily-specific ads,” which makes the topic personal by demonstrating their fear of AI’s capabilities. But rather than let panic overwhelm them, the student calls upon Spiderman and tells us their goal of establishing digital equality through education. This provides a great basis for the rest of the essay, as it thoroughly explains the student’s motivations and goals and demonstrates their appreciation for interdisciplinary topics.

Then, the essay shifts into answering “Why This College?”, which it does very well by homing in on a unique facet of Georgia Tech’s College of Computing: threads. This is a great example of how to provide depth to the school resources you mention. The student describes the two threads and explains not only why the combination is important to them, but also how their previous experience (i.e., online English tutoring) aligns with the values of the thread: “For this reason, I’m extremely excited by the many courses in the People thread that focus on education and human-centered technology.”

What Could Be Improved

This essay does a good job covering the basics of the prompt, but it could be elevated with more nuance and detail. The biggest thing missing from this essay is a strong core to tie everything together. What do we mean by that? We want to see a common theme, anecdote, or motivation woven throughout the entire essay to connect everything. Take the Spiderman quote, for example. If it were expanded, it could have been the perfect core for this essay.

Underlying this student’s interest in AI is a passion for social justice, so they could have used the quote about power and responsibility to talk about existing injustices with AI and how once they have the power to create AI they will act responsibly and help affected communities. They are clearly passionate about equality of education, but there is a disconnect between education and AI that comes from a lack of detail. To strengthen the core of the essay, this student needs to include real-world examples of how AI is fostering inequities in education. This takes their essay from theoretical to practical.

Whether you’re a seasoned writer or a novice trying your hand at college application essays, the review and editing process is crucial. A fresh set of eyes can provide valuable insights into the clarity, coherence, and impact of your writing. Our free Peer Essay Review tool offers a unique platform to get your essay reviewed by another student. Peer reviews can often uncover gaps, provide new insights or enhance the clarity of your essay, making your arguments more compelling. The best part? You can return the favor by reviewing other students’ essays, which is a great way to hone your own writing and critical thinking skills.

For a more professional touch, consider getting your essay reviewed by a college admissions expert . CollegeVine advisors have years of experience helping students refine their writing and successfully apply to top-tier schools. They can provide specific advice on how to showcase your strengths, address any weaknesses, and generally present yourself in the best possible light.


Where computing might go next

The future of computing depends in part on how we reckon with its past.

By Margaret O’Mara

If the future of computing is anything like its past, then its trajectory will depend on things that have little to do with computing itself. 

Technology does not appear from nowhere. It is rooted in time, place, and opportunity. No lab is an island; machines’ capabilities and constraints are determined not only by the laws of physics and chemistry but by who supports those technologies, who builds them, and where they grow. 

Popular characterizations of computing have long emphasized the quirkiness and brilliance of those in the field, portraying a rule-breaking realm operating off on its own. Silicon Valley’s champions and boosters have perpetuated the mythos of an innovative land of garage startups and capitalist cowboys. The reality is different. Computing’s history is modern history—and especially American history—in miniature.

The United States’ extraordinary push to develop nuclear and other weapons during World War II unleashed a torrent of public spending on science and technology. The efforts thus funded trained a generation of technologists and fostered multiple computing projects, including ENIAC —the first all-digital computer, completed in 1946. Many of those funding streams eventually became permanent, financing basic and applied research at a scale unimaginable before the war. 

The strategic priorities of the Cold War drove rapid development of transistorized technologies on both sides of the Iron Curtain. In a grim race for nuclear supremacy amid an optimistic age of scientific aspiration, government became computing’s biggest research sponsor and largest single customer. Colleges and universities churned out engineers and scientists. Electronic data processing defined the American age of the Organization Man, a nation built and sorted on punch cards. 

The space race, especially after the Soviets beat the US into space with the launch of the Sputnik orbiter in late 1957, jump-started a silicon semiconductor industry in a sleepy agricultural region of Northern California, eventually shifting tech’s center of entrepreneurial gravity from East to West. Lanky engineers in white shirts and narrow ties turned giant machines into miniature electronic ones, sending Americans to the moon. (Of course, there were also women playing key, though often unrecognized, roles.) 

In 1965, semiconductor pioneer Gordon Moore, who with colleagues had broken ranks with his boss William Shockley of Shockley Semiconductor to launch a new company, predicted that the number of transistors on an integrated circuit would double every year while costs would stay about the same. Moore’s Law was proved right. As computing power became greater and cheaper, digital innards replaced mechanical ones in nearly everything from cars to coffeemakers.

A new generation of computing innovators arrived in the Valley, beneficiaries of America’s great postwar prosperity but now protesting its wars and chafing against its culture. Their hair grew long; their shirts stayed untucked. Mainframes were seen as tools of the Establishment, and achievement on earth overshadowed shooting for the stars. Small was beautiful. Smiling young men crouched before home-brewed desktop terminals and built motherboards in garages. A beatific newly minted millionaire named Steve Jobs explained how a personal computer was like a bicycle for the mind. Despite their counterculture vibe, they were also ruthlessly competitive businesspeople. Government investment ebbed and private wealth grew. 

The ARPANET became the commercial internet. What had been a walled garden accessible only to government-funded researchers became an extraordinary new platform for communication and business, as the screech of dial-up modems connected millions of home computers to the World Wide Web. Making this strange and exciting world accessible were very young companies with odd names: Netscape, eBay, Amazon.com, Yahoo.

By the turn of the millennium, a president had declared that the era of big government was over and the future lay in the internet’s vast expanse. Wall Street clamored for tech stocks, then didn’t; fortunes were made and lost in months. After the bust, new giants emerged. Computers became smaller: a smartphone in your pocket, a voice assistant in your kitchen. They grew larger, into the vast data banks and sprawling server farms of the cloud. 

Fed with oceans of data, largely unfettered by regulation, computing got smarter. Autonomous vehicles trawled city streets, humanoid robots leaped across laboratories, algorithms tailored social media feeds and matched gig workers to customers. Fueled by the explosion of data and computation power, artificial intelligence became the new new thing. Silicon Valley was no longer a place in California but shorthand for a global industry, although tech wealth and power were consolidated ever more tightly in five US-based companies with a combined market capitalization greater than the GDP of Japan. 

It was a trajectory of progress and wealth creation that some believed inevitable and enviable. Then, starting two years ago, resurgent nationalism and an economy-upending pandemic scrambled supply chains, curtailed the movement of people and capital, and reshuffled the global order. Smartphones recorded death on the streets and insurrection at the US Capitol. AI-enabled drones surveyed the enemy from above and waged war on those below. Tech moguls sat grimly before congressional committees, their talking points ringing hollow to freshly skeptical lawmakers.

Our relationship with computing had suddenly changed.

The past seven decades have produced stunning breakthroughs in science and engineering. The pace and scale of change would have amazed our mid-20th-century forebears. Yet techno-optimistic assurances about the positive social power of a networked computer on every desk have proved tragically naïve. The information age of late has been more effective at fomenting discord than advancing enlightenment, exacerbating social inequities and economic inequalities rather than transcending them. 

The technology industry—produced and made wealthy by these immense advances in computing—has failed to imagine alternative futures both bold and practicable enough to address humanity’s gravest health and climatic challenges. Silicon Valley leaders promise space colonies while building grand corporate headquarters below sea level. They proclaim that the future lies in the metaverse , in the blockchain, in cryptocurrencies whose energy demands exceed those of entire nation-states. 

The future of computing feels more tenuous, harder to map in a sea of information and disruption. That is not to say that predictions are futile, or that those who build and use technology have no control over where computing goes next. To the contrary: history abounds with examples of individual and collective action that altered social and political outcomes. But there are limits to the power of technology to overcome earthbound realities of politics, markets, and culture. 

To understand computing’s future, look beyond the machine.

1. The hoodie problem

First, look to who will get to build the future of computing.

The tech industry long celebrated itself as a meritocracy, where anyone could get ahead on the strength of technical know-how and innovative spark. This assertion has been belied in recent years by the persistence of sharp racial and gender imbalances, particularly in the field’s topmost ranks. Men still vastly outnumber women in the C-suites and in key engineering roles at tech companies. Venture capital investors and venture-backed entrepreneurs remain mostly white and male. The number of Black and Latino technologists of any gender remains shamefully tiny. 

Much of today’s computing innovation was born in Silicon Valley . And looking backward, it becomes easier to understand where tech’s meritocratic notions come from, as well as why its diversity problem has been difficult to solve. 

Silicon Valley was once indeed a place where people without family money or connections could make a career and possibly a fortune. Those lanky engineers of the Valley’s space-age 1950s and 1960s were often heartland boys from middle-class backgrounds, riding the extraordinary escalator of upward mobility that America delivered to white men like them in the prosperous quarter-century after the end of World War II.  

Many went to college on the GI Bill and won merit scholarships to places like Stanford and MIT, or paid minimal tuition at state universities like the University of California, Berkeley. They had their pick of engineering jobs as defense contracts fueled the growth of the electronics industry. Most had stay-at-home wives whose unpaid labor freed husbands to focus their energy on building new products, companies, markets. Public investments in suburban infrastructure made their cost of living reasonable, the commutes easy, the local schools excellent. Both law and market discrimination kept these suburbs nearly entirely white. 

In the last half-century, political change and market restructuring slowed this escalator of upward mobility to a crawl , right at the time that women and minorities finally had opportunities to climb on. By the early 2000s, the homogeneity among those who built and financed tech products entrenched certain assumptions: that women were not suited for science, that tech talent always came dressed in a hoodie and had attended an elite school—whether or not someone graduated. It limited thinking about what problems to solve, what technologies to build, and what products to ship. 

Having so much technology built by a narrow demographic—highly educated, West Coast based, and disproportionately white, male, and young—becomes especially problematic as the industry and its products grow and globalize. It has fueled considerable investment in driverless cars without enough attention to the roads and cities these cars will navigate. It has propelled an embrace of big data without enough attention to the human biases contained in that data . It has produced social media platforms that have fueled political disruption and violence at home and abroad. It has left rich areas of research and potentially vast market opportunities neglected.

Computing’s lack of diversity has always been a problem, but only in the past few years has it become a topic of public conversation and a target for corporate reform. That’s a positive sign. The immense wealth generated within Silicon Valley has also created a new generation of investors, including women and minorities who are deliberately putting their money in companies run by people who look like them. 

But change is painfully slow. The market will not take care of imbalances on its own.

For the future of computing to include more diverse people and ideas, there needs to be a new escalator of upward mobility: inclusive investments in research, human capital, and communities that give a new generation the same assist the first generation of space-age engineers enjoyed. The builders cannot do it alone.

2. Brainpower monopolies

Then, look at who the industry's customers are and how it is regulated.

The military investment that undergirded computing’s first all-digital decades still casts a long shadow. Major tech hubs of today—the Bay Area, Boston, Seattle, Los Angeles—all began as centers of Cold War research and military spending. As the industry further commercialized in the 1970s and 1980s, defense activity faded from public view, but it hardly disappeared. For academic computer science, the Pentagon became an even more significant benefactor starting with Reagan-era programs like the Strategic Defense Initiative, the computer-enabled system of missile defense memorably nicknamed “Star Wars.”

In the past decade, after a brief lull in the early 2000s, the ties between the technology industry and the Pentagon have tightened once more. Some in Silicon Valley protest its engagement in the business of war, but their objections have done little to slow the growing stream of multibillion-dollar contracts for cloud computing and cyberweaponry. It is almost as if Silicon Valley is returning to its roots. 

Defense work is one dimension of the increasingly visible and freshly contentious entanglement between the tech industry and the US government. Another is the growing call for new technology regulation and antitrust enforcement, with potentially significant consequences for how technological research will be funded and whose interests it will serve. 

The extraordinary consolidation of wealth and power in the technology sector and the role the industry has played in spreading disinformation and sparking political ruptures have led to a dramatic change in the way lawmakers approach the industry. The US has had little appetite for reining in the tech business since the Department of Justice took on Microsoft 20 years ago. Yet after decades of bipartisan chumminess and laissez-faire tolerance, antitrust and privacy legislation is now moving through Congress. The Biden administration has appointed some of the industry’s most influential tech critics to key regulatory roles and has pushed for significant increases in regulatory enforcement. 

The five giants—Amazon, Apple, Facebook, Google, and Microsoft—now spend as much or more lobbying in Washington, DC, as banks, pharmaceutical companies, and oil conglomerates, aiming to influence the shape of anticipated regulation. Tech leaders warn that breaking up large companies will open a path for Chinese firms to dominate global markets, and that regulatory intervention will squelch the innovation that made Silicon Valley great in the first place.

Viewed through a longer lens, the political pushback against Big Tech’s power is not surprising. Although sparked by the 2016 American presidential election, the Brexit referendum, and the role social media disinformation campaigns may have played in both, the political mood echoes one seen over a century ago. 

We might be looking at a tech future where companies remain large but regulated, comparable to the technology and communications giants of the middle part of the 20th century. This model did not squelch technological innovation. Today, it could actually aid its growth and promote the sharing of new technologies. 

Take the case of AT&T, a regulated monopoly for seven decades before its ultimate breakup in the early 1980s. In exchange for allowing it to provide universal telephone service, the US government required AT&T to stay out of other communication businesses, first by selling its telegraph subsidiary and later by steering clear of computing. 

Like any for-profit enterprise, AT&T had a hard time sticking to the rules, especially after the computing field took off in the 1940s. One of these violations resulted in a 1956 consent decree under which the US required the telephone giant to license the inventions produced in its industrial research arm, Bell Laboratories, to other companies. One of those products was the transistor. Had AT&T not been forced to share this and related technological breakthroughs with other laboratories and firms, the trajectory of computing would have been dramatically different.

Right now, industrial research and development activities are extraordinarily concentrated once again. Regulators mostly looked the other way over the past two decades as tech firms pursued growth at all costs, and as large companies acquired smaller competitors. Top researchers left academia for high-paying jobs at the tech giants as well, consolidating a huge amount of the field’s brainpower in a few companies. 

More so than at any other time in Silicon Valley’s ferociously entrepreneurial history, it is remarkably difficult for new entrants and their technologies to sustain meaningful market share without being subsumed or squelched by a larger, well-capitalized, market-dominant firm. More of computing’s big ideas are coming from a handful of industrial research labs and, not surprisingly, reflecting the business priorities of a select few large tech companies.

Tech firms may decry government intervention as antithetical to their ability to innovate. But follow the money, and the regulation, and it is clear that the public sector has played a critical role in fueling new computing discoveries—and building new markets around them—from the start. 

3. Location, location, location

Last, think about where the business of computing happens.

The question of where “the next Silicon Valley” might grow has consumed politicians and business strategists around the world for far longer than you might imagine. French president Charles de Gaulle toured the Valley in 1960 to try to unlock its secrets. Many world leaders have followed in the decades since. 

Silicon Somethings have sprung up across many continents, their gleaming research parks and California-style subdivisions designed to lure a globe-trotting workforce and cultivate a new set of tech entrepreneurs. Many have fallen short of their startup dreams, and all have fallen short of the standard set by the original, which has retained an extraordinary ability to generate one blockbuster company after another, through boom and bust. 

While tech startups have begun to appear in a wider variety of places, about three in 10 venture capital firms and close to 60% of available investment dollars remain concentrated in the Bay Area. After more than half a century, it remains the center of computing innovation. 

It does, however, have significant competition. China has been making the kinds of investments in higher education and advanced research that the US government made in the early Cold War, and its technology and internet sectors have produced enormous companies with global reach. 

The specter of Chinese competition has driven bipartisan support for renewed American tech investment, including a potentially massive infusion of public subsidies into the US semiconductor industry. American companies have been losing ground to Asian competitors in the chip market for years. The economy-choking consequences of this became painfully clear when covid-related shutdowns slowed chip imports to a trickle, throttling production of the many consumer goods that rely on semiconductors to function.

As when Japan posed a competitive threat 40 years ago, the American agitation over China runs the risk of slipping into corrosive stereotypes and lightly veiled xenophobia. But it is also true that computing technology reflects the state and society that makes it, whether it be the American military-industrial complex of the late 20th century, the hippie-influenced West Coast culture of the 1970s, or the communist-capitalist China of today.

What’s next

Historians like me dislike making predictions. We know how difficult it is to map the future, especially when it comes to technology, and how often past forecasters have gotten things wrong. 

Intensely forward-thinking and impatient with incrementalism, many modern technologists—especially those at the helm of large for-profit enterprises—are the opposite. They disdain politics, and resist getting dragged down by the realities of past and present as they imagine what lies over the horizon. They dream of a new age of quantum computers and artificial general intelligence, where machines do most of the work and much of the thinking. 

They could use a healthy dose of historical thinking. 

Whatever computing innovations will appear in the future, what matters most is how our culture, businesses, and society choose to use them. And those of us who analyze the past also should take some inspiration and direction from the technologists who have imagined what is not yet possible. Together, looking forward and backward, we may yet be able to get where we need to go. 


The present and future of AI

Finale Doshi-Velez on how AI is shaping our lives and how we can shape AI.


How has artificial intelligence changed and shaped our world over the last five years? How will AI continue to impact our lives in the coming years? Those were the questions addressed in the most recent report from the One Hundred Year Study on Artificial Intelligence (AI100), an ongoing project hosted at Stanford University that will study the status of AI technology and its impacts on the world over the next 100 years.

The 2021 report is the second in a series that will be released every five years until 2116. Titled “Gathering Strength, Gathering Storms,” the report explores the various ways AI is increasingly touching people’s lives in settings that range from movie recommendations and voice assistants to autonomous driving and automated medical diagnoses.

Barbara Grosz, the Higgins Research Professor of Natural Sciences at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS), is a member of the standing committee overseeing the AI100 project, and Finale Doshi-Velez, Gordon McKay Professor of Computer Science, is part of the panel of interdisciplinary researchers who wrote this year’s report.

We spoke with Doshi-Velez about the report, what it says about the role AI is currently playing in our lives, and how it will change in the future.  

Q: Let's start with a snapshot: What is the current state of AI and its potential?

Doshi-Velez: Some of the biggest changes in the last five years have been how well AIs now perform in large data regimes on specific types of tasks.  We've seen [DeepMind’s] AlphaZero become the best Go player entirely through self-play, and everyday uses of AI such as grammar checks and autocomplete, automatic personal photo organization and search, and speech recognition become commonplace for large numbers of people.  

In terms of potential, I'm most excited about AIs that might augment and assist people.  They can be used to drive insights in drug discovery, help with decision making such as identifying a menu of likely treatment options for patients, and provide basic assistance, such as lane keeping while driving or text-to-speech based on images from a phone for the visually impaired.  In many situations, people and AIs have complementary strengths. I think we're getting closer to unlocking the potential of people and AI teams.

Q: Over the course of 100 years, these reports will tell the story of AI and its evolving role in society. Even though there have only been two reports, what's the story so far?

There's actually a lot of change even in five years. The first report is fairly rosy. For example, it mentions how algorithmic risk assessments may mitigate the human biases of judges. The second has a much more mixed view. I think this comes from the fact that as AI tools have come into the mainstream — both in higher stakes and everyday settings — we are appropriately much less willing to tolerate flaws, especially discriminatory ones. There have also been questions about information and disinformation control as people get their news, social media, and entertainment via searches and rankings personalized to them. So, there's a much greater recognition that we should not be waiting for AI tools to become mainstream before making sure they are ethical.

Q: What is the responsibility of institutes of higher education in preparing students and the next generation of computer scientists for the future of AI and its impact on society?

First, I'll say that the need to understand the basics of AI and data science starts much earlier than higher education!  Children are being exposed to AIs as soon as they click on videos on YouTube or browse photo albums. They need to understand aspects of AI such as how their actions affect future recommendations.

But for computer science students in college, I think a key thing that future engineers need to realize is when to demand input and how to talk across disciplinary boundaries to get at often difficult-to-quantify notions of safety, equity, fairness, etc.  I'm really excited that Harvard has the Embedded EthiCS program to provide some of this education.  Of course, this is an addition to standard good engineering practices like building robust models, validating them, and so forth, which is all a bit harder with AI.

Q: Your work focuses on machine learning with applications to healthcare, which is also an area of focus of this report. What is the state of AI in healthcare? 

A lot of AI in healthcare has been on the business end, used for optimizing billing, scheduling surgeries, that sort of thing.  When it comes to AI for better patient care, which is what we usually think about, there are few legal, regulatory, and financial incentives to do so, and many disincentives. Still, there's been slow but steady integration of AI-based tools, often in the form of risk scoring and alert systems.

In the near future, two applications that I'm really excited about are triage in low-resource settings — having AIs do initial reads of pathology slides, for example, if there are not enough pathologists, or get an initial check of whether a mole looks suspicious — and ways in which AIs can help identify promising treatment options for discussion with a clinician team and patient.

Q: Any predictions for the next report?

I'll be keen to see where currently nascent AI regulation initiatives have gotten to. Accountability is such a difficult question in AI; it's tricky to nurture both innovation and basic protections. Perhaps the most important innovation will be in approaches to AI accountability.


7 Important Computer Science Trends 2024-2027


Here are the 7 fastest-growing computer science trends happening right now.

And how these technologies are challenging the status quo in the office and on college campuses.

Whether you’re a fresh computer science graduate or a veteran IT executive, these are the top trends to explore.

1. Quantum computing makes waves


Quantum computing is the use of quantum-mechanical phenomena, such as entanglement and superposition, to perform computations.

It uses quantum bits (qubits) in a similar way that regular computers use bits.

Quantum computers have the potential to solve problems that would take the world's most powerful supercomputers millions of years.
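To make qubits a little more concrete, here is a minimal sketch, assuming Python with NumPy (neither is mentioned in the article), that simulates a two-qubit state vector: a Hadamard gate puts the first qubit into superposition, and a CNOT gate entangles the pair into a Bell state.

```python
import numpy as np

# Single-qubit |0> state and the Hadamard gate (creates superposition).
zero = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# CNOT gate on two qubits (creates entanglement); basis order |00>, |01>, |10>, |11>.
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

state = np.kron(H @ zero, zero)   # (|00> + |10>) / sqrt(2): first qubit in superposition
state = CNOT @ state              # (|00> + |11>) / sqrt(2): the two qubits are now entangled

# Measurement probabilities: only 00 and 11 ever occur, each half the time.
print({format(i, "02b"): round(abs(amp) ** 2, 3) for i, amp in enumerate(state)})
```

Real quantum hardware does not work by multiplying matrices like this, of course; the sketch only shows why n qubits describe 2^n amplitudes at once, which is where the potential speedups come from.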


Companies including IBM, Microsoft and Google are all in competition to build reliable quantum computers.

In fact, in September 2019, Google AI and NASA published a joint paper that claimed to have achieved "quantum supremacy".

This is when a quantum computer outperforms a traditional one at a particular task.

Quantum computers have the potential to completely transform data science.

They also have the potential to accelerate the development of artificial intelligence, virtual reality, big data, deep learning, encryption, medicine and more.

The downside is that quantum computers are currently incredibly difficult to build and sensitive to interference.


Despite current limitations, it's fair to expect further advances from Google and others that will help make quantum computers practical to use.

Which would position quantum computing as one of the most important computer science trends in the coming years.

2. Zero Trust becomes the norm


Most information security frameworks used by organizations rely on traditional trust-based authentication methods (like passwords).

These frameworks focus on protecting network access.

And they assume that anyone who has access to the network should be able to access any data and resources they'd like.

There's a big downside to this approach: a bad actor who gets in via any entry point can then move around freely to access all data or delete it altogether.

Zero Trust information security models aim to prevent this potential vulnerability. 

Zero Trust models replace the old assumption that every user within an organization’s network can be trusted.

Instead, nobody is trusted, whether they’re already inside or outside the network.

Verification is required from everyone trying to gain access to any resource on the network.
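As a rough sketch of what "verify every request" can mean in code (Python standard library only; the users, resources, and secret below are invented for illustration), each call must present a short-lived signed token and pass a per-resource authorization check, regardless of where on the network it comes from:

```python
import hashlib
import hmac
import time

SECRET = b"rotate-me-often"  # in practice, issued and rotated by an identity provider

def sign(user: str, resource: str, expires: int) -> str:
    msg = f"{user}|{resource}|{expires}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

# Least-privilege policy: access is granted per user, per resource.
ALLOWED = {"alice": {"payroll-db:read"}, "build-bot": {"artifacts:write"}}

def authorize(user: str, resource: str, expires: int, signature: str) -> bool:
    if time.time() > expires:                                  # credentials expire quickly
        return False
    if not hmac.compare_digest(sign(user, resource, expires), signature):
        return False                                           # token is invalid or tampered with
    return resource in ALLOWED.get(user, set())                # network location is never trusted

expires = int(time.time()) + 300
token = sign("alice", "payroll-db:read", expires)
print(authorize("alice", "payroll-db:read", expires, token))   # True
print(authorize("alice", "artifacts:write", expires, token))   # False: token doesn't cover this
```

The point is not the specific crypto but the posture: every request is authenticated and authorized on its own, with no blanket trust for traffic that is already "inside."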


Huge companies like Cisco are investing heavily to develop Zero Trust solutions.

This security architecture is quickly moving from just a computer science concept to industry best practice.

And it’s little wonder why: IBM reports that the average data breach costs a company $3.86 million in damages.

And that it takes an average of 280 days to fully recover.

We will see demand for this technology continue to skyrocket in 2024 and beyond as businesses adopt zero-trust security to mitigate this risk.

3. Cloud computing hits the edge


“ Edge computing ” searches have risen 161% over the past 5 years. This market may be worth $8.67 billion by 2025.

Gartner estimates that 80% of enterprises will shut down their traditional data centers by 2025.

This is mainly because traditional cloud computing relies on servers in one central location.


If the end-user is in another country, they have to wait while data travels thousands of miles.

Latency issues like this can really hamper an application’s performance (especially for high-bandwidth media, like video).

Which is why many companies are moving over to edge computing service providers instead.

Modern edge computing brings computation, data storage, and data analytics as close as possible to the end-user location.

And when edge servers host web applications, the result is massively improved response times.
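As a toy illustration of the routing idea (the region names and latencies below are made up), an edge-aware client simply serves each user from whichever location responds fastest, instead of always going back to one central data center:

```python
# Hypothetical measured round-trip times from one user, in milliseconds.
latency_ms = {
    "us-central (origin data center)": 142,
    "edge-frankfurt": 18,
    "edge-singapore": 61,
    "edge-sao-paulo": 95,
}

def pick_region(latencies: dict[str, float]) -> str:
    # The essence of edge computing: serve from the closest healthy location.
    return min(latencies, key=latencies.get)

print(pick_region(latency_ms))  # -> "edge-frankfurt"
```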


As a result, some estimates suggest that the edge computing market will be worth $61.14 billion by 2028.

And Content Delivery Networks like Cloudflare that make edge computing easy and accessible will increasingly power the web.

4. Kotlin overtakes Java

“ Kotlin ” searches are up 95% in 5 years. Interest in this programming language rocketed in 2022.

Kotlin is a general-purpose programming language that first appeared in 2011.

It’s designed specifically to be a more concise and streamlined alternative to Java.

And so it works for both JVM (Java Virtual Machine) and Android development.


Kotlin is billed as a modern programming language that makes developers happier.

There are over 7 million Java programmers in the world right now.

Since Kotlin offers big advantages over Java, we can expect more and more programmers to make the switch between 2023 and 2026.

Google even made the announcement in 2019 that Kotlin is now its preferred language for Android app developers.

5. The web becomes more standardized


REST (Representational State Transfer) web services power the internet and the data behind it.

But the structure of each REST API data source varies wildly.

It depends entirely on how the individual programmer behind it decided to design it.

The OpenAPI Specification (OAS) changes this. It’s essentially a description format for REST APIs.


Data sources that implement OAS are easy to learn and readable to both humans and machines.

This is because an OpenAPI file describes the entire API, including available endpoints, operations and outputs.

This standardization enables the automation of previously time-consuming tasks.

For example, tools like Swagger generate code, documentation and test cases given the OAS interface file.

This can save a huge amount of engineering time both upfront and in the long run.
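To give a sense of what such a description looks like, here is a minimal, hypothetical OpenAPI 3 document for a single made-up endpoint, written as a Python dictionary so it can be dumped to JSON or YAML and fed to tooling:

```python
import json

spec = {
    "openapi": "3.0.3",
    "info": {"title": "Example Trends API", "version": "1.0.0"},
    "paths": {
        "/trends": {
            "get": {
                "summary": "List trending topics",
                "responses": {
                    "200": {
                        "description": "A JSON array of topic names",
                        "content": {
                            "application/json": {
                                "schema": {"type": "array", "items": {"type": "string"}}
                            }
                        },
                    }
                },
            }
        }
    },
}

print(json.dumps(spec, indent=2))  # both humans and machines can read this description
```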

Another technology that takes this concept to the next level is GraphQL, a data query language for APIs developed at Facebook.

It provides a complete description of the data available in a particular source. And it also gives clients the ability to ask for only the specific parts of the data they need and nothing more.
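For example (using a hypothetical schema, not a real API), a GraphQL client spells out exactly which fields it wants, and the server returns those fields and nothing else:

```python
# A hypothetical GraphQL query: only the requested fields come back.
query = """
{
  topic(name: "edge computing") {
    name
    fiveYearGrowthPercent
  }
}
"""
# In practice this string is POSTed to the API's /graphql endpoint;
# the response contains name and fiveYearGrowthPercent, nothing more.
print(query)
```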


GraphQL is a query language for APIs and a runtime for fulfilling those queries with your existing data.

It too has become widely used and massively popular. Frameworks and specifications like this that standardize all aspects of the internet will continue to gain wide adoption.

6. More digital twins


Interest in “ Digital twin ” has steadily grown (300%) over the last 5 years.

A digital twin is a software representation of a real-world entity or process, from which you can generate and analyze simulation data.

This way you can improve efficiency and avoid problems before devices are even built and deployed.
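As a toy sketch of the idea (not based on GE's actual platform; the readings and thresholds are invented), a digital twin can be as simple as an object that mirrors a device's sensor stream and runs a model forward to flag problems before they happen:

```python
from dataclasses import dataclass, field

@dataclass
class EngineTwin:
    """Toy digital twin of a hypothetical engine: mirrors telemetry, simulates ahead."""
    temps_c: list = field(default_factory=list)

    def ingest(self, reading_c: float) -> None:
        self.temps_c.append(reading_c)                # mirror the real sensor stream

    def forecast(self, hours: int) -> float:
        if len(self.temps_c) < 2:
            return self.temps_c[-1] if self.temps_c else 0.0
        drift = self.temps_c[-1] - self.temps_c[-2]   # crude linear extrapolation
        return self.temps_c[-1] + drift * hours

twin = EngineTwin()
for reading in (612.0, 615.5, 619.2):                 # made-up telemetry from the physical engine
    twin.ingest(reading)

if twin.forecast(hours=8) > 640.0:                    # act on the twin before the real engine overheats
    print("Schedule maintenance: projected temperature exceeds the safe limit.")
```

A production twin would replace the two-line extrapolation with a physics or machine-learning model, but the loop is the same: ingest real data, simulate ahead, act.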

GE is the big name in the field and has developed internal digital twin technology to improve its own jet-engine manufacturing process.


GE's Predix platform is a huge player in the digital twin technology market.

This technology was initially only available at the big enterprise level, with GE’s Predix industrial Internet of Things (IoT) platform.

But now we’re seeing its usage permeate across other sectors like retail warehousing, auto manufacturing, and healthcare planning.

Yet case studies of these real-world use cases are thin on the ground, so the people who produce them will set themselves up as industry experts in their field.

7. Demand for cybersecurity expertise skyrockets

“ Hack The Box ” searches have increased by 285% over 5 years.

According to CNET, at least 7.9 billion records (including credit card numbers, home addresses and phone numbers) were exposed through data breaches in 2019 alone.

As a consequence, large numbers of companies seek cybersecurity expertise to protect themselves.


Hack The Box is an online platform that has a wealth of educational information and hundreds of cybersecurity-themed challenges.

And they have 290,000 active users who test and improve their skills in penetration testing.

So they’ve become the go-to place for companies to recruit new talent for their cybersecurity teams.



And software that helps people to identify if they’ve had their credentials compromised by data breaches will also trend.

One of the most well-known tools currently is Have I Been Pwned .

It allows you to search across multiple data breaches to see if your email address has been compromised.
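Checking a specific email address programmatically generally requires an API key, but the related Pwned Passwords endpoint is free and uses a k-anonymity scheme: only the first five characters of the password's SHA-1 hash ever leave your machine. A minimal sketch (the endpoint is real; the example password is arbitrary):

```python
import hashlib
from urllib.request import urlopen

def pwned_count(password: str) -> int:
    """Return how many times a password appears in known breaches (0 if none)."""
    digest = hashlib.sha1(password.encode()).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    # Only the 5-character prefix is sent; matching suffixes are checked locally.
    with urlopen(f"https://api.pwnedpasswords.com/range/{prefix}") as resp:
        body = resp.read().decode()
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

print(pwned_count("password123"))  # a non-zero count means: never use this password
```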

That's our list of the 7 most important computer science trends to keep an eye on over the next 3-4 years.

From machine learning to blockchain to AR, it's an exciting time to be in the computer science field.

CS has always been a rapidly changing industry.

But with the growth of completely new technologies (especially cloud computing and machine learning), it's fair to expect that the rate of change will increase in 2024 and beyond.



Envisioning the future of computing


How will advances in computing transform human society?

MIT students contemplated this impending question as part of the Envisioning the Future of Computing Prize — an essay contest in which they were challenged to imagine ways that computing technologies could improve our lives, as well as the pitfalls and dangers associated with them.

Offered for the first time this year, the Institute-wide competition invited MIT undergraduate and graduate students to share their ideas, aspirations, and vision for what they think a future propelled by advancements in computing holds. Nearly 60 students put pen to paper, including those majoring in mathematics, philosophy, electrical engineering and computer science, brain and cognitive sciences, chemical engineering, urban studies and planning, and management, and entered their submissions.

Students dreamed up highly inventive scenarios for how the technologies of today and tomorrow could impact society, for better or worse. Some recurring themes emerged, such as tackling issues in climate change and health care. Others proposed ideas for particular technologies that ranged from digital twins as a tool for navigating the deluge of information online to a cutting-edge platform powered by artificial intelligence, machine learning, and biosensors to create personalized storytelling films that help individuals understand themselves and others.

Conceived of by the Social and Ethical Responsibilities of Computing (SERC), a cross-cutting initiative of the MIT Schwarzman College of Computing in collaboration with the School of Humanities, Arts, and Social Sciences (SHASS), the intent of the competition was “to create a space for students to think in a creative, informed, and rigorous way about the societal benefits and costs of the technologies they are or will be developing,” says Caspar Hare, professor of philosophy, co-associate dean of SERC, and the lead organizer of the Envisioning the Future of Computing Prize. “We also wanted to convey that MIT values such thinking.”

Prize winners

The contest implemented a two-stage evaluation process wherein all essays were reviewed anonymously by a panel of MIT faculty members from the college and SHASS for the initial round. Three qualifiers were then invited to present their entries at an awards ceremony on May 8, followed by a Q&A with a judging panel and live in-person audience for the final round.

The winning entry was awarded to Robert Cunningham '23, a recent graduate in math and physics, for his paper on the implications of a personalized language model that is fine-tuned to predict an individual’s writing based on their past texts and emails. Told from the perspective of three fictional characters (Laura, founder of the tech startup ScribeAI, and Margaret and Vincent, a couple in college who are frequent users of the platform), the story gives readers insight into the societal shifts that take place and the unforeseen repercussions of the technology.

Cunningham, who took home the grand prize of $10,000, says he came up with the concept for his essay in late January while thinking about the upcoming release of GPT-4 and how it might be applied. Created by the developers of ChatGPT — an AI chatbot that has managed to capture popular imagination for its capacity to imitate human-like text, images, audio, and code — GPT-4, which was unveiled in March, is the newest version of OpenAI’s language model systems.

“GPT-4 is wild in reality, but some rumors before it launched were even wilder, and I had a few long plane rides to think about them! I enjoyed this opportunity to solidify a vague notion into a piece of writing, and since some of my favorite works of science fiction are short stories, I figured I'd take the chance to write one,” Cunningham says.

The other two finalists, awarded $5,000 each, included Gabrielle Kaili-May Liu '23, a recent graduate in mathematics with computer science, and brain and cognitive sciences, for her entry on using the reinforcement learning with human feedback technique as a tool for transforming human interactions with AI; and Abigail Thwaites and Eliot Matthew Watkins, graduate students in the Department of Philosophy and Linguistics, for their joint submission on automatic fact checkers, an AI-driven software that they argue could potentially help mitigate the spread of misinformation and be a profound social good.

“We were so excited to see the amazing response to this contest. It made clear how much students at MIT, contrary to stereotype, really care about the wider implications of technology,” says Daniel Jackson, professor of computer science and one of the final-round judges. “So many of the essays were incredibly thoughtful and creative. Robert’s story was a chilling but entirely plausible take on our AI future; Abigail and Eliot’s analysis brought new clarity to what harms misinformation actually causes; and Gabrielle’s piece gave a lucid overview of a prominent new technology. I hope we’ll be able to run this contest every year, and that it will encourage all our students to broaden their perspectives even further.”

Fellow judge Graham Jones, professor of anthropology, adds: “The winning entries reflected the incredible breadth of our students’ engagement with socially responsible computing. They challenge us to think differently about how to design computational technologies, conceptualize social impacts, and imagine future scenarios. Working with a cross-disciplinary panel of judges catalyzed lots of new conversations. As a sci-fi fan, I was thrilled that the top prize went to such a stunning piece of speculative fiction!”

Other judges on the panel for the final round included:

  • Dan Huttenlocher, dean of the MIT Schwarzman College of Computing;
  • Aleksander Madry, Cadence Design Systems Professor of Computer Science;
  • Asu Ozdaglar, deputy dean of academics for the MIT Schwarzman College of Computing and head of the Department of Electrical Engineering and Computer Science;
  • Georgia Perakis, co-associate dean of SERC and the William F. Pounds Professor of Management; and
  • Agustin Rayo, dean of the MIT School of Humanities, Arts, and Social Sciences.

Honorable mentions

In addition to the grand prize winner and the runners-up, 12 students were recognized with honorable mentions for their entries, each receiving $500.

The honorees and the title of their essays include:

  • Alexa Reese Canaan, Technology and Policy Program, “A New Way Forward: The Internet & Data Economy”;
  • Fernanda De La Torre Romo, Department of Brain and Cognitive Sciences, “The Empathic Revolution: Using AI to Foster Greater Understanding and Connection”;
  • Samuel Florin, Department of Mathematics, "Modeling International Solutions for the Climate Crisis";
  • Claire Gorman, Department of Urban Studies and Planning (DUSP), “Grounding AI — Envisioning Inclusive Computing for Soil Carbon Applications”;
  • Kevin Hansom, MIT Sloan School of Management, “Quantum Powered Personalized Pharmacogenetic Development and Distribution Model”;
  • Sharon Jiang, Department of Electrical Engineering and Computer Science (EECS), “Machine Learning Driven Transformation of Electronic Health Records”;
  • Cassandra Lee, Media Lab, “Considering an Anti-convenience Funding Body”;
  • Martin Nisser, EECS, "Towards Personalized On-Demand Manufacturing";
  • Andi Qu, EECS, "Revolutionizing Online Learning with Digital Twins";
  • David Bradford Ramsay, Media Lab, “The Perils and Promises of Closed Loop Engagement”;
  • Shuvom Sadhuka, EECS, “Overcoming the False Trade-off in Genomics: Privacy and Collaboration”; and
  • Leonard Schrage, DUSP, “Embodied-Carbon-Computing.”

The Envisioning the Future of Computing Prize was supported by MAC3 Impact Philanthropies.


Science News

Century of Science: The future of computing

Everywhere and invisible

You are likely reading this on a computer. You are also likely taking that fact for granted. That’s even though the device in front of you would have astounded computer scientists just a few decades ago, and seemed like sheer magic much before that. It contains billions of tiny computing elements, running millions of lines of software instructions, collectively written by countless people across the globe. The result: You click or tap or type or speak, and the result seamlessly appears on the screen.

Computers once filled rooms. Now they’re everywhere and invisible, embedded in watches, car engines, cameras, televisions and toys. They manage electrical grids, analyze scientific data and predict the weather. The modern world would be impossible without them, and our dependence on them for health, prosperity and entertainment will only increase.

Scientists hope to make computers faster yet, to make programs more intelligent and to deploy technology in an ethical manner. But before looking at where we go from here, let’s review where we’ve come from.

In 1833, the English mathematician Charles Babbage conceived a programmable machine that presaged today’s computing architecture, featuring a “store” for holding numbers, a “mill” for operating on them, an instruction reader and a printer. This Analytical Engine also had logical functions like branching (if X, then Y). Babbage constructed only a piece of the machine, but based on its description, his acquaintance Ada Lovelace saw that the numbers it might manipulate could represent anything, even music, making it much more general-purpose than a calculator. “A new, a vast, and a powerful language is developed for the future use of analysis,” she wrote. She became an expert in the proposed machine’s operation and is often called the first programmer.

In 1936, the English mathematician Alan Turing introduced the idea of a computer that could rewrite its own instructions, making it endlessly programmable. His mathematical abstraction could, using a small vocabulary of operations, mimic a machine of any complexity, earning it the name “universal Turing machine.”

The first reliable electronic digital computer, Colossus, was completed in 1943, to help England decipher wartime codes. It used vacuum tubes — devices for controlling the flow of electrons — instead of moving mechanical parts like the Analytical Engine’s cogwheels. This made Colossus fast, but engineers had to manually rewire it every time they wanted to perform a new task. Perhaps inspired by Turing’s concept of a more easily reprogrammable computer, the team that created the United States’ first electronic digital computer, ENIAC, drafted a new architecture for its successor, the EDVAC. The mathematician John von Neumann, who penned the EDVAC’s design in 1945, described a system that could store programs in its memory alongside data and alter the programs, a setup now called the von Neumann architecture. Nearly every computer today follows that paradigm.

In 1947, researchers at Bell Telephone Laboratories invented the transistor, a piece of circuitry in which the application of voltage (electrical pressure) or current controls the flow of electrons between two points. It came to replace the slower and less efficient vacuum tubes. In 1958 and 1959, researchers at Texas Instruments and Fairchild Semiconductor independently invented integrated circuits, in which transistors and their supporting circuitry were fabricated on a chip in one process.

For a long time, only experts could program computers. Then in 1957, IBM released FORTRAN, a programming language that was much easier to understand. It’s still in use today. In 1981 the company unveiled the IBM PC and Microsoft released its operating system called MS-DOS, together expanding the reach of computers into homes and offices. Apple further personalized computing with the operating systems for its Lisa, in 1982, and Macintosh, in 1984. Both systems popularized graphical user interfaces, or GUIs, offering users a mouse cursor instead of a command line.

Meanwhile, researchers had been doing work that would end up connecting our newfangled hardware and software. In 1948, the mathematician Claude Shannon published “A Mathematical Theory of Communication,” a paper that popularized the word bit (for binary digit) and laid the foundation for information theory. His ideas have shaped computation and in particular the sharing of data over wires and through the air. In 1969, the U.S. Advanced Research Projects Agency created a computer network called ARPANET, which later merged with other networks to form the internet. In 1990, researchers at CERN — a European laboratory near Geneva, Switzerland — developed rules for transmitting data that would become the foundation of the World Wide Web.

Better hardware, better software and better communication have now connected most of the people on the planet. But how much better can the processors get? How smart can algorithms become? And what kinds of benefits and dangers should we expect to see as technology advances? Stuart Russell, a computer scientist at University of California, Berkeley and coauthor of a popular textbook on artificial intelligence, sees great potential for computers in “expanding artistic creativity, accelerating science, serving as diligent personal assistants, driving cars and — I hope — not killing us.” — Matthew Hutson

Chasing speed

Computers, for the most part, speak the language of bits. They store information — whether it’s music, an application or a password — in strings of 1s and 0s. They also process information in a binary fashion, flipping transistors between an “on” and “off” state. The more transistors in a computer, the faster it can process bits, making possible everything from more realistic video games to safer air traffic control.

Combining transistors forms one of the building blocks of a circuit, called a logic gate. An AND logic gate, for example, is on if both inputs are on, while an OR is on if at least one input is on. Together, logic gates compose a complex traffic pattern of electrons, the physical manifestation of computation. A computer chip can contain millions of such logic gates.
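
To make the idea concrete, here is a minimal Python sketch (purely illustrative; real chips implement these operations in silicon, not software) that composes AND, OR and NOT into an XOR gate and a one-bit half adder, one of the simplest arithmetic circuits:

```python
# Toy logic gates built from Boolean operations, standing in for transistor
# on/off states. Illustrative only.

def AND(a, b):
    return a and b

def OR(a, b):
    return a or b

def NOT(a):
    return not a

def XOR(a, b):
    # XOR is on when exactly one input is on.
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

def half_adder(a, b):
    # Adds two bits, returning a sum bit and a carry bit.
    return XOR(a, b), AND(a, b)

for a in (False, True):
    for b in (False, True):
        s, c = half_adder(a, b)
        print(f"{int(a)} + {int(b)} -> sum={int(s)}, carry={int(c)}")
```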

So the more logic gates, and by extension the more transistors, the more powerful the computer. In 1965, Gordon Moore, a cofounder of Fairchild Semiconductor and later of Intel, published a paper on the future of chips titled “Cramming More Components onto Integrated Circuits.” He graphed the number of components (mostly transistors) on five integrated circuits (chips) that had been built from 1959 to 1965, and extended the line. Transistors per chip had doubled every year, and he expected the trend to continue.

In a 1975 talk, Moore identified three factors behind this exponential growth: smaller transistors, bigger chips and “device and circuit cleverness,” such as less wasted space. He expected the doubling to occur every two years. It did, and continued doing so for decades. That trend is now called Moore’s law.
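
The arithmetic behind that doubling is easy to sketch. The snippet below assumes roughly 2,000 transistors per chip in 1971 and a clean two-year doubling cadence; both numbers are simplifications for illustration, not precise historical data:

```python
# Back-of-the-envelope Moore's law projection (illustrative assumptions).
transistors = 2_000
for year in range(1971, 2022, 2):
    print(f"{year}: ~{transistors:,} transistors per chip")
    transistors *= 2   # double every two years
```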

Moore’s law is not a physical law, like Newton’s law of universal gravitation. It was meant as an observation about economics. There will always be incentives to make computers faster and cheaper — but at some point, physics interferes. Chip development can’t keep up with Moore’s law forever, as it becomes more difficult to make transistors tinier. According to what’s jokingly called Moore’s second law, the cost of chip fabrication plants, or “fabs,” doubles every few years. The semiconductor company TSMC has considered building a plant that would cost $25 billion.

Today, Moore’s law no longer holds; doubling is happening at a slower rate. We continue to squeeze more transistors onto chips with each generation, but the generations come less frequently. Researchers are looking into several ways forward: better transistors, more specialized chips, new chip concepts and software hacks.  

Computer performance from 1985 through 2015

Until about 2005, the ability to squeeze more transistors onto each chip meant exponential improvements in computer performance (black and gray show an industry benchmark for computers with one or more “cores,” or processors). Likewise, clock frequency (green) — the number of cycles of operations performed per second — improved exponentially. Since the end of this “Dennard scaling” era, transistors have continued to shrink, but that shrinking hasn’t yielded the same performance benefits.

Transistors

Transistors can get smaller still. Conceptually, a transistor consists of three basic elements. A metal gate (different from the logic gates above) lies across the middle of a semiconductor, one side of which acts as an electron source and the other side as a drain. Current passes from source to drain, and then on down the road, when the gate has a certain voltage. Many transistors are of a design called FinFET, because the channel from source to drain sticks up like a fin or a row of fins. The gate is like a larger, perpendicular wall that the fins pass through. It touches each fin on both sides and the top.

But, according to Sanjay Natarajan, who leads transistor design at Intel, “we’ve squeezed, we believe, everything you can squeeze out of that architecture.” In the next few years, chip manufacturers will start producing gate-all-around transistors, in which the channel resembles vertically stacked wires or ribbons penetrating the gate. These transistors will be faster and require less energy and space.

New transistor designs, a shift from the common FinFET (left) to gate-all-around transistors (right), for example, can make transistors that are smaller, faster and require less energy.

As these components have shrunk, the terminology to describe their size has gotten more confusing. You sometimes hear about chips being “14 nanometers” or “10 nanometers” in size; top-of-the-line chips in 2021 are “5 nanometers.” These numbers do not refer to the width or any other dimension of a transistor. They used to refer to the size of particular transistor features, but for several years now they have been nothing more than marketing terms.

Chip design

Even if transistors were to stop shrinking, computers would still have a lot of runway to improve, through Moore’s “device and circuit cleverness.”

A large hindrance to speeding up chips is the amount of heat they produce while moving electrons around. Too much and they’ll melt. For years, Moore’s law was accompanied by Dennard scaling, named after electrical engineer Robert Dennard, who said that as transistors shrank, they would also become faster and more energy efficient. That was true until around 2005, when they became so thin that they leaked too much current, heating up the chip. Since then, computer clock speed — the number of cycles of operations performed per second — hasn’t increased beyond a few gigahertz.

Computers are limited in how much power they can draw and in how much heat they can disperse. Since the mid-2000s, according to Tom Conte, a computer scientist at Georgia Tech in Atlanta who co-leads the IEEE Rebooting Computing Initiative, “power savings has been the name of the game.” So engineers have turned to making chips perform several operations simultaneously, or splitting a chip into multiple parallel “cores,” to eke more operations from the same clock speed. But programming for parallel circuits is tricky.

Another speed bump is that electrons often have to travel long distances between logic gates or between chips — which also produces a lot of heat. One solution to the delays and heat production of data transmission is to move transistors closer together. Some nascent efforts have looked at stacking them vertically. More near-term, others are stacking whole chips vertically. Another solution is to replace electrical wiring with fiber optics, as light transmits information faster and more efficiently than electrical current does.

Increasingly, computers rely on specialized chips or regions of a chip, called accelerators. Arranging transistors differently can put them to better use for specific applications. A cell phone, for instance, may have different circuitry designed for processing graphics, sound, wireless transmission and GPS signals.

“Sanjay [Natarajan] leads the parts of Intel that deliver transistors and transistor technologies,” says Richard Uhlig, managing director of Intel Labs. “We figure out what to do with the transistors,” he says of his team. One type of accelerator they’re developing is for what’s called fully homomorphic encryption, in which a computer processes data while it’s still encrypted — useful for, say, drawing conclusions about a set of medical records without revealing personal information. The project, funded by DARPA, could speed homomorphic encryption by hundreds of times.

More than 200 start-ups are developing accelerators for artificial intelligence, finding faster ways to perform the calculations necessary for software to learn from data.

Some accelerators aim to mimic, in hardware, the brain’s wiring. These “neuromorphic” chips typically embody at least one of three properties. First, memory elements may sit very close to computing elements, or the same elements may perform both functions, the way neurons both store and process information. One type of element that can perform this feat is the memristor. Second, the chips may process information using “spikes.” Like neurons, the elements sit around waiting for something to happen, then send a signal, or spike, when their activation crosses a threshold. Third, the chips may be analog instead of digital, eliminating the need for encoding continuous electrical properties such as voltage into discrete 1s and 0s.

These neuromorphic properties can make processing certain types of information orders of magnitude faster and more energy efficient. The computations are often less precise than in standard chips, but fuzzy logic is acceptable for, say, pattern matching or finding approximate solutions quickly. Uhlig says Intel has used its neuromorphic chip Loihi in tests to process odors, control robots and optimize railway schedules so that many trains can share limited tracks.
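
The spiking behavior described above can be approximated in a few lines of code. The sketch below is a toy leaky integrate-and-fire neuron, a common simplified model of the spiking elements that neuromorphic chips emulate; the threshold and leak values are arbitrary choices for illustration:

```python
# Toy leaky integrate-and-fire neuron: integrate inputs, leak a little each
# step, and emit a spike (then reset) when the potential crosses a threshold.
def simulate(inputs, threshold=1.0, leak=0.9):
    potential, spikes = 0.0, []
    for x in inputs:
        potential = potential * leak + x
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0
        else:
            spikes.append(0)
    return spikes

print(simulate([0.3, 0.4, 0.5, 0.1, 0.0, 0.9, 0.9]))  # [0, 0, 1, 0, 0, 0, 1]
```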

Some types of accelerators might one day use quantum computing, which capitalizes on two features of the subatomic realm. The first is superposition, in which particles can exist not just in one state or another, but in some combination of states until the state is explicitly measured. So a quantum system represents information not as bits but as qubits, which can preserve the possibility of being either 0 or 1 when measured. The second is entanglement, the interdependence between distant quantum elements. Together, these features mean that a system of qubits can represent and evaluate exponentially more possibilities than there are qubits — all combinations of 1s and 0s simultaneously.

Qubits can take many forms, but one of the most popular is as current in superconducting wires. These wires must be kept at a fraction of a degree above absolute zero, around –273° Celsius, to prevent hot, jiggling atoms from interfering with the qubits’ delicate superpositions and entanglement. Quantum computers also need many physical qubits to make up one “logical,” or effective, qubit, with the redundancy acting as error correction.

Quantum computers have several potential applications: machine learning, optimization (like train scheduling) and simulating real-world quantum mechanics, as in chemistry. But they will not likely become general-purpose computers. It’s not clear how you’d use one to, say, run a word processor.
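
A rough sense of why qubits are powerful, and why simulating them classically is hard, comes from the bookkeeping alone: the state of n qubits is a vector of 2^n complex amplitudes. The sketch below only illustrates that bookkeeping; it is not a quantum algorithm:

```python
import numpy as np

def equal_superposition(n_qubits):
    # Every combination of 0s and 1s gets the same amplitude.
    dim = 2 ** n_qubits
    return np.full(dim, 1 / np.sqrt(dim), dtype=complex)

for n in (1, 2, 10, 30):
    print(f"{n} qubits -> state vector of {2**n:,} amplitudes")

# Measurement probabilities are the squared magnitudes of the amplitudes.
print(np.abs(equal_superposition(2)) ** 2)   # [0.25 0.25 0.25 0.25]
```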

New chip concepts

There remain new ways to dramatically speed up not just specialized accelerators but also general-purpose chips. Conte points to two paradigms. The first is superconduction. Below about 4 kelvins, around –269° C, many metals lose almost all electrical resistance, so they won’t convert current into heat. A superconducting circuit might be able to operate at hundreds of gigahertz instead of just a few, using much less electricity. The hard part lies not in keeping the circuits refrigerated (at least in big data centers), but in working with the exotic materials required to build them. 

The second paradigm is reversible computing. In 1961, the physicist Rolf Landauer merged information theory and thermodynamics, the physics of heat. He noted that when a logic gate takes in two bits and outputs one, it destroys a bit, expelling it as entropy, or randomness, in the form of heat. When billions of transistors operate at billions of cycles per second, the wasted heat adds up. Michael Frank, a computer scientist at Sandia National Laboratories in Albuquerque who works on reversible computing, wrote in 2017: “A conventional computer is, essentially, an expensive electric heater that happens to perform a small amount of computation as a side effect.”

But in reversible computing, logic gates have as many outputs as inputs. This means that if you ran the logic gate in reverse, you could use, say, three out-bits to obtain the three in-bits. Some researchers have conceived of reversible logic gates and circuits that could not only save those extra out-bits but also recycle them for other calculations. The physicist Richard Feynman had concluded that, aside from energy loss during data transmission, there’s no theoretical limit to computing efficiency.
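
A minimal sketch of what “as many outputs as inputs” buys you is the controlled-NOT (CNOT) gate: two bits in, two bits out, and applying it twice recovers the original input, so no information is destroyed along the way. (This only shows the logic; building reversible hardware is another matter.)

```python
# CNOT: flip the target bit if and only if the control bit is 1.
def cnot(control, target):
    return control, target ^ control

for a in (0, 1):
    for b in (0, 1):
        once = cnot(a, b)
        twice = cnot(*once)          # applying the gate again undoes it
        print(f"({a}, {b}) -> {once} -> back to {twice}")
```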

Combine reversible and superconducting computing, Conte says, and “you get a double whammy.” Efficient computing allows you to run more operations on the same chip without worrying about power use or heat generation. Conte says that, eventually, one or both of these methods “probably will be the backbone of a lot of computing.”

Software hacks

Researchers continue to work on a cornucopia of new technologies for transistors, other computing elements, chip designs and hardware paradigms: photonics, spintronics, biomolecules, carbon nanotubes. But much more can still be eked out of current elements and architectures merely by optimizing code.

In a 2020 paper in Science, for instance, researchers studied the simple problem of multiplying two matrices, grids of numbers used in mathematics and machine learning. The calculation ran more than 60,000 times faster when the team picked an efficient programming language and optimized the code for the underlying hardware, compared with a standard piece of code in the Python language, which is considered user-friendly and easy to learn.
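
The flavor of that gap can be glimpsed even on a laptop by comparing a naive pure-Python triple loop with NumPy, whose matrix routines are compiled and tuned for the underlying hardware. This is only a rough sketch, not the paper's actual benchmark:

```python
import time
import numpy as np

n = 200
A, B = np.random.rand(n, n), np.random.rand(n, n)

def naive_matmul(A, B):
    # Textbook triple loop, executed one Python operation at a time.
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += A[i][k] * B[k][j]
            C[i][j] = s
    return C

t0 = time.perf_counter(); naive_matmul(A, B); t1 = time.perf_counter()
t2 = time.perf_counter(); A @ B; t3 = time.perf_counter()
print(f"pure Python: {t1 - t0:.3f} s, NumPy: {t3 - t2:.5f} s")
```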

Computing gains through hardware and algorithm improvement

Hardware isn’t the only way computing speeds up. Advances in the algorithms — the computational procedures for achieving a result — can lend a big boost to performance. The graph above shows the relative number of problems that can be solved in a fixed amount of time for one type of algorithm. The black line shows gains over time from hardware and algorithm advances; the purple line shows gains from hardware improvements alone.

Neil Thompson, a research scientist at MIT who coauthored the Science paper, recently coauthored a paper looking at historical improvements in algorithms, abstract procedures for tasks like sorting data. “For a substantial minority of algorithms,” he says, “their progress has been as fast or faster than Moore’s law.”

People have predicted the end of Moore’s law for decades. Even Moore has predicted its end several times. Progress may have slowed, at least for the time being, but human innovation, accelerated by economic incentives, has kept technology moving at a fast clip. — Matthew Hutson

Chasing intelligence

From the early days of computer science, researchers have aimed to replicate human thought. Alan Turing opened a 1950 paper titled “Computing Machinery and Intelligence” with: “I propose to consider the question, ‘Can machines think?’” He proceeded to outline a test, which he called “the imitation game” (now called the Turing test), in which a human communicating with a computer and another human via written questions had to judge which was which. If the judge failed, the computer could presumably think.

The term “artificial intelligence” was coined in a 1955 proposal for a summer institute at Dartmouth College. “An attempt will be made,” the proposal goes, “to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves.” The organizers expected that over two months, the 10 summit attendees would make a “significant advance.”

More than six decades and untold person-hours later, it’s unclear whether the advances live up to what was in mind at that summer summit. Artificial intelligence surrounds us, in ways invisible (filtering spam), headline-worthy (beating us at chess, driving cars) and in between (letting us chat with our smartphones). But these are all narrow forms of AI, performing one or two tasks well. What Turing and others had in mind is called artificial general intelligence, or AGI. Depending on your definition, it’s a system that can do most of what humans do.

We may never achieve AGI, but the path has led, and will lead, to lots of useful innovations along the way. “I think we’ve made a lot of progress,” says Doina Precup, a computer scientist at McGill University in Montreal and head of AI company DeepMind’s Montreal research team. “But one of the things that, to me, is still missing right now is more of an understanding of the principles that are fundamental in intelligence.”

AI has made great headway in the last decade, much of it due to machine learning. Previously, computers relied more heavily on symbolic AI, which uses algorithms, or sets of instructions, that make decisions according to manually specified rules. Machine-learning programs, on the other hand, process data to find patterns on their own. One form uses artificial neural networks, software with layers of simple computing elements that together mimic certain principles of biological brains. Neural networks with several, or many more, layers are currently popular and make up a type of machine learning called deep learning.
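
As a bare-bones illustration of the “layers of simple computing elements” just described, here is a forward pass through a tiny two-layer network in NumPy. The weights are random; an actual system would learn them from data:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(4)                    # a 4-feature input
W1 = rng.normal(size=(8, 4))         # layer 1: 8 hidden units
W2 = rng.normal(size=(3, 8))         # layer 2: 3 output classes

hidden = np.maximum(0, W1 @ x)       # ReLU: keep only positive activations
logits = W2 @ hidden
probs = np.exp(logits) / np.exp(logits).sum()   # softmax into probabilities
print(probs, probs.sum())            # three probabilities summing to 1
```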

Deep-learning systems can now play games like chess and Go better than the best human. They can probably identify dog breeds from photos better than you can. They can translate text from one language to another. They can control robots and compose music and predict how proteins will fold.

But they also lack much of what falls under the umbrella term of common sense. They don’t understand fundamental things about how the world works, physically or socially. Slightly changing images in a way that you or I might not notice, for example, can dramatically affect what a computer sees. Researchers found that placing a few innocuous stickers on a stop sign can lead software to interpret the sign as a speed limit sign, an obvious problem for self-driving cars.

Types of learning

How can AI improve? Computer scientists are leveraging multiple forms of machine learning, whether the learning is “deep” or not. One common form is called supervised learning, in which machine-learning systems, or models, are trained by being fed labeled data such as images of dogs and their breed names. But that requires lots of human effort to label them. Another approach is unsupervised or self-supervised learning, in which computers learn without relying on outside labels, the way you or I predict what a chair will look like from different angles as we walk around it.

Models that process billions of words of text, predicting the next word one at a time and changing slightly when they’re wrong, rely on unsupervised learning. They can then generate new strings of text. In 2020, the research lab OpenAI released a trained language model called GPT-3 that’s perhaps the most complex neural network ever. Based on prompts, it can write humanlike news articles, short stories and poems. It can answer trivia questions, write computer code and translate language — all without being specifically trained to do any of these things. It’s further down the path toward AGI than many researchers thought was currently possible. And language models will get bigger and better from here.
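
The training signal behind “predict the next word” can be shown with a toy counting model. Real systems like GPT-3 use neural networks with billions of parameters rather than count tables; this sketch only illustrates how raw text supplies its own labels:

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

follows = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current].append(nxt)     # each (word, next word) pair is a free label

word, generated = "the", ["the"]
for _ in range(6):
    word = random.choice(follows.get(word, corpus))   # sample a continuation
    generated.append(word)
print(" ".join(generated))
```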

Another type of machine learning is reinforcement learning, in which a model interacts with an environment, exploring sequences of actions to achieve a goal. Reinforcement learning has allowed AI to become expert at board games like Go and video games like StarCraft II. A recent paper by researchers at DeepMind, including Precup, argues in the title that “Reward Is Enough.” By merely having a training algorithm reinforce a model’s successful or semi-successful behavior, models will incrementally build up all the components of intelligence needed to succeed at the given task and many others.

For example, according to the paper, a robot rewarded for maximizing kitchen cleanliness would eventually learn “perception (to differentiate clean and dirty utensils), knowledge (to understand utensils), motor control (to manipulate utensils), memory (to recall locations of utensils), language (to predict future mess from dialogue) and social intelligence (to encourage young children to make less mess).” Whether trial and error would lead to such skills within the life span of the solar system — and what kinds of goals, environment and model would be required — is to be determined.
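
Stripped to its core, reinforcement learning looks like the sketch below: a made-up five-cell corridor in which the agent is rewarded only for reaching the last cell, and a tabular Q-learning update that gradually propagates that reward backward. The environment, rewards and hyperparameters are invented for illustration:

```python
import random

n_states, actions = 5, (-1, +1)                  # move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def greedy(s):
    # Break ties randomly so the untrained agent still explores.
    return max(actions, key=lambda a: (Q[(s, a)], random.random()))

for _ in range(500):                             # episodes
    s = 0
    while s != 4:
        a = random.choice(actions) if random.random() < epsilon else greedy(s)
        s_next = min(max(s + a, 0), n_states - 1)
        reward = 1.0 if s_next == 4 else 0.0
        best_next = max(Q[(s_next, b)] for b in actions)
        Q[(s, a)] += alpha * (reward + gamma * best_next - Q[(s, a)])
        s = s_next

print({s: greedy(s) for s in range(4)})          # learned policy: move right
```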

Another type of learning involves Bayesian statistics, a way of estimating what conditions are likely given current observations. Bayesian statistics is helping machines identify causal relations, an essential skill for advanced intelligence.
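
Bayes’ rule itself fits in a few lines. The numbers below (1 percent prevalence, 90 percent sensitivity, 5 percent false-positive rate) are invented for illustration; the point is how a prior belief gets updated by an observation:

```python
prior = 0.01            # P(condition)
sensitivity = 0.90      # P(positive test | condition)
false_positive = 0.05   # P(positive test | no condition)

p_positive = sensitivity * prior + false_positive * (1 - prior)
posterior = sensitivity * prior / p_positive     # Bayes' rule
print(f"P(condition | positive test) = {posterior:.1%}")   # about 15%
```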

Generalizing

To learn efficiently, machines (and people) need to generalize, to draw abstract principles from experiences. “A huge part of intelligence,” says Melanie Mitchell, a computer scientist at the Santa Fe Institute in New Mexico, “is being able to take one’s knowledge and apply it in different situations.” Much of her work involves analogies, in a most rudimentary form: finding similarities between strings of letters. In 2019, AI researcher François Chollet of Google created a kind of IQ test for machines called the Abstraction and Reasoning Corpus, or ARC, in which computers must complete visual patterns according to principles demonstrated in example patterns. The puzzles are easy for humans but so far challenging for machines. Eventually, AI might understand grander abstractions like love and democracy.

Machine IQ test

In a kind of IQ test for machines, computers are challenged to complete a visual patterning task based on examples provided. In each of these three tasks, computers are given “training examples” (both the problem, left, and the answer, right) and then have to determine the answer for “test examples.” The puzzles are typically much easier for humans than for machines.

Much of our abstract thought, ironically, may be grounded in our physical experiences. We use conceptual metaphors like important = big, and argument = opposing forces. AGI that can do most of what humans can do may require embodiment, such as operating within a physical robot. Researchers have combined language learning and robotics by creating virtual worlds where virtual robots simultaneously learn to follow instructions and to navigate within a house. GPT-3 is evidence that disembodied language may not be enough. In one demo, it wrote: “It takes two rainbows to jump from Hawaii to seventeen.”

“I’ve played around a lot with it,” Mitchell says. “It does incredible things. But it can also make some incredibly dumb mistakes.”

AGI might also require other aspects of our animal nature, like emotions, especially if humans expect to interact with machines in natural ways. Emotions are not mere irrational reactions. We’ve evolved them to guide our drives and behaviors. According to Ilya Sutskever, a cofounder and the chief scientist at OpenAI, they “give us this extra oomph of wisdom.” Even if AI doesn’t have the same conscious feelings we do, it may have code that approximates fear or anger. Already, reinforcement learning includes an exploratory element akin to curiosity.

One function of curiosity is to help learn causality, by encouraging exploration and experimentation, Precup says. However, current exploration methods in AI “are still very far from babies playing purposefully with objects,” she notes.

Humans aren’t blank slates. We’re born with certain predispositions to recognize faces, learn language and play with objects. Machine-learning systems also require the right kind of innate structure to learn certain things quickly. How much structure, and what kind, is a matter of intense debate in the field. Sutskever says building in how we think we think is “intellectually seductive,” and he leans toward blank slates. However, “we want the best blank slate.”

One general neural-network structure Sutskever likes is called the transformer, a method for paying greater attention to important relationships between elements of an input. It’s behind current language models like GPT-3, and has also been applied to analyzing images, audio and video. “It makes everything better,” he says.
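
The core of that attention operation is compact enough to sketch in NumPy: every element of a sequence is scored against every other, and the scores weight how much each element contributes. Shapes and values here are arbitrary; a real transformer adds learned projections, multiple heads and many layers:

```python
import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(Q.shape[-1])          # pairwise similarities
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over each row
    return weights @ V                               # weighted mix of values

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))          # 4 tokens, 8 dimensions each
out = attention(X, X, X)             # self-attention: the sequence attends to itself
print(out.shape)                     # (4, 8)
```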

Thinking about thinking

AI itself may help us discover new forms of AI. There’s a set of techniques called AutoML, in which algorithms help optimize neural-network architectures or other aspects of AI models. AI also helps chip architects design better integrated circuits. In 2021, Google researchers reported in Nature that reinforcement learning performed better than their in-house team at laying out some aspects of an accelerator chip they’d designed for AI.

Estimates of AGI’s proximity vary greatly, but most experts think it’s decades away. In a 2016 survey, 352 machine-learning researchers estimated the arrival of “high-level machine intelligence,” defined as “when unaided machines can accomplish every task better and more cheaply than human workers.” On average, they gave even odds of such a feat by around 2060.

But no one has a good basis for judging. “We don’t understand our own intelligence,” Mitchell says, as much of it is unconscious. “And therefore, we don’t know what’s going to be hard or easy for AI.” What seems hard can be easy and vice versa — a phenomenon known as Moravec’s paradox, after the roboticist Hans Moravec. In 1988, Moravec wrote, “it is comparatively easy to make computers exhibit adult-level performance in solving problems on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a 1-year-old when it comes to perception and mobility.” Babies are secretly brilliant. In aiming for AGI, Precup says, “we are also understanding more about human intelligence, and about intelligence in general.”

The gap between organic and synthetic intelligence sometimes seems small because we anthropomorphize machines, spurred by computer science terms like intelligence, learning and vision. Aside from whether we even want humanlike machine intelligence — if they think just like us, won’t they essentially just be people, raising ethical and practical dilemmas? — such a thing may not be possible. Even if AI becomes broad, it may still have unique strengths and weaknesses.

Turing also differentiated between general intelligence and humanlike intelligence. In his 1950 paper on the imitation game, he wrote, “May not machines carry out something which ought to be described as thinking but which is very different from what a man does?” — Matthew Hutson

Ethical issues

In the 1942 short story “Runaround,” one of Isaac Asimov’s characters enumerated “the three fundamental Rules of Robotics — the three rules that are built most deeply into a robot’s positronic brain.” Robots avoided causing or allowing harm to humans, they obeyed orders and they protected themselves, as long as following one rule didn’t conflict with preceding decrees.

We might picture Asimov’s “positronic brains” making autonomous decisions about harm to humans, but that’s not actually how computers affect our well-being every day. Instead of humanoid robots killing people, we have algorithms curating news feeds. As computers further infiltrate our lives, we’ll need to think harder about what kinds of systems to build and how to deploy them, as well as meta-problems like how to decide — and who should decide — these things.

This is the realm of ethics, which may seem distant from the supposed objectivity of math, science and engineering. But deciding what questions to ask about the world and what tools to build has always depended on our ideals and scruples. Studying an abstruse topic like the innards of atoms, for instance, has clear bearing on both energy and weaponry. “There’s the fundamental fact that computer systems are not value neutral,” says Barbara Grosz, a computer scientist at Harvard University, “that when you design them, you bring some set of values into that design.”

One topic that has received a lot of attention from scientists and ethicists is fairness and bias. Algorithms increasingly inform or even dictate decisions about hiring, college admissions, loans and parole. Even if they discriminate less than people do, they can still treat certain groups unfairly, not by design but often because they are trained on biased data. They might predict a person’s future criminal behavior based on prior arrests, for instance, even though different groups are arrested at different rates for a given amount of crime.

Charts: estimated percent of Oakland residents using drugs, and percent of the population that would be targeted by predictive policing

A predictive policing algorithm tested in Oakland, Calif., would target Black people at roughly twice the rate of white people (right) even though data from the same time period, 2011, show that drug use was roughly equivalent across racial groups (left).

And confusingly, there are multiple definitions of fairness, such as equal false-positive rates between groups or equal false-negative rates between groups. A researcher at one conference listed 21 definitions. And the definitions often conflict. In one paper, researchers showed that in most cases it’s mathematically impossible to satisfy three common definitions simultaneously.
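
Two of those definitions are easy to compute, which also makes it easy to see them disagree. The toy labels and predictions below are invented so that the two groups end up with different false-positive and false-negative rates:

```python
def rates(y_true, y_pred):
    # False positives: predicted 1 when the true label is 0; false negatives: the reverse.
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    return fp / y_true.count(0), fn / y_true.count(1)

groups = {
    "A": ([0, 0, 0, 1, 1, 1], [0, 0, 1, 1, 1, 0]),   # (true labels, predictions)
    "B": ([0, 0, 0, 1, 1, 1], [0, 1, 1, 1, 1, 1]),
}
for name, (y_true, y_pred) in groups.items():
    fpr, fnr = rates(y_true, y_pred)
    print(f"group {name}: false-positive rate {fpr:.2f}, false-negative rate {fnr:.2f}")
```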

Another concern is privacy and surveillance, given that computers can now gather and sort information about their users in a way previously unimaginable. Data on our online behavior can help predict aspects of our private lives, like sexuality. Facial recognition can also follow us around the real world, helping police or authoritarian governments. And the emerging field of neurotechnology is already testing ways to connect the brain directly to computers. Related to privacy is security — hackers can access data that’s locked away, or interfere with pacemakers and autonomous vehicles.

Computers can also enable deception. AI can generate content that looks real. Language models might write masterpieces or be used to fill the internet with fake news and recruiting material for extremist groups. Generative adversarial networks, a type of deep learning that can generate realistic content, can assist artists or create deepfakes, images or videos showing people doing things they never did.

On social media, we also need to worry about polarization in people’s social, political and other views. Generally, recommendation algorithms optimize engagement (and platform profit through advertising), not civil discourse. Algorithms can also manipulate us in other ways. Robo-advisers — chatbots for dispensing financial advice or providing customer support — might learn to know what we really need, or to push our buttons and upsell us on extraneous products.

Multiple countries are developing autonomous weapons that have the potential to reduce civilian casualties as well as escalate conflict faster than their minders can react. Putting guns or missiles in the hands of robots raises the sci-fi specter of Terminators attempting to eliminate humankind. They might even think they’re helping us, because eliminating humankind also eliminates human cancer (an example of having no common sense). More near-term, automated systems let loose in the real world have already caused flash crashes in the stock market and Amazon book prices reaching into the millions. If AIs are charged with making life-and-death decisions, they then face the famous trolley problem, deciding whom or what to sacrifice when not everyone can win. Here we’re entering Asimov territory.

That’s a lot to worry about. Russell, of UC Berkeley, suggests where our priorities should lie: “Lethal autonomous weapons are an urgent issue, because people may have already died, and the way things are going, it’s only a matter of time before there’s a mass attack,” he says. “Bias and social media addiction and polarization are both arguably instances of failure of value alignment between algorithms and society, so they are giving us early warnings of how things can easily go wrong.” He adds, “I don’t think trolley problems are urgent at all.”

There are also social, political and legal questions about how to manage technology in society. Who should be held accountable when an AI system causes harm? (For instance, “confused” self-driving cars have killed people.) How can we ensure more equal access to the tools of AI and their benefits, and make sure they don’t harm some groups much more than others? How will automating jobs upend the labor market? Can we manage the environmental impact of data centers, which use a lot of electricity? (Bitcoin mining is responsible for as many tons of carbon dioxide emissions as a small country.) Should we preferentially employ explainable algorithms — rather than the black boxes of many neural networks — for greater trust and debuggability, even if it makes the algorithms poorer at prediction?

What can be done

Michael Kearns, a computer scientist at the University of Pennsylvania and coauthor of The Ethical Algorithm , puts the problems on a spectrum of manageability. At one end is what’s called differential privacy, the ability to add noise to a dataset of, say, medical records so that it can be shared usefully with researchers without revealing much about the individual records. We can now make mathematical guarantees about exactly how private individuals’ data should remain.
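
The basic mechanism is simple enough to sketch: answer a counting query, then add noise drawn from a Laplace distribution scaled to the privacy budget, so that any one person's presence or absence barely changes what gets published. The records and the epsilon value below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng()

def private_count(records, predicate, epsilon=0.5):
    true_count = sum(predicate(r) for r in records)
    sensitivity = 1                               # one person changes a count by at most 1
    noise = rng.laplace(0, sensitivity / epsilon) # smaller epsilon means more noise, more privacy
    return true_count + noise

ages = [34, 61, 45, 29, 52, 70, 48, 33]
print(private_count(ages, lambda age: age > 50))  # noisy answer near the true count of 3
```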

Somewhere in the middle of the spectrum is fairness in machine learning. Researchers have developed methods to increase fairness by removing or altering biased training data, or to maximize certain types of equality — in loans, for instance — while minimizing reduction in profit. Still, some types of fairness will forever be in mutual conflict, and math can’t tell us which ones we want.

At the far end is explainability. As opposed to fairness, which can be analyzed mathematically in many ways, the quality of an explanation is hard to describe in mathematical terms. “I feel like I haven’t seen a single good definition yet,” Kearns says. “You could say, ‘Here’s an algorithm that will take a trained neural network and try to explain why it rejected you for a loan,’ but [the explanation] doesn’t feel principled.”

Explanation methods include generating a simpler, interpretable model that approximates the original, or highlighting regions of an image a network found salient, but these are just gestures toward how the cryptic software computes. Even worse, systems can provide intentionally deceptive explanations, to make unfair models look fair to auditors. Ultimately, if the audience doesn’t understand it, it’s not a good explanation, and measuring its success — however you define success — requires user studies.

Something like Asimov’s three laws won’t save us from robots that hurt us while trying to help us; stepping on your phone when you tell it to hurry up and get you a drink is a likely example. And even if the list were extended to a million laws, the letter of a law is not identical to its spirit. One possible solution is what’s called inverse reinforcement learning, or IRL. In reinforcement learning, a model learns behaviors to achieve a given goal. In IRL, it infers someone’s goal by observing their behavior. We can’t always articulate our values — the goals we ultimately care about — but AI might figure them out by watching us. If we have coherent goals, that is.

“Perhaps the most obvious preference is that we prefer to be alive,” says Russell, who has pioneered IRL. “So an AI agent using IRL can avoid courses of action that cause us to be dead. In case this sounds too trivial, remember that not a single one of the prototype self-driving cars knows that we prefer to be alive. The self-driving car may have rules that in most cases prohibit actions that cause death, but in some unusual circumstance — such as filling a garage with carbon monoxide — they might watch the person collapse and die and have no notion that anything was wrong.”

Digital lives

In 2021, Facebook unveiled its vision for a metaverse, a virtual world where people would work and play. “As so many have made clear, this is what technology wants,” says MIT sociologist and clinical psychologist Sherry Turkle about the metaverse. “For me, it would be wiser to ask first, not what technology wants, but what do people want? What do people need to be safer? Less lonely? More connected to each other in communities? More supported in their efforts to live healthier and more fulfilled lives?”

Engineer, heal thyself

In the 1950 short story “The Evitable Conflict,” Asimov articulated what became a “zeroth law,” which would supersede the others: “A robot may not harm humanity, or, by inaction, allow humanity to come to harm.” It should go without saying that the rule should apply with “roboticist” in place of “robot.” For sure, many computer scientists avoid harming humanity, but many also don’t actively engage with the social implications of their work, effectively allowing humanity to come to harm, says Margaret Mitchell, a computer scientist who co-led Google’s Ethical AI team and now consults with organizations on tech ethics. (She is no relation to computer scientist Melanie Mitchell.)

One hurdle, according to Grosz, is that computer scientists are not properly trained in ethics. But she hopes to change that. Grosz and the philosopher Alison Simmons began a program at Harvard called Embedded EthiCS, in which teaching assistants with training in philosophy are embedded in computer science courses and teach lessons on privacy or discrimination or fake news. The program has spread to MIT, Stanford and the University of Toronto.

“We try to get students to think about values and value trade-offs,” Grosz says. Two things have struck her. The first is the difficulty students have with problems that lack right answers and require arguing for particular choices. The second is, despite their frustration, “how much students care about this set of issues,” Grosz says.

Another way to educate technologists about their influence is to widen collaborations. According to Mitchell, “computer science needs to move from holding math up as the be-all and end-all, to holding up both math and social science, and psychology as well.” Researchers should bring in experts in these topics, she says. Going the other way, Kearns says, they should also share their own technical expertise with regulators, lawyers and policy makers. Otherwise, policies will be so vague as to be useless. Without specific definitions of privacy or fairness written into law, companies can choose whatever’s most convenient or profitable.

When evaluating how a tool will affect a community, the best experts are often community members themselves. Grosz advocates consulting with diverse populations. Diversity helps in both user studies and technology teams. “If you don’t have people in the room who think differently from you,” Grosz says, “the differences are just not in front of you. If somebody says not every patient has a smartphone, boom, you start thinking differently about what you’re designing.”

According to Margaret Mitchell, “the most pressing problem is the diversity and inclusion of who’s at the table from the start. All the other issues fall out from there.” — Matthew Hutson

Editor’s note: This story was published February 24, 2022.

A few milestones from the past century of computing:

  • Alan Turing sketches out the theoretical blueprint for a machine able to implement instructions for making any calculation, the principle behind modern computing devices.
  • The University of Pennsylvania rolls out the first all-electronic general-purpose digital computer, called ENIAC. The Colossus electronic computers had been used by British code-breakers during World War II.
  • Grace Hopper creates the first compiler. It translated instructions into code that a computer could read and execute, an important step in the evolution of modern programming languages.
  • Three computers released in 1977 (the Commodore PET, the Apple II and the TRS-80) help make personal computing a reality.
  • Google’s AlphaGo computer program defeats world champion Go player Lee Sedol.
  • Researchers at Google report a controversial claim that they have achieved quantum supremacy, performing a computation that would be impossible in practice for a classical machine.



Essay on Future of Computer

Students are often asked to write an essay on Future of Computer in their schools and colleges. And if you’re also looking for the same, we have created 100-word, 250-word, and 500-word essays on the topic.

Let’s take a look…

100 Words Essay on Future of Computer

The Future of Computers

Computers are becoming smarter every day. They can now do tasks that were once only possible for humans. In the future, they may even start thinking like us!

Artificial Intelligence

Artificial Intelligence (AI) is a big part of the future. It allows computers to learn from their experiences. This means they can improve over time without needing help from humans.

Virtual Reality

Virtual Reality (VR) is another exciting area. It allows us to enter computer-created worlds. This could change how we learn, play, and work.

Quantum Computing

Quantum computing is a new technology that could make computers incredibly fast. This could help solve problems that are currently too hard for regular computers.

250 Words Essay on Future of Computer

The Evolution of Computers

Computers have evolved significantly since their inception, from room-sized behemoths to pocket-friendly devices. Their future promises even more radical transformations, underpinned by advancements in artificial intelligence (AI), quantum computing, and cloud technology.

AI is set to revolutionize the future of computers. Machine learning algorithms, a subset of AI, are becoming increasingly adept at pattern recognition and predictive analysis. This will lead to computers that can learn and adapt to their environment, making them more intuitive and user-friendly.

Quantum computing, using quantum bits or ‘qubits’, is another frontier. Unlike traditional bits that hold a value of either 0 or 1, qubits can exist in multiple states simultaneously. This allows quantum computers to perform complex calculations at unprecedented speeds. While still in its infancy, quantum computing could redefine computational boundaries.

Cloud Technology

Cloud technology is poised to further transform computer usage. With most data and applications moving to the cloud, the need for powerful personal computers may diminish. Instead, thin clients or devices with minimal hardware, relying on the cloud for processing and storage, could become the norm.

The future of computers is a fascinating blend of AI, quantum computing, and cloud technology. As these technologies mature, we can expect computers to become even more integral to our lives, reshaping society in profound ways. The only certainty is that the pace of change will continue to accelerate, making the future of computers an exciting realm of endless possibilities.

500-Word Essay on the Future of Computers

The Evolution of Computing

Computers have revolutionized the way we live, work, and play. From their early inception as room-sized machines to the sleek, pocket-sized devices we have today, computers have evolved dramatically. However, this is only the tip of the iceberg. The future of computing promises to be even more exciting and transformative.

Quantum Computing

One of the most anticipated advancements in the realm of computer science is quantum computing. Unlike classical computers, which use bits (0s and 1s) for processing information, quantum computers use quantum bits, or “qubits”. Qubits can exist in multiple states at once, a phenomenon known as superposition. This allows quantum computers to process vast amounts of data simultaneously, potentially solving complex problems that are currently beyond the capabilities of classical computers.
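As a worked illustration (our addition, not part of the original essay): a classical bit is always exactly 0 or 1, whereas the state of a single qubit is a superposition

|ψ⟩ = α|0⟩ + β|1⟩,  with |α|² + |β|² = 1,

so a register of n qubits spans 2^n basis states at once, which is the loose sense in which a quantum computer can "process vast amounts of data simultaneously".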

Artificial Intelligence and Machine Learning

Artificial Intelligence (AI) and Machine Learning (ML) are two other areas poised to shape the future of computing. AI refers to the ability of a machine to mimic human intelligence, while ML is a subset of AI that involves the ability of machines to learn and improve without being explicitly programmed. As these technologies advance, we can expect computers to become more autonomous, capable of complex decision-making and problem-solving.
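A minimal sketch (our illustration, using scikit-learn) of what "learning without being explicitly programmed" means in practice: the model below is never told the rule that maps inputs to outputs; it infers the rule from examples.

```python
# Minimal sketch: a model infers an XOR-like rule from examples
# instead of being explicitly programmed with it.
from sklearn.tree import DecisionTreeClassifier

X = [[0, 0], [0, 1], [1, 0], [1, 1]]   # example inputs (the "experience")
y = [0, 1, 1, 0]                        # desired outputs

model = DecisionTreeClassifier().fit(X, y)   # the rule is learned from data
print(model.predict([[1, 0]]))               # -> [1]
```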

Neuromorphic Computing

Neuromorphic computing, another promising field, aims to mimic the human brain’s architecture and efficiency. By leveraging the principles of neural networks, neuromorphic chips can process information more efficiently than traditional processors, making them ideal for applications requiring real-time processing and low power consumption.

Edge Computing

As the Internet of Things (IoT) continues to expand, so does the need for edge computing. Edge computing involves processing data closer to its source, reducing latency and bandwidth usage. This technology is crucial for real-time applications, such as autonomous vehicles and smart cities, where instant data processing is vital.

Conclusion: The Future is Now

The future of computing is already unfolding around us. Quantum computers are being developed by tech giants, AI and ML are becoming more sophisticated, neuromorphic chips are on the horizon, and edge computing is becoming a necessity in our increasingly connected world. As we move forward, the boundaries of what computers can achieve will continue to expand, leading to unprecedented advancements in technology and society. The future of computing is not just a concept—it’s a reality that’s taking shape right before our eyes.

That’s it! I hope the essay helped you.

If you’re looking for more, here are essays on other interesting topics:

  • Essay on Evolution of Computers
  • Essay on Importance of Computer in Our Life
  • Essay on History of Computer

Apart from these, you can look at all the essays by clicking here .

Happy studying!



Computer Science Essay Topics

Donna C

Unleash Your Creativity with 160+ Computer Science Essay Topics


Published on: May 5, 2023

Last updated on: Jan 30, 2024


One of the biggest challenges students face when it comes to writing an essay is choosing the right topic. 

This is especially true for computer science students, who often struggle to find a topic that is relevant to the subject.

That's where our blog comes in!

We have crafted a list of over 160 computer science essay topics to help students find inspiration. Whether you're aiming to write an impressive essay or simply searching for topic suggestions, we have got you covered.

So, let's get started!


Computer Science Essay - Overview

A computer science essay is a written piece that explores various topics related to computer science. These include technical and complex topics, like software development and artificial intelligence. They can also explore more general topics, like the history and future of technology.

In most cases, computer science essays are written by students as part of their coursework or academic assignments.

Computer science essays can take many forms, such as research papers, argumentative essays, or even creative writing pieces. 

Regardless of the format, a well-written computer science essay should be informative, engaging, and well-supported by evidence and research.

Now that we understand its purpose, let's explore some of the most popular and interesting topics within this field.

In the following sections, we will dive into over 160 computer science essay topics to inspire your next writing project.

Computer Science Essay Topics For High School Students

  • How Artificial Intelligence is Revolutionizing the Gaming Industry
  • The Ethics of Autonomous Vehicles: Who is Responsible for Accidents?
  • The Role of Computer Science in Modern Healthcare
  • The Benefits and Drawbacks of Artificial Intelligence
  • The Future of Cybersecurity: Challenges and Opportunities
  • How Virtual Reality is Changing the Way We Learn
  • The Ethics of Autonomous Vehicles
  • The Role of Big Data in Modern Business
  • The Pros and Cons of Cloud Computing
  • The Implications of Blockchain Technology

Computer Science Essay Topics For Middle School Students

  • How Computers Work: An Introduction to Hardware and Software
  • The Evolution of Video Games: From Pong to Virtual Reality
  • Internet Safety: Tips for Staying Safe Online
  • How Search Engines Work: Understanding Google and Bing
  • Coding Basics: An Introduction to HTML and CSS
  • The Future of Technology: What Will We See in the Next 10 Years?
  • The Power of Social Media: How it Impacts Our Lives
  • The Ethics of Technology: The Pros and Cons of Social Media
  • The Science of Cryptography: How Messages are Secured
  • Robots and Artificial Intelligence: What Are They and How Do They Work?

Computer Science Essay Topics For College Students

  • The Role of Machine Learning in Business
  • Cybersecurity and Data Privacy in the Digital Age
  • The Impact of Social Media on Political Campaigns
  • The Ethics of Artificial Intelligence and Autonomous Systems
  • The Future of Cloud Computing and Cloud Storage
  • The Use of Blockchain Technology in Financial Services
  • The Integration of IoT in Smart Homes and Smart Cities
  • The Advancements and Challenges of Quantum Computing
  • The Pros and Cons of Open Source Software
  • The Impact of Technology on the Job Market: Opportunities and Threats

Computer Science Essay Topics For University Students

  • The Application of Machine Learning and Deep Learning in Natural Language Processing
  • The Future of Quantum Computing: Challenges and Prospects
  • The Impact of Artificial Intelligence on the Labor Market: An Empirical Study
  • The Ethical Implications of Autonomous Systems and Robotics
  • The Role of Data Science in Financial Risk Management
  • Blockchain and Smart Contracts: Applications and Limitations
  • The Security Challenges of Cloud Computing: A Comparative Analysis
  • The Prospects of Cognitive Computing and its Implications for Business Intelligence
  • The Integration of IoT and Edge Computing in Smart City Development
  • The Relationship between Cybersecurity and National Security: A Theoretical and Empirical Study.

 Research Paper Topics in Computer Science

  • Artificial Intelligence in Cybersecurity: Advancements and Limitations
  • Social Media and Mental Health: Implications for Research and Practice
  • Blockchain Implementation in Supply Chain Management: A Comparative Study
  • Natural Language Processing: Trends, Challenges, and Future Directions
  • Edge Computing in IoT: Opportunities and Challenges
  • Data Analytics in Healthcare Decision Making: An Empirical Study
  • Virtual Reality in Education and Training: Opportunities and Challenges
  • Cloud Computing in Developing Countries: Opportunities and Challenges
  • Security Risks of Smart Homes and IoT Devices: A Comparative Analysis
  • Artificial Intelligence and the Legal Profession: Challenges and Opportunities

Computer Science Essay Topics On Emerging Technologies

  • 5G Networks: Trends, Applications, and Challenges
  • Augmented Reality in Marketing and Advertising: Opportunities and Challenges
  • Quantum Computing in Drug Discovery: A Review of Current Research
  • Autonomous Vehicles: Advancements and Challenges in Implementation
  • Synthetic Biology: Current Developments and Future Prospects
  • Brain-Computer Interfaces: Opportunities and Challenges in Implementation
  • Robotics in Healthcare: Trends, Challenges, and Future Directions
  • Wearable Technology: Applications and Limitations in Healthcare
  • Virtual Assistants: Opportunities and Limitations in Daily Life
  • Biometric Authentication: Advancements and Challenges in Implementation

Computer Science Essay Topics On Solving Problems

  • Using Artificial Intelligence to solve traffic congestion problems
  • Implementing Machine Learning to predict and prevent cyber-attacks
  • Developing a Computer Vision system to detect early-stage skin cancer
  • Using Data Analytics to improve energy efficiency in buildings
  • Implementing an IoT-based solution for monitoring and reducing air pollution
  • Developing a software system for optimizing supply chain management
  • Using Blockchain to secure and manage digital identities
  • Implementing a Smart Grid system for energy distribution and management
  • Developing a mobile application for emergency response and disaster management
  • Using Robotics to automate and optimize warehouse operations.

Computer Science Argumentative Essay Topics

  • Should the development of autonomous weapons be banned?
  • Is social media addiction a mental health disorder?
  • Should governments regulate the use of artificial intelligence in decision-making?
  • Is online privacy a fundamental human right?
  • Should companies be held liable for data breaches?
  • Is net neutrality necessary for a free and open internet?
  • Should software piracy be treated as a criminal offense?
  • Should online hate speech be regulated by law?
  • Is open-source software better than proprietary software?
  • Should governments use surveillance technology to prevent crime?

Computer Science Persuasive Essay Topics

  • Should coding be a mandatory subject in schools?
  • Is artificial intelligence a threat to human jobs?
  • Should the use of drones for commercial purposes be regulated?
  • Is encryption important for online security?
  • Should governments provide free Wi-Fi in public spaces?
  • Is cyberbullying a serious problem in schools?
  • Should social media platforms regulate hate speech?
  • Is online voting a viable option for elections?
  • Should algorithms be used in decision-making processes in the criminal justice system?
  • Should governments invest in space exploration and colonization?

 Current Hot Topics in Computer Science

  • The ethical implications of facial recognition technology
  • The role of blockchain in data security and privacy
  • The future of quantum computing and its potential applications
  • The challenges and opportunities of implementing machine learning in healthcare
  • The impact of big data on business operations and decision-making
  • The potential of augmented and virtual reality in education and training
  • The role of computer science in addressing climate change and sustainability
  • The social and cultural implications of social media algorithms
  • The intersection of computer science and neuroscience in developing artificial intelligence


Controversial Topics in Computer Science

  • The ethics of Artificial Intelligence
  • The dark side of the Internet
  • The impact of social media on mental health
  • The role of technology in political campaigns
  • The ethics of autonomous vehicles
  • The responsibility of tech companies in preventing cyberbullying
  • The use of facial recognition technology by law enforcement
  • The impact of automation on employment
  • The future of privacy in a digital world
  • The dangers of deepfake technology

Good Essay Topics on Computer Science and Systems

  • The history of computers and computing
  • The impact of computers on society
  • The evolution of computer hardware and software
  • The role of computers in education
  • The future of quantum computing
  • The impact of computers on the music industry
  • The use of computers in medicine and healthcare
  • The role of computers in space exploration
  • The impact of video games on cognitive development
  • The benefits and drawbacks of cloud computing

Simple & Easy Computers Essay Topics

  • How to choose the right computer for your needs
  • The basics of computer hardware and software
  • The importance of computer maintenance and upkeep
  • How to troubleshoot common computer problems
  • The role of computers in modern business
  • The impact of computers on communication
  • How to protect your computer from viruses and malware
  • The basics of computer programming
  • How to improve your computer skills
  • The benefits of using a computer for personal finance management.

Computer Science Extended Essay Topics

  • The impact of Artificial Intelligence on the job market
  • The development of a smart home system using IoT
  • The use of Blockchain in supply chain management
  • The future of quantum computing in cryptography
  • Developing an AI-based chatbot for customer service
  • The use of Machine Learning for credit scoring
  • The development of an autonomous drone delivery system
  • The role of Big Data in predicting and preventing natural disasters
  • The potential of Robotics in agriculture
  • The impact of 5G on the Internet of Things

Long Essay Topics In Computer Science

  • The ethical implications of artificial intelligence and machine learning.
  • Exploring the potential of quantum computing and its impact on cryptography.
  • The use of big data in healthcare: Opportunities and challenges.
  • The future of autonomous vehicles and their impact on transportation and society.
  • The role of blockchain technology in securing digital transactions and information.
  • The impact of social media and algorithms on the spread of misinformation.
  • The ethics of cybersecurity and the role of governments in protecting citizens online.
  • The potential of virtual reality and augmented reality in education and training.
  • The impact of cloud computing on business and IT infrastructure.
  • The challenges and opportunities of developing sustainable computing technologies

Most Interesting Computers Topics

  • The rise of artificial intelligence in information technology: opportunities and challenges.
  • The evolution of programming languages and their impact on software development.
  • The future of pursuing computer science education: online learning vs traditional classroom.
  • The impact of virtualization on computer systems and their scalability.
  • Cybersecurity threats in information technology: prevention and mitigation strategies.
  • An analysis of the most popular programming languages and their advantages and disadvantages.
  • The role of cloud computing in the digital transformation of businesses.
  • Emerging trends in pursuing computer science education: personalized learning and adaptive assessments.
  • Developing secure computer systems for critical infrastructure: challenges and solutions.
  • The potential of quantum computing in revolutionizing information technology and programming languages.

How To Choose The Right Computer Science Essay Topic

Choosing the right computer science essay topic can be a challenging task. Here are some tips to help you select the best topic for your essay:

  • Consider your Interests

Choose a topic that you are genuinely interested in. This will help you to stay motivated and engaged throughout the writing process.

  • Do your Research

Spend some time researching different computer science topics to identify areas that interest you and have plenty of research material available.

  • Narrow Down Your Focus

Once you have a list of potential topics, narrow down your focus to a specific aspect or issue within that topic.

  • Consider the Audience

Think about who your audience is and choose a topic that is relevant to their interests or needs.

  • Evaluate The Scope Of The Topic

Make sure that the topic you choose is not too broad or too narrow. You want to have enough material to write a comprehensive essay, but not so much that it becomes overwhelming.

  • Brainstorm Ideas

Take some time to brainstorm different ideas and write them down. This can help you identify patterns or themes that you can use to develop your topic.

  • Consult With Your Instructor

If you're struggling to come up with a topic, consider consulting with your instructor or a tutor. They can provide you with guidance and feedback to help you choose the right topic.

Tips To Write An Effective Computer Science Essay

Writing an effective computer science essay requires careful planning and execution. Here are some tips to help you write a great essay:

  • Start with a clear thesis statement: Your thesis statement should be concise and clearly state the purpose of your essay.
  • Use evidence to support your arguments: Use credible sources to back up your arguments. Also, make sure to properly cite your sources.
  • Write in a clear and concise manner: Use simple and straightforward language to convey your ideas. Avoid using technical jargon that your audience may not understand.
  • Use diagrams and visual aids: If appropriate, use diagrams and visual aids to help illustrate your ideas. This will make your essay look more engaging.
  • Organize your essay effectively: Use clear and logical headings and subheadings to organize your essay and make it easy to follow.
  • Proofread and edit: Before submitting, make sure to carefully proofread your essay to ensure that it is free of errors.
  • Seek feedback: Get feedback from others to help you identify areas where you can improve your writing.

By following these tips, you can write an effective computer science essay that engages your audience and effectively communicates your ideas.

In conclusion, computer science is a vast and exciting field that offers a wide range of essay topics for students. 

Whether you're writing about emerging technologies or hot topics in computer science, there are plenty of options to choose from.

To choose the right topic for your essay, consider your interests, the assignment requirements, and the audience you are writing for. Once you have a topic in mind, follow the tips we've outlined to write an effective essay that engages your audience.

If you're struggling to write your computer science essay, consider hiring our professional essay writing service, CollegeEssay.org.

We offer a range of services, including essay writing, editing, and proofreading, to help students achieve their academic goals.

With our essay writer AI, you can take your writing to the next level and succeed in your studies.

So why wait? Visit our computer science essay writing service and see how we can help you!

Donna C (Law, Literature)

Donna has garnered the best reviews and ratings for her work. She enjoys writing about a variety of topics but is particularly interested in social issues, current events, and human interest stories. She is a sought-after voice in the industry, known for her engaging, professional writing style.


Ideas That Created the Future: Classic Papers of Computer Science


Harry Lewis is Gordon McKay Research Professor of Computer Science at Harvard University. He is the coauthor of Blown to Bits: Your Life, Liberty, and Happiness after the Digital Explosion, coeditor of What Is College For?, and editor of Ideas That Created the Future (MIT Press).

Classic papers by thinkers ranging from Aristotle and Leibniz to Norbert Wiener and Gordon Moore that chart the evolution of computer science.

Ideas That Created the Future collects forty-six classic papers in computer science that map the evolution of the field. It covers all aspects of computer science: theory and practice, architectures and algorithms, and logic and software systems, with an emphasis on the period of 1936–1980 but also including important earlier work. Offering papers by thinkers ranging from Aristotle and Leibniz to Alan Turing and Norbert Wiener, the book documents the discoveries and inventions that created today's digital world. A brief essay by volume editor Harry Lewis, offering historical and intellectual context, accompanies each paper.

Readers will learn that we owe to Aristotle the realization that fixed rules of logic can apply to different phenomena—that logic provides a general framework for reasoning—and that Leibniz recognized the merits of binary notation. They can read Ada Lovelace's notes on L. F. Menabrea's sketch of an analytical engine, George Boole's attempt to capture the rules of reason in mathematical form, David Hilbert's famous 1900 address, “Mathematical Problems,” and Alan Turing's illumination of a metamathematical world. Later papers document the “Cambrian era” of 1950s computer design, Maurice Wilkes's invention of microcode, Grace Hopper's vision of a computer's “education,” Ivan Sutherland's invention of computer graphics at MIT, Whitfield Diffie and Martin Hellman's pioneering work on encryption, and much more. Lewis's guided tour of a burgeoning field is especially welcome at a time when computer education is increasingly specialized.


Ideas That Created the Future: Classic Papers of Computer Science. Edited by Harry R. Lewis. The MIT Press, 2021. ISBN (electronic): 9780262363174. https://doi.org/10.7551/mitpress/12274.001.0001


Table of Contents

  • Front Matter. doi: 10.7551/mitpress/12274.003.0051
  • Preface, by Harry Lewis. doi: 10.7551/mitpress/12274.003.0001
  • Introduction: The Roots and Growth of Computer Science. doi: 10.7551/mitpress/12274.003.0002
  • 1: Prior Analytics (~350 BCE), by Aristotle. doi: 10.7551/mitpress/12274.003.0003
  • 2: The True Method (1677), by Gottfried Wilhelm Leibniz. doi: 10.7551/mitpress/12274.003.0004
  • 3: Sketch of the Analytical Engine (1843), by L. F. Menabrea. doi: 10.7551/mitpress/12274.003.0005
  • 4: An Investigation of the Laws of Thought on Which Are Founded the Mathematical Theories of Logic and Probabilities (1854), by George Boole. doi: 10.7551/mitpress/12274.003.0006
  • 5: Mathematical Problems (1900), by David Hilbert. doi: 10.7551/mitpress/12274.003.0007
  • 6: On Computable Numbers, with an Application to the Entscheidungsproblem (1936), by Alan Mathison Turing. doi: 10.7551/mitpress/12274.003.0008
  • 7: A Proposed Automatic Calculating Machine (1937), by Howard Hathaway Aiken. doi: 10.7551/mitpress/12274.003.0009
  • 8: A Symbolic Analysis of Relay and Switching Circuits (1938), by Claude Shannon. doi: 10.7551/mitpress/12274.003.0010
  • 9: A Logical Calculus of the Ideas Immanent in Nervous Activity (1943), by Warren McCulloch and Walter Pitts. doi: 10.7551/mitpress/12274.003.0011
  • 10: First Draft of a Report on the EDVAC (1945), by John von Neumann. doi: 10.7551/mitpress/12274.003.0012
  • 11: As We May Think (1945), by Vannevar Bush. doi: 10.7551/mitpress/12274.003.0013
  • 12: A Mathematical Theory of Communication (1948), by Claude Shannon. doi: 10.7551/mitpress/12274.003.0014
  • 13: Error Detecting and Error Correcting Codes (1950), by R. W. Hamming. doi: 10.7551/mitpress/12274.003.0015
  • 14: Computing Machinery and Intelligence (1950), by Alan Mathison Turing. doi: 10.7551/mitpress/12274.003.0016
  • 15: The Best Way to Design an Automatic Calculating Machine (1951), by Maurice Wilkes. doi: 10.7551/mitpress/12274.003.0017
  • 16: The Education of a Computer (1952), by Grace Murray Hopper. doi: 10.7551/mitpress/12274.003.0018
  • 17: On the Shortest Spanning Subtree of a Graph and the Traveling Salesman Problem (1956), by Joseph B. Kruskal, Jr. doi: 10.7551/mitpress/12274.003.0019
  • 18: The Perceptron: A Probabilistic Model for Information Storage and Organization (1958), by Frank Rosenblatt. doi: 10.7551/mitpress/12274.003.0020
  • 19: Some Moral and Technical Consequences of Automation (1960), by Norbert Wiener. doi: 10.7551/mitpress/12274.003.0021
  • 20: Man–Computer Symbiosis (1960), by J. C. R. Licklider. doi: 10.7551/mitpress/12274.003.0022
  • 21: Recursive Functions of Symbolic Expressions and Their Computation by Machine (1960), by John McCarthy. doi: 10.7551/mitpress/12274.003.0023
  • 22: Augmenting Human Intellect: A Conceptual Framework (1962), by Douglas C. Engelbart. doi: 10.7551/mitpress/12274.003.0024
  • 23: An Experimental Time-Sharing System (1962), by Fernando Corbató, Marjorie Merwin Daggett and Robert C. Daley. doi: 10.7551/mitpress/12274.003.0025
  • 24: Sketchpad (1963), by Ivan E. Sutherland. doi: 10.7551/mitpress/12274.003.0026
  • 25: Cramming More Components onto Integrated Circuits (1965), by Gordon Moore. doi: 10.7551/mitpress/12274.003.0027
  • 26: Solution of a Problem in Concurrent Program Control (1965), by Edsger Dijkstra. doi: 10.7551/mitpress/12274.003.0028
  • 27: ELIZA—A Computer Program for the Study of Natural Language Communication between Man and Machine (1966), by Joseph Weizenbaum. doi: 10.7551/mitpress/12274.003.0029
  • 28: The Structure of the “THE”-Multiprogramming System (1968), by Edsger Dijkstra. doi: 10.7551/mitpress/12274.003.0030
  • 29: Go To Statement Considered Harmful (1968), by Edsger Dijkstra. doi: 10.7551/mitpress/12274.003.0031
  • 30: Gaussian Elimination is Not Optimal (1969), by Volker Strassen. doi: 10.7551/mitpress/12274.003.0032
  • 31: An Axiomatic Basis for Computer Programming (1969), by C. A. R. Hoare. doi: 10.7551/mitpress/12274.003.0033
  • 32: A Relational Model of Large Shared Data Banks (1970), by Edgar F. Codd. doi: 10.7551/mitpress/12274.003.0034
  • 33: Managing the Development of Large Software Systems (1970), by Winston W. Royce. doi: 10.7551/mitpress/12274.003.0035
  • 34: The Complexity of Theorem-Proving Procedures (1971), by Stephen A. Cook. doi: 10.7551/mitpress/12274.003.0036
  • 35: A Statistical Interpretation of Term Specificity and Its Application in Retrieval (1972), by Karen Spärck Jones. doi: 10.7551/mitpress/12274.003.0037
  • 36: Reducibility among Combinatorial Problems (1972), by Richard Karp. doi: 10.7551/mitpress/12274.003.0038
  • 37: The Unix Time-Sharing System (1974), by Dennis Ritchie and Kenneth Thompson. doi: 10.7551/mitpress/12274.003.0039
  • 38: A Protocol for Packet Network Intercommunication (1974), by Vinton Cerf and Robert Kahn. doi: 10.7551/mitpress/12274.003.0040
  • 39: Programming with Abstract Data Types (1974), by Barbara Liskov and Stephen Zilles. doi: 10.7551/mitpress/12274.003.0041
  • 40: The Mythical Man-Month (1975), by Frederick P. Brooks. doi: 10.7551/mitpress/12274.003.0042
  • 41: Ethernet: Distributed Packet Switching for Local Computer Networks (1976), by Robert Metcalfe and David R. Boggs. doi: 10.7551/mitpress/12274.003.0043
  • 42: New Directions in Cryptography (1976), by Whitfield Diffie and Martin Hellman. doi: 10.7551/mitpress/12274.003.0044
  • 43: Big Omicron and Big Omega and Big Theta (1976), by Donald E. Knuth. doi: 10.7551/mitpress/12274.003.0045
  • 44: Social Processes and Proofs of Theorems and Programs (1977), by Richard DeMillo, Richard Lipton and Alan Perlis. doi: 10.7551/mitpress/12274.003.0046
  • 45: A Method for Obtaining Digital Signatures and Public-Key Cryptosystems (1978), by Ronald Rivest, Adi Shamir and Len Adleman. doi: 10.7551/mitpress/12274.003.0047
  • 46: How to Share a Secret (1979), by Adi Shamir. doi: 10.7551/mitpress/12274.003.0048
  • Bibliography. doi: 10.7551/mitpress/12274.003.0049
  • Index. doi: 10.7551/mitpress/12274.003.0050


  • Open access
  • Published: 16 October 2023

Forecasting the future of artificial intelligence with machine learning-based link prediction in an exponentially growing knowledge network

  • Mario Krenn   ORCID: orcid.org/0000-0003-1620-9207 1 ,
  • Lorenzo Buffoni 2 ,
  • Bruno Coutinho 2 ,
  • Sagi Eppel 3 ,
  • Jacob Gates Foster 4 ,
  • Andrew Gritsevskiy   ORCID: orcid.org/0000-0001-8138-8796 3 , 5 , 6 ,
  • Harlin Lee   ORCID: orcid.org/0000-0001-6128-9942 4 ,
  • Yichao Lu   ORCID: orcid.org/0009-0001-2005-1724 7 ,
  • João P. Moutinho 2 ,
  • Nima Sanjabi   ORCID: orcid.org/0009-0000-6342-5231 8 ,
  • Rishi Sonthalia   ORCID: orcid.org/0000-0002-0928-392X 4 ,
  • Ngoc Mai Tran 9 ,
  • Francisco Valente   ORCID: orcid.org/0000-0001-6964-9391 10 ,
  • Yangxinyu Xie   ORCID: orcid.org/0000-0002-1532-6746 11 ,
  • Rose Yu 12 &
  • Michael Kopp 6  

Nature Machine Intelligence volume 5, pages 1326–1335 (2023)


  • Complex networks
  • Computer science
  • Research data

A tool that could suggest new personalized research directions and ideas by taking insights from the scientific literature could profoundly accelerate the progress of science. A field that might benefit from such an approach is artificial intelligence (AI) research, where the number of scientific publications has been growing exponentially over recent years, making it challenging for human researchers to keep track of the progress. Here we use AI techniques to predict the future research directions of AI itself. We introduce a graph-based benchmark based on real-world data—the Science4Cast benchmark, which aims to predict the future state of an evolving semantic network of AI. For that, we use more than 143,000 research papers and build up a knowledge network with more than 64,000 concept nodes. We then present ten diverse methods to tackle this task, ranging from pure statistical to pure learning methods. Surprisingly, the most powerful methods use a carefully curated set of network features, rather than an end-to-end AI approach. These results indicate a great potential that can be unleashed for purely ML approaches without human knowledge. Ultimately, better predictions of new future research directions will be a crucial component of more advanced research suggestion tools.


The corpus of scientific literature grows at an ever-increasing speed. Specifically, in the field of artificial intelligence (AI) and machine learning (ML), the number of papers every month is growing exponentially with a doubling rate of roughly 23 months (Fig. 1 ). Simultaneously, the AI community is embracing diverse ideas from many disciplines such as mathematics, statistics and physics, making it challenging to organize different ideas and uncover new scientific connections. We envision a computer program that can automatically read, comprehend and act on AI literature. It can predict and suggest meaningful research ideas that transcend individual knowledge and cross-domain boundaries. If successful, it could greatly improve the productivity of AI researchers, open up new avenues of research and help drive progress in the field.

Figure 1. The doubling rate of papers per month is roughly 23 months, which might eventually create problems for publishing in these fields. The categories are cs.AI, cs.LG, cs.NE and stat.ML.
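For orientation (a standard exponential-growth identity, not a number from the paper beyond the quoted doubling time): if the monthly paper count follows N(t) = N(0) · 2^(t/T) with doubling time T ≈ 23 months, then a decade of growth (t = 120 months) multiplies the monthly output by a factor of 2^(120/23) ≈ 37.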

In this work, we address the ambitious vision of developing a data-driven approach to predict future research directions 1 . As new research ideas often emerge from connecting seemingly unrelated concepts 2 , 3 , 4 , we model the evolution of AI literature as a temporal network. We construct an evolving semantic network that encapsulates the content and development of AI research since 1994, with approximately 64,000 nodes (representing individual concepts) and 18 million edges (connecting jointly investigated concepts).

We use the semantic network as an input to ten diverse statistical and ML methods to predict the future evolution of the semantic network with high accuracy. That is, we can predict which combinations of concepts AI researchers will investigate in the future. Being able to predict what scientists will work on is a first crucial step for suggesting new topics that might have a high impact.

Several methods were contributions to the Science4Cast competition hosted by the 2021 IEEE International Conference on Big Data (IEEE BigData 2021). Broadly, we can divide the methods into two classes: methods that use hand-crafted network-theoretical features and those that automatically learn features. We found that models using carefully hand-crafted features outperform methods that attempt to learn features autonomously. This (somewhat surprising) finding indicates a great potential for improvements of models free of human priors.

Our paper introduces a real-world graph benchmark for AI, presents ten methods for solving it, and discusses how this task contributes to the larger goal of AI-driven research suggestions in AI and other disciplines. All methods are available at GitHub 5 .

Semantic networks

The goal here is to extract knowledge from the scientific literature that can subsequently be processed by computer algorithms. At first glance, a natural first step would be to use large language models (such as GPT3 6 , Gopher 7 , MegaTron 8 or PaLM 9 ) on each article to extract concepts and their relations automatically. However, these methods still struggle with reasoning 10 , 11 ; thus, it is not yet clear how they can be used for identifying and suggesting new ideas and concept combinations.

Rzhetsky et al. 12 pioneered an alternative approach, creating semantic networks in biochemistry from co-occurring concepts in scientific papers. There, nodes represent scientific concepts, specifically biomolecules, and are linked when a paper mentions both in its title or abstract. This evolving network captures the field’s history and, using supercomputer simulations, provides insights into scientists’ collective behaviour and suggests more efficient research strategies 13 . Although creating semantic networks from concept co-occurrences extracts only a small amount of knowledge from each paper, it captures non-trivial and actionable content when applied to large datasets 2 , 4 , 13 , 14 , 15 . PaperRobot extends this approach by predicting new links from large medical knowledge graphs and formulating new ideas in human language as paper drafts 16 .

This approach was applied and extended to quantum physics 17 by building a semantic network of over 6,000 concepts. There, the authors (including one of us) formulated the prediction of new research trends and connections as an ML task, with the goal of identifying concept pairs not yet jointly discussed in the literature but likely to be investigated in the future. This prediction task was one component for personalized suggestions of new research ideas.

Link prediction in semantic networks

We formulate the prediction of future research topics as a link-prediction task in an exponentially growing semantic network in the AI field. The goal is to predict which unconnected nodes, representing scientific concepts not yet jointly researched, will be connected in the future.

Link prediction is a common problem in computer science, addressed with classical metrics and features as well as ML techniques. Network theory-based methods include local motif-based approaches 18 , 19 , 20 , 21 , 22 , linear optimization 23 , global perturbations 24 and stochastic block models 25 . ML-based works have optimized combinations of predictors 26 , with further discussion in a recent review 27 .

In ref. 17 , seventeen hand-crafted features were used for this task. In the Science4Cast competition, the goal was to find more precise methods for link-prediction tasks in semantic networks (using a semantic network of AI that is ten times larger than the one in ref. 17 ).
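To make the classical, neighbourhood-based predictors mentioned above concrete, here is a brief illustrative sketch (not the code used in the competition); the toy graph and candidate pairs are invented for demonstration.

```python
# Sketch: classical link-prediction scores on a toy concept co-occurrence graph.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("neural network", "deep learning"),
    ("neural network", "image classification"),
    ("deep learning", "image classification"),
    ("deep learning", "transformer"),
    ("graph theory", "link prediction"),
])

candidates = [("neural network", "transformer"), ("transformer", "link prediction")]

# Higher scores suggest a higher chance that two concepts will be studied together.
for u, v, s in nx.jaccard_coefficient(G, candidates):
    print(f"Jaccard({u}, {v}) = {s:.2f}")
for u, v, s in nx.adamic_adar_index(G, candidates):
    print(f"Adamic-Adar({u}, {v}) = {s:.2f}")
for u, v, s in nx.preferential_attachment(G, candidates):
    print(f"Preferential attachment({u}, {v}) = {s}")
```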

Potential for idea generation in science

The long-term goal of predictions and suggestions in semantic networks is to provide new ideas to individual researchers. In a way, we hope to build a creative artificial muse in science 28 . We can bias or constrain the model to give topic suggestions related to the research interests of an individual scientist, or of a pair of scientists, in order to suggest topics for interdisciplinary collaborations.

Generation and analysis of the dataset

Dataset construction

We create a dynamic semantic network using papers published on arXiv from 1992 to 2020 in the categories cs.AI, cs.LG, cs.NE and stat.ML. The 64,719 nodes represent AI concepts extracted from 143,000 paper titles and abstracts using Rapid Automatic Keyword Extraction (RAKE) and normalized via natural language processing (NLP) techniques and custom methods 29 . Although high-quality taxonomies such as the Computer Science Ontology (CSO) exist 30 , 31 , we choose not to use them for two reasons: the rapid growth of AI and ML may result in new concepts not yet in the CSO, and not all scientific domains have high-quality taxonomies like CSO. Our goal is to build a scalable approach applicable to any domain of science. However, future research could investigate merging these approaches (see ‘Extensions and future work’).
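As a rough illustration of this kind of keyword extraction (our sketch; we use the open-source rake_nltk package as a stand-in for the authors' customized RAKE pipeline, and the example abstract is invented):

```python
# Sketch: extracting candidate concepts from a title/abstract with RAKE.
# Requires the rake_nltk package and NLTK's stopword data to be installed.
from rake_nltk import Rake

abstract = ("We train a convolutional neural network for image classification "
            "and compare it with a support vector machine baseline.")

rake = Rake()                          # default English stopwords and punctuation
rake.extract_keywords_from_text(abstract)
phrases = rake.get_ranked_phrases()    # e.g. ['convolutional neural network', ...]

# A real pipeline would further normalize phrases (lower-casing, singularization,
# merging near-duplicates) before using them as nodes of the semantic network.
print(phrases)
```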

Concepts form the nodes of the semantic network, and edges are drawn when concepts co-appear in a paper title or abstract. Edges have time stamps based on the paper’s publication date, and multiple time-stamped edges between concepts are common. The network is edge-weighted, and the weight of an edge stands for the number of papers that connect two concepts. In total, this creates a time-evolving semantic network, depicted in Fig. 2 .

Figure 2. Utilizing 143,000 AI and ML papers on arXiv from 1992 to 2020, we create a list of concepts using RAKE and other NLP tools, which form nodes in a semantic network. Edges connect concepts that co-occur in titles or abstracts, resulting in an evolving network that expands as more concepts are jointly investigated. The task involves predicting which unconnected nodes (concepts not yet studied together) will connect within a few years. We present ten diverse statistical and ML methods to address this challenge.
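A minimal sketch of this construction (illustrative only, not the authors' code; the two example papers are invented): concepts that co-appear in a title or abstract are linked, each edge carries the paper's date, and the weight between two concepts is the number of such papers.

```python
# Sketch: building a time-stamped, weighted concept co-occurrence network.
from itertools import combinations
import networkx as nx

papers = [
    {"date": "2019-05", "concepts": {"neural network", "image classification"}},
    {"date": "2020-11", "concepts": {"neural network", "transformer", "attention"}},
]

G = nx.MultiGraph()   # allows multiple time-stamped edges between the same concepts
for paper in papers:
    for u, v in combinations(sorted(paper["concepts"]), 2):
        G.add_edge(u, v, date=paper["date"])

# The effective edge weight between two concepts is the number of papers linking them.
print(G.number_of_edges("neural network", "transformer"))   # -> 1
```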

Network-theoretical analysis

The published semantic network has 64,719 nodes and 17,892,352 unique undirected edges, with a mean node degree of 553. Many hub nodes greatly exceed this mean degree, as shown in Fig. 3. For example, the highest node degrees are 466,319 (neural network), 198,050 (deep learning), 195,345 (machine learning), 169,555 (convolutional neural network), 159,403 (real world), 150,227 (experimental result), 127,642 (deep neural network) and 115,334 (large scale). We fit a power-law curve to the degree distribution p(k) using ref. 32 and obtained p(k) ∝ k^−2.28 for degree k ≥ 1,672. However, real complex network degree distributions often follow power laws with exponential cut-offs 33 . Recent work 34 has indicated that lognormal distributions fit most real-world networks better than power laws. Likelihood ratio tests from ref. 32 suggest that truncated power law (P = 0.0031), lognormal (P = 0.0045) and lognormal positive (P = 0.015) fit better than a pure power law, while exponential (P = 3 × 10^−10) and stretched exponential (P = 6 × 10^−5) fit worse. We could not conclusively determine the best fit at P ≤ 0.1.

Figure 3. Nodes with the highest (466,319) and lowest (2) non-zero degrees are neural network and video compression technique, respectively. The most frequent non-zero degree is 64 (which occurs 313 times). The plot, in log scale, omits 1,247 nodes with zero degree.
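The distribution comparison described above can be reproduced in spirit with the Python powerlaw package (our assumption is that this, or a tool like it, is what ref. 32 refers to); the snippet below uses a synthetic scale-free graph as a stand-in for the semantic network.

```python
# Sketch: fitting and comparing heavy-tailed degree distributions.
import networkx as nx
import powerlaw

G = nx.barabasi_albert_graph(5000, 3, seed=0)      # stand-in for the semantic network
degrees = [d for _, d in G.degree() if d > 0]

fit = powerlaw.Fit(degrees)                        # estimates alpha and x_min
print(fit.power_law.alpha, fit.power_law.xmin)

# Likelihood-ratio test: R > 0 favours the first distribution, p is the significance.
R, p = fit.distribution_compare("power_law", "lognormal")
print(R, p)
```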

We observe changes in network connectivity over time. Although degree distributions remained heavy-tailed, the ordering of nodes within the tail changed due to popularity trends. The most connected nodes and the years they became so include decision tree (1994), machine learning (1996), logic program (2000), neural network (2005), experimental result (2011), machine learning (2013, for a second time) and neural network (2015).

Connected component analysis in Fig. 4 reveals that the network grew more connected over time, with the largest group expanding and the number of connected components decreasing. Mid-sized connected components’ trajectories may expose trends, like image processing. A connected component with four nodes appeared in 1999 (brightness change, planar curve, local feature, differential invariant), and three more joined in 2000 (similarity transformation, template matching, invariant representation). In 2006, a paper discussing support vector machine and local feature merged this mid-sized group with the largest connected component.

Figure 4. Primary (left, blue) vertical axis: number of connected components with more than one node. Secondary (right, orange) vertical axis: number of nodes in the largest connected component. For example, the network in 2019 comprises one large connected component with 63,472 nodes and 1,247 isolated nodes, that is, nodes with no edges. The 2001 network, by contrast, has 19 connected components with size greater than one, the largest of which has 2,733 nodes.
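Such a component analysis can be computed per yearly snapshot along the following lines (a sketch; the yearly_snapshots mapping is a placeholder for networks built from all edges dated up to each year).

```python
# Sketch: tracking connected components across yearly snapshots of the network.
import networkx as nx

def component_stats(G):
    components = [c for c in nx.connected_components(G) if len(c) > 1]
    largest = max((len(c) for c in components), default=0)
    return len(components), largest

# Placeholder: in practice each snapshot contains all edges dated up to that year.
yearly_snapshots = {2001: nx.Graph(), 2019: nx.Graph()}

for year, G in sorted(yearly_snapshots.items()):
    n_components, largest = component_stats(G)
    print(year, n_components, largest)
```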

The semantic network reveals increasing centralization over time, with a smaller percentage of nodes (concepts) contributing to a larger fraction of edges (concept combinations). Figure 5 shows that the fraction of edges for high-degree nodes rises, while it decreases for low-degree nodes. The decreasing average clustering coefficient over time supports this trend, suggesting nodes are more likely to connect to high-degree central nodes. This could be due to the AI community’s focus on a few dominating methods or more consistent terminology use.

Figure 5. This cumulative histogram illustrates the fraction of nodes (concepts) corresponding to the fraction of edges (connections) for given years (1999, 2003, 2007, 2011, 2015 and 2019). The graph was generated by adding edges and nodes dated before each year. Nodes are sorted by increasing degree. The y value at x = 80 represents the fraction of edges contributed by all nodes in and below the 80th percentile of degrees.
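The centralization curve in Figure 5 can be computed along these lines (our sketch, not the authors' plotting code; it measures the share of edge endpoints, i.e. degree mass, accumulated by nodes up to a given degree percentile, and uses a synthetic graph as a stand-in).

```python
# Sketch: cumulative share of edge endpoints versus degree percentile.
import numpy as np
import networkx as nx

G = nx.barabasi_albert_graph(2000, 3, seed=0)       # stand-in for a yearly snapshot
degrees = np.sort([d for _, d in G.degree()])       # nodes sorted by increasing degree

cumulative_share = np.cumsum(degrees) / degrees.sum()
percentiles = 100 * np.arange(1, len(degrees) + 1) / len(degrees)

# Share of edge endpoints held by the bottom 80% of nodes by degree.
idx = np.searchsorted(percentiles, 80)
print(cumulative_share[idx])
```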

Problem formulation

At the big-picture level, we aim to make predictions in an exponentially growing semantic network. The specific task is to predict which pairs of nodes v1 and v2, each with degree d(v1), d(v2) ≥ c and lacking an edge in the year (2021 − δ), will share w edges in 2021. We use δ = 1, 3, 5, c = 0, 5, 25 and w = 1, 3, where c is a minimal degree. Note that c = 0 is an intriguing special case in which the nodes may not have any edge in the initial year, requiring the model to predict which nodes will form entirely new connections. The task w = 3 goes beyond simple link prediction and seeks to identify uninvestigated concept pairs that will appear together in at least three papers. An interesting alternative task could be predicting the fastest-growing links, denoted as ‘trend’ prediction.

In this task, we provide a list of 10 million unconnected node pairs (each node having a degree ≥ c ) for the year (2021 −  δ ), with the goal of sorting this list by descending probability that they will have at least w edges in 2021.
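For concreteness, the following is a minimal sketch (not the released benchmark code) of how such an evaluation set can be assembled from two snapshots of the concept network. It assumes the snapshots are SciPy sparse matrices whose entries count the number of papers in which two concepts co-occur; the function and variable names are illustrative.

```python
import numpy as np
from scipy import sparse

def build_benchmark(A_past, A_future, c=0, w=1, n_pairs=1_000_000, seed=0):
    """Sample unconnected concept pairs at year (2021 - delta) and label whether
    they share at least w edges (joint papers) by 2021. The real benchmark uses
    10 million pairs; a smaller default keeps this sketch fast."""
    rng = np.random.default_rng(seed)
    A_past, A_future = A_past.tocsr(), A_future.tocsr()
    deg = np.asarray((A_past > 0).sum(axis=1)).ravel()   # node degrees in the past snapshot
    eligible = np.where(deg >= c)[0]                      # enforce the minimal degree cut-off c

    pairs, labels = [], []
    while len(pairs) < n_pairs:
        u, v = rng.choice(eligible, size=2)
        if u == v or A_past[u, v] != 0:                   # keep only pairs without an edge
            continue
        pairs.append((u, v))
        labels.append(int(A_future[u, v] >= w))           # positive if >= w joint papers by 2021
    return np.array(pairs), np.array(labels)
```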

For evaluation, we employ the receiver operating characteristic (ROC) curve 35 , which plots the true-positive rate against the false-positive rate at various threshold settings. We use the area under the curve (AUC) of the ROC curve as our evaluation metric. The advantage of AUC over mean square error is its independence from the data distribution. Specifically, in our case, where the two classes have a highly asymmetric distribution (with only about 1–3% of newly connected edges) and the distribution changes over time, AUC offers a meaningful interpretation. Perfect predictions yield AUC = 1, whereas random predictions result in AUC = 0.5. The AUC equals the probability that a randomly chosen true element is ranked higher than a randomly chosen false one. For other metrics, see ref. 36 .
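As a generic illustration (not the competition's evaluation script), the AUC of a ranked prediction list can be computed directly with scikit-learn; the toy arrays below stand in for the labels and scores of the 10 million candidate pairs.

```python
from sklearn.metrics import roc_auc_score

labels = [0, 0, 1, 0, 1]             # toy ground truth: 1 if the pair gained >= w edges
scores = [0.1, 0.3, 0.8, 0.2, 0.7]   # toy model scores (any monotone ranking score works)

auc = roc_auc_score(labels, scores)
print(f"AUC = {auc:.4f}")            # 1.0 = perfect ranking, 0.5 = random guessing
```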

To tackle this task, models can use the complete information of the semantic network from the year (2021 − δ) in any way possible. In our case, all presented models generate a dataset for learning to make predictions from (2021 − 2δ) to (2021 − δ). Once the models successfully complete this task, they are applied to the test dataset to make predictions from (2021 − δ) to 2021. All reported AUCs are based on the test dataset. Note that solving the test dataset is especially challenging due to the δ-year shift, which causes systematic changes such as in the number of papers and the density of the semantic network.

AI-based solutions

We demonstrate various methods to predict new links in a semantic network, ranging from pure statistical approaches and neural networks with hand-crafted features (NF) to ML models without NF. The results are shown in Fig. 6 , with the highest AUC scores achieved by methods using NF as ML model inputs. Pure network features without ML are competitive, while pure ML methods have yet to outperform those with NF. Predicting links generated at least three times can achieve a quasi-deterministic AUC > 99.5%, suggesting an interesting target for computational sociology and science of science research. We have performed numerous tests to exclude data leakage in the benchmark dataset, overfitting, and data duplication in both the set of articles and the set of concepts. We rank methods based on their performance, with model M1 as the best performing and model M8 as the least effective (for the prediction of a new edge with δ = 3, c = 0). Models M4 and M7 are subdivided into M4A, M4B, M7A and M7B, differing in their focus on feature or embedding selection (more details in Methods ).

Figure 6

Here we show the AUC values for different models that use machine learning techniques (ML), hand-crafted network features (NF) or a combination thereof. The left plot shows results for the prediction of a single new link (that is, w = 1) and the right plot shows the results for the prediction of new triple links ( w = 3). The task is to predict δ = [1, 3, 5] years into the future, with cut-off values c = [0, 5, 25]. We sort the models by the results for the task ( w = 1, δ = 3, c = 0), which was the task in the Science4Cast competition. Data points that are not shown have an AUC below 0.6 or were not computed due to computational costs. All AUC values reported are computed on a validation dataset δ years ahead of the training dataset, which the models have never seen. Note that the prediction of new triple edges can be performed nearly deterministically. It will be interesting to understand the origin of this quasi-deterministic pattern in AI research, for example, by connecting it to the research interests of scientists 88 .

Model M1: NF + ML. This approach combines tree-based gradient boosting with graph neural networks, using extensive feature engineering to capture node centralities, proximity and temporal evolution 37 . The Light Gradient Boosting Machine (LightGBM) model 38 is employed with heavy regularization to combat overfitting due to the scarcity of positive examples, while a time-aware graph neural network learns dynamic node representations.

Model M2: NF + ML. This method utilizes node and edge features (as well as their first and second derivatives) to predict link formation probabilities 39 . Node features capture popularity, and edge features measure similarity. A multilayer perceptron with rectified linear unit (ReLU) activation is used for learning. Cold start issues are addressed with feature imputation.

Model M3: NF + ML. This method captures hand-crafted node features over multiple time snapshots and employs a long short-term memory (LSTM) to learn time dependencies 40 . The features were selected to be highly informative while having a low computational cost. The final configuration uses degree centrality, degree of neighbours and common neighbours as features. The LSTM outperforms fully connected neural networks.

Model M4: pure NF. Two purely statistical methods, preferential attachment 41 and common neighbours 27 , are used 42 . Preferential attachment is based on node degrees, while common neighbours relies on the number of shared neighbours. Both methods are computationally inexpensive and perform competitively with some learning-based models.

Model M5: NF + ML. Here, ten groups of first-order graph features are extracted to obtain neighbourhood and similarity properties, with principal component analysis 43 applied for dimensionality reduction 44 . A random forest classifier is trained on the balanced dataset to predict new links.

Model M6: NF + ML. The baseline solution uses 15 hand-crafted features as input to a four-layer neural network, predicting the probability of link formation between node pairs 17 .

Model M7: end-to-end ML (auto node embedding). The baseline solution is modified to use node2vec 45 and ProNE embeddings 46 instead of hand-crafted features. The embeddings are input to a neural network with two hidden layers for link prediction.

Model M8: end-to-end ML (transformers). This method learns features in an unsupervised manner using transformers 47 . Node2vec embeddings 45 , 48 are generated for various snapshots of the adjacency matrix, and a transformer model 49 is pre-trained as a feature extractor. A two-layer ReLU network is used for classification.

Extensions and future work

Developing an AI that suggests research topics to scientists is a complex task, and our link-prediction approach in temporal networks is just the beginning. We highlight key extensions and future work directly related to the ultimate goal of AI for AI.

High-quality predictions without feature engineering. Interestingly, the most effective methods utilized carefully crafted features on a graph with extracted concepts as nodes and edges representing their joint publication history. Investigating whether end-to-end deep learning can solve tasks without feature engineering will be a valuable next step.

Fully automated concept extraction. Current concept lists, generated by RAKE’s statistical text analysis, demand time-consuming code development to address irrelevant term extraction (for example, verbs, adjectives). A fully automated NLP technique that accurately extracts meaningful concepts without manual code intervention would greatly enhance the process.

Leveraging ontology taxonomies. Alongside fully automated concept extraction, utilizing established taxonomies such as the CSO 30 , 31 , Wikipedia-extracted concepts, book indices 17 or PhySH key phrases is crucial. Although not comprehensive for all domains, these curated datasets often contain hierarchical and relational concept information, greatly improving prediction tasks.

Incorporating relation extraction. Future work could explore relation extraction techniques for constructing more accurate, sparser semantic networks. By discerning and classifying meaningful concept relationships in abstracts 50 , 51 , a refined AI literature representation is attainable. Using NLP tools for entity recognition, relationship identification and classification, this approach may enhance prediction performance and novel research direction identification.

Generation of new concepts. Our work predicts links between known concepts, but generating new concepts using AI remains a challenge. This unsupervised task, as explored in refs. 52 , 53 , involves detecting concept clusters with dynamics that signal new concept formation. Incorporating emerging concepts into the current framework for suggesting research topics is an intriguing future direction.

Semantic information beyond concept pairs. Currently, abstracts and titles are compressed into concept pairs, but more comprehensive information extraction could yield meaningful predictions. Exploring complex data structures such as hypergraphs 54 may be computationally demanding, but clever tricks could reduce complexity, as shown in ref. 55 . Investigating sociological factors or drawing inspiration from material science approaches 56 may also improve prediction tasks. A recent dataset for the study of the science of science also includes more complex data structures than the ones used in our paper, including data from social networks such as Twitter 57 .

Predictions of scientific success. While predicting new links between concepts is valuable, assessing their potential impact is essential for high-quality suggestions. Introducing a metric of success, like estimated citation numbers or citation growth rate, can help gauge the importance of these connections. Adapting citation prediction techniques from the science of science 58 , 59 , 60 , 61 to semantic networks offers a promising research direction.

Anomaly detections. Predicting likely connections may not align with finding surprising research directions. One method for identifying surprising suggestions involves constraining cosine similarity between vertices 62 , which measures shared neighbours and can be associated with semantic (dis)similarity. Another approach is detecting anomalies in semantic networks, which are potential links with extreme properties 63 , 64 . While scientists often focus on familiar topics 3 , 4 , greater impact results from unexpected combinations of distant domains 12 , encouraging the search for surprising associations.

End-to-end formulation. Our method breaks down the goal of extracting knowledge from scientific literature into subtasks, contrasting with end-to-end deep learning that tackles problems directly without subproblems 65 , 66 . End-to-end approaches have shown great success in various domains 67 , 68 , 69 . Investigating whether such an end-to-end solution can achieve similar success in our context would be intriguing.

Our method represents a crucial step towards developing a tool that can assist scientists in uncovering novel avenues for exploration. We are confident that our outlined ideas and extensions pave the way for achieving practical, personalized, interdisciplinary AI-based suggestions for new impactful discoveries. We firmly believe that such a tool holds the potential to become an influential catalyst, transforming the way scientists approach research questions and collaborate in their respective fields.

Details on concept set generation and application

In this section, we provide details on the generation of our list of 64,719 concepts. For more information, the code is accessible on GitHub . The entire approach is designed for immediate scalability to other domains.

Initially, we utilized approximately 143,000 arXiv papers from the categories cs.AI, cs.LG, cs.NE and stat.ML spanning 1992 to 2020. The omission of earlier data has a negligible effect on our research question, as we show below. We then iterated over each individual article, employing RAKE (with an extended stopword list) to suggest concept candidates, which were subsequently stored.

Following the iteration, we retained concepts composed of at least two words (for example, neural network) appearing in six or more articles, as well as concepts comprising a minimum of three words (for example, recurrent neural network) appearing in three or more articles. This initial filter substantially reduced noise generated by RAKE, resulting in a list of 104,948 concepts.

Lastly, we developed an automated filtering tool to further enhance the quality of the concept list. This tool identified common, domain-independent errors made by RAKE, which primarily included phrases that were not concepts (for example, dataset provided or discuss open challenge). We compiled a list of 543 words not part of meaningful concepts, including verbs, ordinal numbers, conjunctions and adverbials. Ultimately, this process produced our final list of 64,719 concepts employed in our study. No further semantic concept/entity linking is applied.
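A simplified sketch of this pipeline is given below. It assumes a RAKE implementation such as the rake_nltk package (the actual code uses its own extended stopword list) and a set bad_words holding the 543 non-concept words; the thresholds follow the filters described above, and all names are illustrative.

```python
from collections import Counter
from rake_nltk import Rake   # assumes the rake_nltk package (requires nltk stopword data)

def extract_concepts(abstracts, bad_words):
    """RAKE candidate extraction followed by word-count/frequency filtering
    and removal of phrases that contain non-concept words."""
    rake = Rake()                       # an extended stopword list can be passed here
    doc_freq = Counter()
    for text in abstracts:
        rake.extract_keywords_from_text(text)
        doc_freq.update(set(rake.get_ranked_phrases()))   # count articles per candidate phrase

    concepts = []
    for phrase, n_articles in doc_freq.items():
        words = phrase.split()
        if any(w in bad_words for w in words):            # drop phrases built on non-concept words
            continue
        if (len(words) >= 3 and n_articles >= 3) or (len(words) == 2 and n_articles >= 6):
            concepts.append(phrase)
    return sorted(concepts)
```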

By this construction, the test sets with c  = 0 could lead to very rare contamination of the dataset. That is because each concept will have at least one edge in the final dataset. The effects, however, are negligible.

The distribution of concepts in the articles can be seen in Extended Data Fig. 1 . As an example, we show the extraction of concepts from five randomly chosen papers:

Memristor hardware-friendly reinforcement learning 70 : ‘actor critic algorithm’, ‘neuromorphic hardware implementation’, ‘hardware neural network’, ‘neuromorphic hardware system’, ‘neural network’, ‘large number’, ‘reinforcement learning’, ‘case study’, ‘pre training’, ‘training procedure’, ‘complex task’, ‘high performance’, ‘classical problem’, ‘hardware implementation’, ‘synaptic weight’, ‘energy efficient’, ‘neuromorphic hardware’, ‘control theory’, ‘weight update’, ‘training technique’, ‘actor critic’, ‘nervous system’, ‘inverted pendulum’, ‘explicit supervision’, ‘hardware friendly’, ‘neuromorphic architecture’, ‘hardware system’.

Automated deep learning analysis of angiography video sequences for coronary artery disease 71 : ‘deep learning approach’, ‘coronary artery disease’, ‘deep learning analysis’, ‘traditional image processing’, ‘deep learning’, ‘image processing’, ‘f1 score’, ‘video sequence’, ‘error rate’, ‘automated analysis’, ‘coronary artery’, ‘vessel segmentation’, ‘key frame’, ‘visual assessment’, ‘analysis method’, ‘analysis pipeline’, ‘coronary angiography’, ‘geometrical analysis’.

Demographic influences on contemporary art with unsupervised style embeddings 72 : ‘classification task’, ‘social network’, ‘data source’, ‘visual content’, ‘graph network’, ‘demographic information’, ‘social connection’, ‘visual style’, ‘historical dataset’, ‘novel information’

The utility of general domain transfer learning for medical language tasks 73 : ‘natural language processing’, ‘long short term memory’, ‘logistic regression model’, ‘transfer learning technique’, ‘short term memory’, ‘average f1 score’, ‘class classification model’, ‘domain transfer learning’, ‘weighted average f1 score’, ‘medical natural language processing’, ‘natural language process’, ‘transfer learning’, ‘f1 score’, ’natural language’, ’deep model’, ’logistic regression’, ’model performance’, ’classification model’, ’text classification’, ’regression model’, ’nlp task’, ‘short term’, ‘medical domain’, ‘weighted average’, ‘class classification’, ‘bert model’, ‘language processing’, ‘biomedical domain’, ‘domain transfer’, ‘nlp model’, ‘main model’, ‘general domain’, ‘domain model’, ‘medical text’.

Fast neural architecture construction using envelopenets 74 : ‘neural network architecture’, ‘neural architecture search’, ‘deep network architecture’, ‘image classification problem’, ‘neural architecture search method’, ‘neural network’, ‘reinforcement learning’, ‘deep network’, ‘image classification’, ‘objective function’, ‘network architecture’, ‘classification problem’, ‘evolutionary algorithm’, ‘neural architecture’, ‘base network’, ‘architecture search’, ‘training epoch’, ‘search method’, ‘image class’, ‘full training’, ‘automated search’, ‘generated network’, ‘constructed network’, ‘gpu day’.

Time gap between the generation of edges

We use articles from arXiv, which only goes back to the year 1992. However, the field of AI has of course existed since at least the 1960s 75 . This raises the question of whether the omission of the first 30–40 years of research has a crucial impact on the prediction task we formulate, specifically, whether edges that we consider as new might not be so new after all. Thus, in Extended Data Fig. 2 , we compute the time between the formation of edges between the same concepts, taking into account all edges or just the first edge. We see that the vast majority of edges are formed within short time periods, thus the omission of early publications has a negligible effect on our question. Of course, different questions might be crucially impacted by the early data; thus, a careful choice of the data source is crucial 61 .

Positive examples in the test dataset

Table 1 shows the number of positive cases within the 10 million examples in the 18 test datasets that are used for evaluation.

Publication rates in quantum physics

Another field of research that has gained a lot of attention in recent years is quantum physics. This field is also a strong adopter of arXiv. Thus, we analyse it in the same way as we did for AI in Fig. 1 . We find in Extended Data Fig. 3 no obvious exponential increase in papers per month. A detailed analysis of other domains is beyond the current scope. It will be interesting to investigate the growth rates of different scientific disciplines in more detail, especially given that exponential increase has been observed in several aspects of the science of science 3 , 76 .

Details on models M1–M8

What follows are more detailed explanations of the models presented in the main text. All code is available on GitHub. The feature importances of the best model, M1, are shown here; those of the other models are analysed in the respective workshop contributions (cited in the subsections below).

Details on M1

The best-performing solution is based on a blend of a tree-based gradient boosting approach and a graph neural network approach 37 . Extensive feature engineering was conducted to capture the centralities of the nodes, the proximity between node pairs and their evolution over time. The centrality of a node is captured by the number of neighbours and the PageRank score 77 , while the proximity between a node pair is derived using the Jaccard index. We refer the reader to ref. 37 for the list of all features and their feature importance.

The tree-based gradient boosting approach uses LightGBM 38 and applies heavy regularization to combat overfitting due to the scarcity of positive samples. The graph neural network approach employs a time-aware graph neural network to learn node representations on dynamic semantic networks. The feature importance of model M1, averaged over 18 datasets, is shown in Table 2 . It shows that the temporal features contribute substantially to the model performance, but the model remains strong even when they are removed. An example of the evolution of the training (from 2016 to 2019) and test set (2019 to 2021) for δ = 3, c = 25, w = 1 is shown in Extended Data Fig. 4 .
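As an illustration of the gradient-boosting half of this model, a heavily regularized LightGBM classifier can be trained on the hand-crafted pair features; the hyperparameter values below are placeholders, not the tuned settings of ref. 37, and the input arrays are assumptions.

```python
import lightgbm as lgb

def train_m1_gbm(X_train, y_train, X_candidates):
    """X_train: hand-crafted pair features (degrees, PageRank, Jaccard index,
    temporal deltas, ...); y_train: 1 if the pair connects within delta years."""
    model = lgb.LGBMClassifier(
        n_estimators=2000,
        learning_rate=0.02,
        num_leaves=31,
        reg_alpha=5.0,           # strong L1 regularization ...
        reg_lambda=5.0,          # ... and L2 regularization against overfitting
        min_child_samples=200,   # conservative splits given the scarce positive examples
        is_unbalance=True,       # account for the ~1-3% positive rate
    )
    model.fit(X_train, y_train)
    return model.predict_proba(X_candidates)[:, 1]   # ranking scores for candidate pairs
```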

Details on M2

The second method assumes that the probability that nodes u and v form an edge in the future is a function of the node features f ( u ), f ( v ) and some edge feature h ( u ,  v ). We chose node features f that capture popularity at the current time t 0 (such as degree, clustering coefficient 78 , 79 and PageRank 77 ). We also use these features’ first and second time derivatives to capture the evolution of the node’s popularity over time. After variable selection during training, we chose h to consist of the HOP-rec score (high-order proximity for implicit recommendation) 80 , 81 and a variation of the Dice similarity score 82 as a measure of similarity between nodes. In summary, we use 31 node features for each node, and two edge features, which gives 31 × 2 + 2 = 64 features in total. These features are then fed into a small multilayer perceptron (5 layers, each with 13 neurons) with ReLU activation.

Cold start is the problem that some nodes in the test set do not appear in the training set. Our strategy for a cold start is imputation. We say a node v is seen if it appeared in the training data, and unseen otherwise; similarly, we say that a node is born at time t if t is the first time stamp where an edge linking this node has appeared. The idea is that an unseen node is simply a node born in the future, so its features should look like a recently born node in the training set. If a node is unseen, then we impute its features as the average of the features of the nodes born recently. We found that with imputation during training, the test AUC scores across all models consistently increased by about 0.02. For a complete description of this method, we refer the reader to ref. 39 .
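The sketch below illustrates, with NetworkX, how such popularity features and their discrete time derivatives can be assembled from three yearly snapshots, and how unseen nodes can be imputed from recently born ones. It is a simplified illustration rather than the code of ref. 39, and all function names are assumptions.

```python
import numpy as np
import networkx as nx

def node_features(G):
    """Popularity features of every node at one time snapshot."""
    deg, clust, pr = dict(G.degree()), nx.clustering(G), nx.pagerank(G)
    return {v: np.array([deg[v], clust[v], pr[v]]) for v in G}

def features_with_derivatives(G_t0, G_t1, G_t2):
    """Features at the current year t0 plus first and second discrete time
    derivatives estimated from two earlier snapshots (t2 < t1 < t0)."""
    f0, f1, f2 = node_features(G_t0), node_features(G_t1), node_features(G_t2)
    out = {}
    for v in G_t0:
        x0 = f0[v]
        x1 = f1.get(v, np.zeros_like(x0))
        x2 = f2.get(v, np.zeros_like(x0))
        out[v] = np.concatenate([x0, x0 - x1, x0 - 2 * x1 + x2])   # value, 1st and 2nd derivative
    return out

def impute_unseen(features, recently_born):
    """Cold start: treat an unseen node as one born in the near future and give it
    the average feature vector of recently born nodes."""
    return np.mean([features[v] for v in recently_born], axis=0)
```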

Details on M3

This approach, detailed in ref. 40 , uses hand-crafted node features that have been captured in multiple time snapshots (for example, every year) and then uses an LSTM to benefit from learning the time dependencies of these features. The final configuration uses two main types of feature: node features including degree and degree of neighbours, and edge features including common neighbours. In addition, to balance the training data, the same number of positive and negative instances have been randomly sampled and combined.

One of the goals was to identify features that are very informative with a very low computational cost. We found that the degree centrality of the nodes is the most important feature, and the degree centrality of the neighbouring nodes and the degree of mutual neighbours gave us the best trade-off. As all of the extracted features’ distributions are highly skewed to the right, meaning most of the features take near zero values, using a power transform such as Yeo–Johnson 83 helps to make the distributions more Gaussian, which boosts the learning. Finally, for the link-prediction task, we saw that LSTMs perform better than fully connected neural networks.
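A minimal sketch of this preprocessing and sequence model is shown below, assuming the yearly feature snapshots are stacked into an array of shape (pairs, years, features); the hidden size and other choices are illustrative, not those of ref. 40.

```python
import torch
import torch.nn as nn
from sklearn.preprocessing import PowerTransformer

def yeo_johnson(X):
    """X: (n_pairs, n_years, n_features) yearly snapshots of the hand-crafted features.
    The Yeo-Johnson transform makes the right-skewed distributions more Gaussian."""
    n_pairs, n_years, n_feat = X.shape
    pt = PowerTransformer(method="yeo-johnson")
    return pt.fit_transform(X.reshape(-1, n_feat)).reshape(n_pairs, n_years, n_feat)

class LSTMLinkPredictor(nn.Module):
    def __init__(self, n_feat, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(n_feat, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                        # x: (batch, years, features)
        _, (h, _) = self.lstm(x)                 # final hidden state summarizes the time series
        return torch.sigmoid(self.head(h[-1]))   # probability of a new link
```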

Details on M4

The following two methods are based on a purely statistical analysis of the test data and are explained in detail in ref. 42 .

Preferential attachment. In the network analysis, we concluded that the growth of this dataset tends to maintain a heavy-tailed degree distribution, often associated with scale-free networks. As mentioned before, the γ value of the degree distribution is very close to 2, suggesting that preferential attachment 41 is probably the main organizational principle of the network. As such, we implemented a simple prediction model following this procedure. Preferential attachment scores in link prediction are often quantified as

s_PA(i, j) = k_i · k_j,

with k_i and k_j the degrees of nodes i and j. However, this assumes the scoring of links between nodes that are already connected to the network, that is k_i, k_j > 0, which is not the case for all the links we must score in the dataset. As a result, we define our preferential attachment model as

s_PA(i, j) = (k_i + 1) · (k_j + 1).

Using this simple model with no free parameters, we could score new links and compare the results with the other models. We immediately note that preferential attachment outperforms some learning-based models; it never manages to reach the top AUC, but it is extremely simple and has negligible computational cost.

Common neighbours. We explore another network-based approach to score the links. Indeed, while the preferential attachment model we derived performed well, it uses no information about the distance between i and j , which is a popular feature used in link-prediction methods 27 . As such, we decided to test a method known as common neighbours 18 . We define Γ ( i ) as the set of neighbours of node i and Γ ( i ) ∩ Γ ( j ) as the set of common neighbours between nodes i and j . We can easily score the nodes with

s_CN(i, j) = |Γ(i) ∩ Γ(j)|,

the intuition being that nodes that share a larger number of neighbours are more likely to be connected than distant nodes that do not share any.

Evaluating this score for each pair ( i , j ) in the dataset of unconnected pairs, which can be computed from the second power of the adjacency matrix, A 2 , we obtained an AUC that is sometimes higher and sometimes lower than that of preferential attachment, but is consistently quite close to the best learning-based models.
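Both statistical scores can be computed with a few sparse matrix operations. The sketch below assumes a binary SciPy adjacency matrix A and an array of candidate pairs; the '+1' offset is one simple way to handle zero-degree nodes, in the spirit of the modification described above.

```python
import numpy as np
from scipy import sparse

def preferential_attachment_scores(A, pairs):
    """s(i, j) = (k_i + 1) * (k_j + 1) for every candidate pair, so that nodes not
    yet connected to the network still receive distinguishable scores."""
    k = np.asarray(A.sum(axis=1)).ravel()
    return (k[pairs[:, 0]] + 1) * (k[pairs[:, 1]] + 1)

def common_neighbour_scores(A, pairs):
    """s(i, j) = |Gamma(i) ∩ Gamma(j)|, read off from the second power of A."""
    A = A.tocsr()
    A2 = A @ A                                            # (A^2)[i, j] counts shared neighbours
    return np.asarray(A2[pairs[:, 0], pairs[:, 1]]).ravel()
```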

Details on M5

This method is based on ref. 44 . First, ten groups of first-order graph features are extracted to get some neighbourhood and similarity properties from each pair of nodes: degree centrality of nodes, pair’s total number of neighbours, common neighbours index, Jaccard coefficient, Simpson coefficient, geometric coefficient, cosine coefficient, Adamic–Adar index, resource allocation index and preferential attachment index. They are obtained for three consecutive years to capture the temporal dynamics of the semantic network, leading to a total of 33 features. Second, principal component analysis 43 is applied to reduce the correlation between features, speed up the learning process and improve generalization, which results in a final set of seven latent variables. Lastly, a random forest classifier is trained (using a balanced dataset) to estimate the likelihood of new links between the AI concepts.

In this paper, a modification was made relative to the original formulation of the method 44 : two of the original features, average neighbour degree and clustering coefficient, were infeasible to extract for some of the tasks covered in this paper, as their computation can be heavy for such a large network, so they were discarded. Due to computational memory issues, it was not possible to run the model for some of the tasks covered in this study, and so those results are missing.
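A compact version of this pipeline can be written with scikit-learn; the sketch below assumes a class-balanced feature matrix of the first-order graph features described above and uses illustrative hyperparameters rather than those of ref. 44.

```python
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline

def train_m5(X_balanced, y_balanced, X_candidates):
    """X_balanced/y_balanced: first-order graph features (three consecutive yearly
    snapshots) on a class-balanced sample of pairs; X_candidates: pairs to score."""
    model = make_pipeline(
        PCA(n_components=7),   # compress the correlated features into 7 latent variables
        RandomForestClassifier(n_estimators=300, n_jobs=-1, random_state=0),
    )
    model.fit(X_balanced, y_balanced)
    return model.predict_proba(X_candidates)[:, 1]   # likelihood of a new link per pair
```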

Details on M6

The baseline solution for the Science4Cast competition was closely related to the model presented in ref. 17 . It uses 15 hand-crafted features of a pair of nodes v 1 and v 2 : the degrees of v 1 and v 2 in the current year and the two previous years (six properties); the total numbers of neighbours of v 1 and of v 2 in the current year and the two previous years (six properties); and the number of shared neighbours between v 1 and v 2 in the current year and the two previous years (three properties). These 15 features are the input of a neural network with four layers (15, 100, 10 and 1 neurons), intended to predict whether the nodes v 1 and v 2 will have w edges in the future. After training, the model computes the probability for all 10 million evaluation examples. This list is sorted and the AUC is computed.
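A minimal sketch of this baseline architecture in PyTorch is shown below; the training loop and optimizer are omitted, and the activation choices are assumptions since the text only fixes the layer sizes.

```python
import torch.nn as nn

# The 15 hand-crafted pair features feed a small fully connected network
# with layer sizes 15 -> 100 -> 10 -> 1.
baseline = nn.Sequential(
    nn.Linear(15, 100), nn.ReLU(),
    nn.Linear(100, 10), nn.ReLU(),
    nn.Linear(10, 1), nn.Sigmoid(),   # probability that the pair will have w edges
)
```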

Details on M7

The solution M7 was not part of the Science4Cast competition and therefore is not described in the corresponding proceedings; thus, we add more details here.

The most immediate way one can apply ML to this problem is by automating the detection of features. Quite simply, the baseline solution M6 is modified such that, instead of 15 hand-crafted features, the neural network is trained on features extracted from a graph embedding. We use two different embedding approaches. The first method employs node2vec (M7A) 45 , for which we use the implementation provided in the nodevectors Python package 84 . The second one uses the ProNE embedding (M7B) 46 , which is based on sparse matrix factorizations modulated by the higher-order Cheeger inequality 85 .

The embeddings generate a 32-dimensional representation for each node, resulting in edge representations in [0, 1]^64. These features are input into a neural network with two hidden layers of size 1,000 and 30. Like M6, the model computes the probability for evaluation examples to determine the ROC. We compare ProNE to node2vec, a common graph embedding method using a biased random walk procedure with return and in–out parameters, which greatly affect network encoding. Initial experiments used default values for a 64-dimensional encoding before inputting into the neural network. The higher variance in node2vec predictions is probably due to its sensitivity to hyperparameters. While ProNE is better suited for general multi-dataset link prediction, node2vec’s sensitivity may help identify crucial network features for predicting temporal evolution.
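The sketch below shows how such edge representations can be produced with the nodevectors package mentioned above, assuming its Node2Vec and ProNE classes with an n_components argument; function names are illustrative.

```python
import numpy as np
from nodevectors import Node2Vec, ProNE   # the nodevectors package referenced above

def embed_snapshot(G, dim=32, use_prone=False):
    """Fit a node embedding on one yearly snapshot G (a networkx graph)."""
    embedder = ProNE(n_components=dim) if use_prone else Node2Vec(n_components=dim)  # M7B / M7A
    embedder.fit(G)
    return embedder

def edge_representation(embedder, u, v):
    """Concatenate the two 32-dimensional node embeddings into one
    64-dimensional edge feature vector for the downstream neural network."""
    return np.concatenate([embedder.predict(u), embedder.predict(v)])
```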

Details on M8

This model, which is detailed in ref. 47 , does not use any hand-crafted features but learns them in a completely unsupervised manner. To do so, we extract various snapshots of the adjacency matrix through time, capturing graphs in the form of A t for t  = 1994, …, 2019. We then embed each of these graphs into 128-dimensional Euclidean space via node2vec 45 , 48 . For each node u in the semantic graph, we extract different 128-dimensional vector embeddings n u ( A 1994 ), …,  n u ( A 2019 ).

Transformers have performed extremely well in NLP tasks 49 ; thus, we apply them to learn the dynamics of the embedding vectors. We pre-train a transformer to help classify node pairs. For the transformer, the encoder and decoder had 6 layers each; we used 128 as the embedding dimension, 2,048 as the feed-forward dimension and 8-headed attention. This transformer acts as our feature extractor. Once we pre-train our transformer, we add a two-layer ReLU network with hidden dimension 128 as a classifier on top.
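The following is a simplified sketch of such a pair classifier in PyTorch: it encodes the per-year embedding sequences of both nodes with a transformer encoder and feeds a pooled representation to a two-layer ReLU head. It omits the unsupervised pre-training stage and the decoder, so it illustrates the idea rather than the exact model of ref. 47.

```python
import torch
import torch.nn as nn

class EmbeddingSequenceClassifier(nn.Module):
    """Transformer over the yearly node2vec embeddings n_u(A_1994), ..., n_u(A_2019)
    of a node pair, followed by a two-layer ReLU classification head."""
    def __init__(self, dim=128):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8,
                                           dim_feedforward=2048, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=6)
        self.head = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, seq_u, seq_v):
        # seq_u, seq_v: (batch, n_years, dim) embedding sequences of the two nodes
        h = self.encoder(torch.cat([seq_u, seq_v], dim=1))   # joint sequence for the pair
        return torch.sigmoid(self.head(h.mean(dim=1)))       # pooled representation -> link probability
```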

Data availability

All 18 datasets tested in this paper are available via Zenodo at https://doi.org/10.5281/zenodo.7882892 (ref. 86 ).

Code availability

All of the models and codes described above can be found via GitHub at https://github.com/artificial-scientist-lab/FutureOfAIviaAI (ref. 5 ) and a permanent Zenodo record at https://zenodo.org/record/8329701 (ref. 87 ).

Clauset, A., Larremore, D. B. & Sinatra, R. Data-driven predictions in the science of science. Science 355 , 477–480 (2017).

Evans, J. A. & Foster, J. G. Metaknowledge. Science 331 , 721–725 (2011).

Fortunato, S. et al. Science of science. Science 359 , eaao0185 (2018).

Wang, D. & Barabási, A.-L. The Science of Science (Cambridge Univ. Press, 2021).

Krenn, M. et al. FutureOfAIviaAI. GitHub https://github.com/artificial-scientist-lab/FutureOfAIviaAI (2023).

Brown, T. et al. Language models are few-shot learners. Adv. Neural Inf. Process. Syst. 33 , 1877–1901 (2020).

Rae, J. W. et al. Scaling language models: methods, analysis & insights from training gopher. Preprint at https://arxiv.org/abs/2112.11446 (2021).

Smith, S. et al. Using DeepSpeed and Megatron to train Megatron-Turing NLG 530B, a large-scale generative language model. Preprint at https://arxiv.org/abs/2201.11990 (2022).

Chowdhery, A. et al. Palm: scaling language modeling with pathways. Preprint at https://arxiv.org/abs/2204.02311 (2022).

Kojima, T., Gu, S. S., Reid, M., Matsuo, Y. & Iwasawa, Y. Large language models are zero-shot reasoners. Preprint at https://arxiv.org/abs/2205.11916 (2022).

Zhang, H., Li, L. H., Meng, T., Chang, K.-W. & Broeck, G. V. d. On the paradox of learning to reason from data. Preprint at https://arxiv.org/abs/2205.11502 (2022).

Rzhetsky, A., Foster, J. G., Foster, I. T. & Evans, J. A. Choosing experiments to accelerate collective discovery. Proc. Natl Acad. Sci. USA 112 , 14569–14574 (2015).

Foster, J. G., Rzhetsky, A. & Evans, J. A. Tradition and innovation in scientists’ research strategies. Am. Sociol. Rev. 80 , 875–908 (2015).

Van Eck, N. J. & Waltman, L. Text mining and visualization using vosviewer. Preprint at https://arxiv.org/abs/1109.2058 (2011).

Van Eck, N. J. & Waltman, L. in Measuring Scholarly Impact: Methods and Practice (eds Ding, Y. et al.) 285–320 (Springer, 2014).

Wang, Q. et al. Paperrobot: Incremental draft generation of scientific ideas. Preprint at https://arxiv.org/abs/1905.07870 (2019).

Krenn, M. & Zeilinger, A. Predicting research trends with semantic and neural networks with an application in quantum physics. Proc. Natl Acad. Sci. USA 117 , 1910–1916 (2020).

Liben-Nowell, D. & Kleinberg, J. The link-prediction problem for social networks. J. Am. Soc. Inf. Sci. Technol. 58 , 1019–1031 (2007).

Albert, I. & Albert, R. Conserved network motifs allow protein–protein interaction prediction. Bioinformatics 20 , 3346–3352 (2004).

Zhou, T., Lü, L. & Zhang, Y.-C. Predicting missing links via local information. Eur. Phys. J. B 71 , 623–630 (2009).

Kovács, I. A. et al. Network-based prediction of protein interactions. Nat. Commun. 10 , 1240 (2019).

Muscoloni, A., Abdelhamid, I. & Cannistraci, C. V. Local-community network automata modelling based on length-three-paths for prediction of complex network structures in protein interactomes, food webs and more. Preprint at bioRxiv https://doi.org/10.1101/346916 (2018).

Pech, R., Hao, D., Lee, Y.-L., Yuan, Y. & Zhou, T. Link prediction via linear optimization. Physica A 528 , 121319 (2019).

Lü, L., Pan, L., Zhou, T., Zhang, Y.-C. & Stanley, H. E. Toward link predictability of complex networks. Proc. Natl Acad. Sci. USA 112 , 2325–2330 (2015).

Guimerà, R. & Sales-Pardo, M. Missing and spurious interactions and the reconstruction of complex networks. Proc. Natl Acad. Sci. USA 106 , 22073–22078 (2009).

Ghasemian, A., Hosseinmardi, H., Galstyan, A., Airoldi, E. M. & Clauset, A. Stacking models for nearly optimal link prediction in complex networks. Proc. Natl Acad. Sci. USA 117 , 23393–23400 (2020).

Zhou, T. Progresses and challenges in link prediction. iScience 24 , 103217 (2021).

Krenn, M. et al. On scientific understanding with artificial intelligence. Nat. Rev. Phys. 4 , 761–769 (2022).

Rose, S., Engel, D., Cramer, N. & Cowley, W. in Text Mining: Applications and Theory (eds Berry, M. W. & Kogan, J.) Ch. 1 (Wiley, 2010).

Salatino, A. A., Thanapalasingam, T., Mannocci, A., Osborne, F. & Motta, E. The computer science ontology: a large-scale taxonomy of research areas. In Proc. Semantic Web–ISWC 2018: 17th International Semantic Web Conference Part II Vol. 17, 187–205 (Springer, 2018).

Salatino, A. A., Osborne, F., Thanapalasingam, T. & Motta, E. The CSO classifier: ontology-driven detection of research topics in scholarly articles. In Proc. Digital Libraries for Open Knowledge: 23rd International Conference on Theory and Practice of Digital Libraries Vol. 23, 296–311 (Springer, 2019).

Alstott, J., Bullmore, E. & Plenz, D. powerlaw: a Python package for analysis of heavy-tailed distributions. PLoS ONE 9 , e85777 (2014).

Fenner, T., Levene, M. & Loizou, G. A model for collaboration networks giving rise to a power-law distribution with an exponential cutoff. Soc. Netw. 29 , 70–80 (2007).

Broido, A. D. & Clauset, A. Scale-free networks are rare. Nat. Commun. 10 , 1017 (2019).

Fawcett, T. ROC graphs: notes and practical considerations for researchers. Pattern Recognit. Lett. 31 , 1–38 (2004).

Sun, Y., Wong, A. K. & Kamel, M. S. Classification of imbalanced data: a review. Int. J. Pattern Recognit. Artif. Intell. 23 , 687–719 (2009).

Lu, Y. Predicting research trends in artificial intelligence with gradient boosting decision trees and time-aware graph neural networks. In 2021 IEEE International Conference on Big Data (Big Data) 5809–5814 (IEEE, 2021).

Ke, G. et al. LightGBM: a highly efficient gradient boosting decision tree. In Proc. 31st International Conference on Neural Information Processing Systems 3149–3157 (Curran Associates Inc., 2017).

Tran, N. M. & Xie, Y. Improving random walk rankings with feature selection and imputation Science4Cast competition, team Hash Brown. In 2021 IEEE International Conference on Big Data (Big Data) 5824–5827 (IEEE, 2021).

Sanjabi, N. Efficiently predicting scientific trends using node centrality measures of a science semantic network. In 2021 IEEE International Conference on Big Data (Big Data) 5820–5823 (IEEE, 2021).

Barabási, A.-L. Network science. Phil. Trans. R. Soci. A 371 , 20120375 (2013).

Moutinho, J. P., Coutinho, B. & Buffoni, L. Network-based link prediction of scientific concepts—a Science4Cast competition entry. In 2021 IEEE International Conference on Big Data (Big Data) 5815–5819 (IEEE, 2021).

Jolliffe, I. T. & Cadima, J. Principal component analysis: a review and recent developments. Phil. Trans. R. Soc. A 374 , 20150202 (2016).

Valente, F. Link prediction of artificial intelligence concepts using low computational power. In 2021 IEEE International Conference on Big Data (Big Data) 5828–5832 (2021).

Grover, A. & Leskovec, J. node2vec: scalable feature learning for networks. In Proc. 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining 855–864 (ACM, 2016).

Zhang, J., Dong, Y., Wang, Y., Tang, J. & Ding, M. ProNE: fast and scalable network representation learning. In Proc. Twenty-Eighth International Joint Conference on Artificial Intelligence 4278–4284 (International Joint Conferences on Artificial Intelligence Organization, 2019).

Lee, H., Sonthalia, R. & Foster, J. G. Dynamic embedding-based methods for link prediction in machine learning semantic network. In 2021 IEEE International Conference on Big Data (Big Data) 5801–5808 (IEEE, 2021).

Liu, R. & Krishnan, A. PecanPy: a fast, efficient and parallelized python implementation of node2vec. Bioinformatics 37 , 3377–3379 (2021).

Vaswani, A. et al. Attention is all you need. In Proc. 31st International Conference on Neural Information Processing Systems 6000–6010 (Curran Associates Inc., 2017).

Zelenko, D., Aone, C. & Richardella, A. Kernel methods for relation extraction. J. Mach. Learn. Res. 3 , 1083–1106 (2003).

Bach, N. & Badaskar, S. A review of relation extraction. Literature Review for Language and Statistics II 2 , 1–15 (2007).

Salatino, A. A., Osborne, F. & Motta, E. How are topics born? Understanding the research dynamics preceding the emergence of new areas. PeerJ Comput. Sc. 3 , e119 (2017).

Salatino, A. A., Osborne, F. & Motta, E. AUGUR: forecasting the emergence of new research topics. In Proc. 18th ACM/IEEE on Joint Conference on Digital Libraries 303–312 (IEEE, 2018).

Battiston, F. et al. The physics of higher-order interactions in complex systems. Nat. Phys. 17 , 1093–1098 (2021).

Coutinho, B. C., Wu, A.-K., Zhou, H.-J. & Liu, Y.-Y. Covering problems and core percolations on hypergraphs. Phys. Rev. Lett. 124 , 248301 (2020).

Olivetti, E. A. et al. Data-driven materials research enabled by natural language processing and information extraction. Appl. Phys. Rev. 7 , 041317 (2020).

Lin, Z., Yin, Y., Liu, L. & Wang, D. SciSciNet: a large-scale open data lake for the science of science research. Sci. Data 10 , 315 (2023).

Azoulay, P. et al. Toward a more scientific science. Science 361 , 1194–1197 (2018).

Liu, H., Kou, H., Yan, C. & Qi, L. Link prediction in paper citation network to construct paper correlation graph. EURASIP J. Wirel. Commun. Netw. 2019 , 1–12 (2019).

Reisz, N. et al. Loss of sustainability in scientific work. New J. Phys. 24 , 053041 (2022).

Frank, M. R., Wang, D., Cebrian, M. & Rahwan, I. The evolution of citation graphs in artificial intelligence research. Nat. Mach. Intell. 1 , 79–85 (2019).

Newman, M. Networks (Oxford Univ. Press, 2018).

Kwon, D. et al. A survey of deep learning-based network anomaly detection. Cluster Comput. 22 , 949–961 (2019).

Pang, G., Shen, C., Cao, L. & Hengel, A. V. D. Deep learning for anomaly detection: a review. ACM Comput. Surv. 54 , 1–38 (2021).

Collobert, R. et al. Natural language processing (almost) from scratch. J. Mach. Learn. Res. 12 , 2493–2537 (2011).

LeCun, Y., Bengio, Y. & Hinton, G. Deep learning. Nature 521 , 436–444 (2015).

Krizhevsky, A., Sutskever, I. & Hinton, G. E. ImageNet classification with deep convolutional neural networks. Commun. ACM 60 , 84–90 (2017).

Mnih, V. et al. Human-level control through deep reinforcement learning. Nature 518 , 529–533 (2015).

Silver, D. et al. Mastering the game of Go with deep neural networks and tree search. Nature 529 , 484–489 (2016).

Wu, N., Vincent, A., Strukov, D. & Xie, Y. Memristor hardware-friendly reinforcement learning. Preprint at https://arxiv.org/abs/2001.06930 (2020).

Zhou, C. et al. Automated deep learning analysis of angiography video sequences for coronary artery disease. Preprint at https://arxiv.org/abs/2101.12505 (2021).

Huckle, N., Garcia, N. & Nakashima, Y. Demographic influences on contemporary art with unsupervised style embeddings. In Proc. Computer Vision–ECCV 2020 Workshops Part II Vol. 16, 126–142 (Springer, 2020).

Ranti, D. et al. The utility of general domain transfer learning for medical language tasks. Preprint at https://arxiv.org/abs/2002.06670 (2020).

Kamath, P., Singh, A. & Dutta, D. Fast neural architecture construction using envelopenets. Preprint at https://arxiv.org/abs/1803.06744 (2018).

Minsky, M. Steps toward artificial intelligence. Proc. IRE 49 , 8–30 (1961).

Bornmann, L., Haunschild, R. & Mutz, R. Growth rates of modern science: a latent piecewise growth curve approach to model publication numbers from established and new literature databases. Humanit. Soc. Sci. Commun. 8 , 224 (2021).

Brin, S. & Page, L. The anatomy of a large-scale hypertextual web search engine. Comput. Netw. ISDN Syst. 30 , 107–117 (1998).

Holland, P. W. & Leinhardt, S. Transitivity in structural models of small groups. Comp. Group Studies 2 , 107–124 (1971).

Watts, D. J. & Strogatz, S. H. Collective dynamics of ‘small-world’ networks. Nature 393 , 440–442 (1998).

Yang, J.-H., Chen, C.-M., Wang, C.-J. & Tsai, M.-F. HOP-rec: high-order proximity for implicit recommendation. In Proc. 12th ACM Conference on Recommender Systems 140–144 (2018).

Lin, B.-Y. OGB_collab_project. GitHub https://github.com/brucenccu/OGB_collab_project (2021).

Sorensen, T. A. A method of establishing groups of equal amplitude in plant sociology based on similarity of species content and its application to analyses of the vegetation on danish commons. Biol. Skar. 5 , 1–34 (1948).

Yeo, I.-K. & Johnson, R. A. A new family of power transformations to improve normality or symmetry. Biometrika 87 , 954–959 (2000).

Ranger, M. nodevectors. GitHub https://github.com/VHRanger/nodevectors (2021).

Bandeira, A. S., Singer, A. & Spielman, D. A. A Cheeger inequality for the graph connection Laplacian. SIAM J. Matrix Anal. Appl. 34 , 1611–1630 (2013).

Krenn, M. et al. Predicting the future of AI with AI. Zenodo https://doi.org/10.5281/zenodo.7882892 (2023).

Krenn, M. et al. FutureOfAIviaAI code. Zenodo https://zenodo.org/record/8329701 (2023).

Jia, T., Wang, D. & Szymanski, B. K. Quantifying patterns of research-interest evolution. Nat. Hum. Behav. 1 , 0078 (2017).

Acknowledgements

We thank IARAI Vienna and IEEE for supporting and hosting the IEEE BigData Competition Science4Cast. We are specifically grateful to D. Kreil, M. Neun, C. Eichenberger, M. Spanring, H. Martin, D. Geschke, D. Springer, P. Herruzo, M. McCutchan, A. Mihai, T. Furdui, G. Fratica, M. Vázquez, A. Gruca, J. Brandstetter and S. Hochreiter for helping to set up and successfully execute the competition and the corresponding workshop. We thank X. Gu for creating Fig. 2 , and M. Aghajohari and M. Sadegh Akhondzadeh for helpful comments on the paper. The work of H.L., R.S. and J.G.F. was supported by grant TWCF0333 from the Templeton World Charity Foundation. H.L. is additionally supported by NSF grant DMS-1952339. J.P.M. acknowledges the support of FCT (Portugal) through scholarship SFRH/BD/144151/2019. B.C. thanks the support from FCT/MCTES through national funds and when applicable co-funded EU funds under the project UIDB/50008/2020, and FCT through the project CEECINST/00117/2018/CP1495/CT0001. N.M.T. and Y.X. are supported by NSF grant DMS-2113468, the NSF IFML 2019844 award to the University of Texas at Austin, and the Good Systems Research Initiative, part of University of Texas at Austin Bridging Barriers.

Open access funding provided by Max Planck Society.

Author information

Authors and affiliations.

Max Planck Institute for the Science of Light (MPL), Erlangen, Germany

Mario Krenn

Instituto de Telecomunicações, Lisbon, Portugal

Lorenzo Buffoni, Bruno Coutinho & João P. Moutinho

University of Toronto, Toronto, Ontario, Canada

Sagi Eppel & Andrew Gritsevskiy

University of California Los Angeles, Los Angeles, CA, USA

Jacob Gates Foster, Harlin Lee & Rishi Sonthalia

Cavendish Laboratories, Cavendish, VT, USA

Andrew Gritsevskiy

Institute of Advanced Research in Artificial Intelligence (IARAI), Vienna, Austria

Andrew Gritsevskiy & Michael Kopp

Alpha 8 AI, Toronto, Ontario, Canada

Independent Researcher, Barcelona, Spain

Nima Sanjabi

University of Texas at Austin, Austin, TX, USA

Ngoc Mai Tran

Independent Researcher, Leiria, Portugal

Francisco Valente

University of Pennsylvania, Philadelphia, PA, USA

Yangxinyu Xie

University of California, San Diego, CA, USA

Contributions

M. Krenn and R.Y. initiated the research. M. Krenn and M. Kopp organized the Science4Cast competition. M. Krenn generated the datasets and initial codes. S.E. and H.L. analysed the network-theoretical properties of the semantic network. M. Krenn, L.B., B.C., J.G.F., A.G., H.L., Y.L., J.P.M., N.S., R.S., N.M.T., F.V., Y.X. and M. Kopp provided codes for the ten models. M. Krenn wrote the paper with input from all co-authors.

Corresponding author

Correspondence to Mario Krenn .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Peer review

Peer review information.

Nature Machine Intelligence thanks Alexander Belikov, and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Primary Handling Editor: Mirko Pieropan, in collaboration with the Nature Machine Intelligence team.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1

Number of concepts per article.

Extended Data Fig. 2

Time gap between the generation of edges. The left panel shows the time it takes to create a new edge between two vertices, and the right panel shows the time between the first and the second edge.

Extended Data Fig. 3

Publications in Quantum Physics.

Extended Data Fig. 4

Evolution of the AUC during training for Model M1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article.

Krenn, M., Buffoni, L., Coutinho, B. et al. Forecasting the future of artificial intelligence with machine learning-based link prediction in an exponentially growing knowledge network. Nat Mach Intell 5 , 1326–1335 (2023). https://doi.org/10.1038/s42256-023-00735-0

Download citation

Received : 21 January 2023

Accepted : 11 September 2023

Published : 16 October 2023

Issue Date : November 2023

DOI : https://doi.org/10.1038/s42256-023-00735-0

Envisioning the Future of Computing Prize 2023

The Envisioning the Future of Computing Prize invited MIT undergraduate and graduate students to share their ideas, aspirations, and vision for what they think a future propelled by advancements in computing holds.

Offered for the first time this year, the Institute-wide essay competition attracted nearly 60 submissions from students, including those majoring in mathematics, philosophy, electrical engineering and computer science, brain and cognitive sciences, chemical engineering, urban studies and planning, and management. The contest implemented a two-stage evaluation process wherein all essays were reviewed anonymously by a panel of faculty members from the MIT Schwarzman College of Computing and School of Humanities, Arts, and Social Sciences for the initial round. Three qualifiers were then invited to present their entries at an awards ceremony on May 8, followed by a Q&A with a judging panel and live in-person audience for the final round.

The grand prize of $10,000 was awarded to Robert Cunningham, a senior majoring in math and physics, for his paper, “Scribe AI,” on the implications of a personalized language model that is fine-tuned to predict an individual’s writing based on their past texts and emails. Told from the perspective of three fictional characters (Laura, founder of the tech startup ScribeAI, and Margaret and Vincent, a couple in college who are frequent users of the platform), readers gained insight into the societal shifts that take place and the unforeseen repercussions of the technology.

Two runners-up, awarded $5,000 each, were Gabrielle Kaili-May Liu, a senior majoring in mathematics with computer science, and brain and cognitive sciences, and Abigail Thwaites and Eliot Matthew Watkins, a graduate student team from the Department of Philosophy and Linguistics. In addition, 12 students were recognized with honorable mentions for their entries, with each receiving $500.

Meet the winners and read their essays below.

Grand Prize Winner

Scribe AI by Robert Cunningham (Mathematics and Physics)

Transforming Human Interactions with AI via Reinforcement Learning with Human Feedback by Gabrielle Kaili-May Liu (Mathematics with Computer Science; Brain and Cognitive Sciences)

The Future of Fact-Checking by Abigail Thwaites (Philosophy) & Eliot Matthew Watkins (Philosophy)

Honorable Mentions

  • The Perils and Promises of Closed Loop Engagement by David Bradford Ramsay (Media Lab)
  • A New Way Forward: The Internet & Data Economy by Alexa Reese Canaan (Technology and Policy Program)
  • The Empathic Revolution Using AI to Foster Greater Understanding by Fernanda De La Torre Romo (Brain and Cognitive Sciences)
  • Modeling International Solutions for the Climate Crisis by Samuel Florin (Mathematics)
  • Grounding AI- Envisioning Inclusive Computing for Soil Carbon Applications by Claire Gorman (Urban Studies and Planning)
  • Quantum Powered Personalized Pharmacogenetic Development and Distribution by Kevin Hansom (Sloan School of Management)
  • Machine Learning Driven Transformation of Electronic Health Records by Sharon Jiang (Electrical Engineering and Computer Science)
  • Considering an Anti-convenience Funding Body by Cassandra Lee (Media Lab)
  • Towards Personalized On-Demand Manufacturing by Martin Nisser (Electrical Engineering and Computer Science)
  • Revolutionizing Online Learning with Digital Twins by Andi Qu (Electrical Engineering and Computer Science)
  • Overcoming the False Trade-Off in Genomics: Privacy and Collaboration by Shuvom Sadhuka (Electrical Engineering and Computer Science)
  • Embodied-Carbon-Computing by Leonard Schrage (Urban Studies and Planning)

The Envisioning the Future of Computing Prize is presented by the Social and Ethical Responsibilities of Computing (SERC), a cross-cutting initiative within the MIT Schwarzman College of Computing, in collaboration with the MIT School of Humanities, Arts, and Social Sciences. Thank you to MAC3 Impact Philanthropies for their generous support of the prize this year.

May 31, 2024

Researchers are promoting a safer future with AI by strengthening algorithms against attack

by Amanda Norvelle, Texas A&M University

Trust is vital to the widespread acceptance of AI across industries, especially when safety is a concern. For example, people may be hesitant to ride in a self-driving car knowing that the AI running it can be hacked. One barrier to increasing trust is that the algorithms powering AI are vulnerable to such attacks.

Dr. Samson Zhou, assistant professor in the Department of Computer Science and Engineering at Texas A&M University, and Dr. David P. Woodruff, professor in the Computer Science Department at Carnegie Mellon University, hope to strengthen algorithms used by big data AI models against attacks. Big data AI models are scalable algorithms that are specifically designed to handle and analyze large volumes of data.

Zhou and Woodruff are a long way off from creating algorithms that are completely robust against attacks, but they aim to make progress.

"It's definitely a long-term goal to give people an algorithm that comes with a guarantee behind it," Woodruff said. "We'd like to be able to say, "We promise you that this algorithm is robust against adversaries," meaning that no matter how many queries you make to this algorithm it's still going to give you the correct answer," Woodruff said.

"People are scared to go into self-driving cars when they know an adversary can cause the car to have an accident," Zhou said. "We hope that our work will be one step in inspiring confidence towards algorithms."

Zhou and Woodruff's research focuses on a type of big data model called a streaming model. With a streaming model, information and insights must be gleaned from the data right away or they will be lost because all the data cannot be stored. Common examples of streaming models are apps that provide real-time information to users, like a public transportation app that shows the current location of buses on a route.

Challenges to creating secure algorithms

One challenge researchers face when trying to create a secure algorithm is randomness. Think of an algorithm as a set of instructions for AI. Randomness is included in these instructions to save space. However, when randomness is included, the engineers of an algorithm don't have a complete picture of the algorithm's inner workings, leaving the algorithm open to attack.

"Any algorithm that uses randomness can be attacked because the attacker kind of learns your randomness through its interaction with you" Woodruff said. "And if [the attacker] knows something about your randomness, it can find things to feed your algorithm and force it to fail."

Woodruff compared manipulating algorithms to manipulating coin tosses. "You might have a sequence of coin tosses in your algorithm, and that sequence is really good for solving most problems. But if the attacker knew that sequence of coin tosses, it could find exactly the right input that causes the result to be bad," Woodruff said.
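To make the idea tangible, here is a small, self-contained illustration (not taken from Zhou and Woodruff's work): a toy count-min sketch, a classic randomized streaming frequency estimator, can be forced to report a badly inflated count for an item it never saw by an attacker who knows its random hash seeds and feeds it specially chosen inputs.

```python
import numpy as np

class CountMinSketch:
    """A tiny randomized streaming frequency estimator (illustrative only)."""
    def __init__(self, width=64, depth=2, seed=0):
        rng = np.random.default_rng(seed)
        self.width = width
        self.salts = rng.integers(0, 2**31, size=depth)    # the algorithm's "coin tosses"
        self.table = np.zeros((depth, width), dtype=np.int64)

    def _cells(self, item):
        return [hash((int(s), item)) % self.width for s in self.salts]

    def add(self, item):
        for row, col in enumerate(self._cells(item)):
            self.table[row, col] += 1

    def estimate(self, item):
        return min(self.table[row, col] for row, col in enumerate(self._cells(item)))

cms = CountMinSketch()

# An attacker who knows the salts (the randomness) can search for items that land in
# exactly the same cells as a chosen target and stream only those items.
target_cells = cms._cells("target")
colliders = [x for x in range(200_000) if cms._cells(x) == target_cells]
for x in colliders:
    cms.add(x)

print(cms.estimate("target"))   # a large count, although "target" was never added
```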

There are also different types of attacks. Sometimes the only thing attackers know about an algorithm is how it responds to queries. In this case, attackers base future queries on the algorithm's previous output. This is called a black box attack. When attackers know the entire state of the algorithm, its inner workings and how it responds, that is a white box attack. Zhou and Woodruff want to defend against both.
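
As a toy illustration of these attack types (and of Woodruff's coin-toss analogy), the following Python sketch estimates a vector's squared length from a much smaller random summary, which works well on ordinary inputs. A white-box attacker who knows the random sign matrix, the algorithm's "coin tosses," can feed it a vector from the matrix's null space, and the estimate collapses to zero even though the true value is large. This is not the researchers' construction; the dimensions and names are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    n, k = 64, 8  # data dimension vs. much smaller sketch size

    # Linear sketch: summarize x by S @ x and estimate ||x||^2 from the summary.
    S = rng.choice([-1.0, 1.0], size=(k, n)) / np.sqrt(k)
    def estimate(x):
        return float(np.sum((S @ x) ** 2))

    # On a typical, non-adaptive input the estimate is in the right ballpark.
    x = rng.normal(size=n)
    print(round(float(x @ x), 1), round(estimate(x), 1))

    # White-box attack: knowing S, pick a vector orthogonal to all of its rows.
    _, _, vt = np.linalg.svd(S)   # the last rows of vt span the null space of S
    attack = 10.0 * vt[-1]
    print(round(float(attack @ attack), 1), round(estimate(attack), 6))  # true ~100, sketch ~0

A black-box attacker cannot read S directly, but because it chooses its queries adaptively it can, in principle, learn enough about the summary from the answers to construct a similarly damaging input, which is why Zhou and Woodruff aim to defend against both settings.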

"Attackers that know the internal parameters of an algorithm seem like much more powerful adversaries," Zhou said. "But we're actually able to show that there are still interesting things that can be done to defend against them."

Future research

In working toward algorithms that are robust against attack, Zhou and Woodruff plan to develop new connections between mathematics and theoretical computer science, and to draw ideas from cryptography (data encryption). Through their research, they hope to identify the principles underlying algorithmic vulnerabilities and to learn how to strengthen algorithms against attack while maintaining efficiency.

Zhou and Woodruff know it will be difficult to prove that an algorithm is robust against every possible attack and will still reliably give an accurate answer.

"Sometimes it's not possible to design algorithms to guarantee adversarial robustness," Zhou said. "Sometimes there is no way to promote adversarial robustness if you don't have enough space. In that case, we should stop trying to design algorithms that meet these guarantees and instead look for other ways around these problems."

Zhou and Woodruff ultimately hope to write a monograph based on their work.


Computer Science – My Choice for Future Career

  • Categories: Academic Interests Computer Science

Words: 636 | Page: 1 | 4 min read | Published: Jul 15, 2020



University of South Florida

College of Engineering

USF's Computer Science & Engineering Commemorates Past, and Contemplates Future, Technological Advances

  • May 30, 2024

Tampa, Florida – May 13, 2024 – The Department of Computer Science and Engineering proudly observes two significant milestones this year: the 35th anniversary of pioneering Internet access in Tampa and the 30th anniversary of the introduction of www.usf.edu, marking three decades of digital presence and innovation.

Commenting on these milestones, Dr. Lawrence Hall, Distinguished University Professor of Computer Science and Engineering, notes that USF has gone from point-to-point connections over phone lines to campus internet so fast that working with very large files means either being on campus or waiting, and waiting, on even a good home connection. Search engines such as Google and Bing now make it easy to find good information on the internet, and faculty and students interact online through Canvas, where they find information about new semesters and about Computer Science and Engineering at USF.

On May 3, 1989, USF introduced the Internet to Tampa, becoming one of the first institutions in the region to provide access to this technology. For many people in the Tampa Bay area and beyond, their first exposure to the Internet was as students at USF. This initiative not only facilitated academic research and collaboration but also laid the foundation for Tampa Bay's technological growth.

The initial connection was created in the Engineering Building (ENB) to connect the Departments of Computer Science and Engineering and Electrical Engineering to the fledgling Internet. The connection was paid for in part through internal funding, with the strong support of then-Vice President of Research Dr. George Newkome, underscoring USF's dedication to technological advancement and research.

Joseph Gomes, who is now the Director of Technology and Systems for the Healthcare Informatics Institute, was involved with the first Internet connectivity at USF. “Our first Internet connection was a small fraction of the bandwidth of what we have today. It was less than one megabit per second – about a hundredth of what people have in their homes now. Setting up the equipment the first time was complicated; we had to rely on paper manuals and the handful of people around the country at other universities to solve problems. Nobody here had done this before,” Gomes said.

About five years later in early 1994, USF launched the first version of www.usf.edu.

Dr. Richard Rauscher, now a Professor of Practice in the Department of Computer Science and Engineering but then a systems administrator in the College of Engineering, recalled the early days of www.usf.edu, saying, "When we first activated www.usf.edu, the web was a relatively small place, filled mostly with technical university types. Things we take for granted today were more difficult.  For example, we didn’t have digital photography. If we wanted to put a picture on the web, we had to take the photograph with a camera, get the film developed and scan it. I didn’t have access to a flatbed scanner, but we did have a 35mm slide scanner. So, I bought some slide film and took some pictures around campus for some of the earliest pages that we posted.”


A modern recreation of the first www.usf.edu web page

At the time www.usf.edu went online, there were only an estimated 2000 websites in the world.

"As we celebrate these significant milestones, we look eagerly to the future with plans to open a new college focused on artificial intelligence, cybersecurity, and computing," said Dr. Sudeep Sarkar, Chair of the Computer Science and Engineering department. "Over the last thirty-five years, the Internet and web have profoundly transformed society, reshaping how we communicate, work, and solve complex problems. The next thirty-five years are rife with opportunities for innovation and growth. At USF, we are dedicated to shaping the future of technology and continuing our tradition of excellence and leadership in the digital frontier."

AI technologies were used to proofread this article and to improve its structure and readability.


Daily Bulletin


Thursday, May 30, 2024

  • Internet pioneer Vint Cerf to deliver distinguished lecture
  • Engineering Society works to innovate the accessibility of student events
  • OHD launches new leadership programming for spring 2024
  • Leaders and alumni honoured at Women of the Year event

Editor: Brandon Sweet, University Communications, [email protected]

Vinton G. Cerf

Vinton G. Cerf, Vice President and Chief Internet Evangelist at Google, will deliver a Cheriton School of Computer Science Distinguished Public Lecture on Tuesday, June 11, entitled Internet: Past, Present and Future.

"We will cover about 70 years of past and future Internet “history” — beginning with the Arpanet and moving along the terrestrial Internet trajectory, present and emerging policy and technical challenges and finally move to the interplanetary Internet project," says the talk's abstract. "Along the way, we will encounter anecdotes and personal recollections of the Internet’s evolution and projections for the future. If there is time, we’ll touch on AI and heterogeneous and quantum computing. The Internet is more than a technology. It is a social and economic eco-system with many institutions created at need. It has brought to global attention the importance of accountability as well as personal privacy."

Vint Cerf is the co-designer of the TCP/IP protocols and the architecture of the Internet. He has served in executive positions at ICANN, the Internet Society, MCI, the Corporation for National Research Initiatives and the Defense Advanced Research Projects Agency. A former Stanford Professor and former member of the US National Science Board, he is also the past President of the Association for Computing Machinery, Emeritus Chairman of the Marconi Society and serves in advisory capacities at NIST, DOE, NSF, US Navy, JPL and NRO. He earned his B.S. in mathematics at Stanford and M.S. and Ph.D. degrees in computer science at UCLA. He is a member of both the US National Academies of Science and Engineering, the Worshipful Company of Information Technologists and the Worshipful Company of Stationers.

The lecture will take place in the Humanities Theatre from 2:30 p.m. to 4:00 p.m. The event is free but registration is required .

An EngSoc representative points to a bulletin board.

A message from the Disability Inclusion Team.  National AccessAbility Week is May 26 to June 1 and is just one opportunity to share initiatives that advance accessibility and disability inclusion. Share your initiative at Accessibility and Disability Inclusion Initiatives .

The Waterloo Engineering Society (A) has launched an innovative project to enhance event accessibility that focuses on continual improvement and responding to student needs. After student feedback called for more accessible and inclusive events, Engineering Society member Claire Thompson designed an advertising icon guide that could describe the accessibility of each unique event. This guide became part of the equity initiatives led by Maya Baboolal, VP Academic (A) in Winter 2024 and Fall 2024.

A set of accessibility icons developed by the Engineering Society for use on event posters.

With the support of Maia Tse, VP Communications (A), the icon guide was rolled out in advertising across social media and physical bulletin boards. While still a new project, student feedback has already led to changes to improve certain icon legibility and the wording of the legend. This feedback further highlights the crucial role of clear communication in event planning.

Maya described the Engineering Society’s future projects as an “iterative process of continually responding to students, especially as we grow the icon guide and develop instructions on how to label events to ensure accessibility and inclusion is a collaborative approach [throughout all Engineering Society roles]”. Another Engineering Society (A) member, Paige Ackerman, is already hard at work developing webforms to guide student leaders in selecting appropriate accessibility features for diverse events.

  A message from Organizational and Human Development (OHD).

You’re invited to explore Organizational and Human Development’s (OHD) new Leadership Lab series, an opportunity for Waterloo staff to join a comfortable and supportive space for people leaders (managers with direct reports) and emerging leaders (project leaders or those growing their leadership skills) to co-create informal peer communities. Using the Circles of Influence Framework to guide the conversation, participants will reflect on their leadership style and practices, identify where their control and influence lies with respect to a pre-selected issue, and learn from each other on how to approach arising challenges.

Each in-person meeting will focus on a specific theme connected to the University community. The themes for our June offerings are:  

June 17 : Emerging Leadership -  Personalizing Institutional Direction  

June 19 : People Leadership - Making Meaning out of Institutional Direction  

Refreshments will be provided at each gathering. Register on Portal for the date and theme that resonates with you.

The packed ballroom at Bingemans for the Waterloo Oktoberfest Women of the Year event.

By Jennifer Ferguson. This is an excerpt of an article originally published on Waterloo News .

For almost 50 years, Kitchener-Waterloo Oktoberfest has recognized the achievements of local women with the  Rogers Women of the Year Awards .

Following a sold-out ceremony on the evening of May 23, 2024, Kitchener-Waterloo Oktoberfest announced the winners and their accomplishments. This year’s recipients include a senior leader at the University of Waterloo, three alumni and a former member of Waterloo’s Board of Governors.  

“On behalf of the University of Waterloo, I’m pleased to congratulate my colleague Nenone Donaldson and all the inspiring women from our university community on their well-deserved awards and nominations,” said Vivek Goel, president and vice-chancellor of the University of Waterloo. “I commend their passion and contributions to their respective fields and to our community.”

Professional 40+: Nenone Donaldson

Nenone Donaldson

Read the rest of the article on Waterloo News


Register for the "From Targeting in Academia to Promoting Trust and Understanding" conference

Registration for the upcoming international conference, "From Targeting in Academia to Promoting Trust and Understanding," is now open. The conference will take place from June 27 to 28 at Federation Hall.

Link of the day

45 years ago:  Alien

When and Where

The  Student Health Pharmacy  (located in the lower level of the Student Life Centre) is offering flu shots with no appointments needed daily from 9:30 a.m. to 3:30 p.m. Call 519-746-4500 or extension 33784 for more info. COVID shots will be available on appointment basis only. You can register online at  studenthealthpharmacy.ca .

Warriors Youth Summer Camps.  Basketball, Baseball, Football, Hockey, Multi-Sport and Volleyball.  Register today!

Safeguarding Science workshop and more , throughout May and June. Public Safety Canada invites faculty, staff and students to attend a series of virtual event via MS Teams. Register to receive a link.

Food Truck Wednesday , Wednesday, May 8 to Wednesday, July 24, 11:30 a.m. to 2:30 p.m., Arts Quad.

Tri-Agencies webinar on Sensitive Technology Research and Affiliations of Concern (STRAC) policy (in French), Thursday, May 30, 1:00 p.m. to 2:30 p.m. Register.

Chemistry Seminar:   Advanced catalyst discovery for clean energy transformation using computational material design , featuring Samira Siahrostami, Associate Professor, Canada Research Chair, Department of Chemistry, Simon Fraser University, Thursday, May 30, 2:30 p.m., C2-361 reading room.

Sexual Violence Awareness Month Speaker Series , Thursday, May 30, 7:00 p.m. to 8:00 p.m., MS Teams.

Reunion 2024 , Friday, May 31 and Saturday, June 1.

Jewish Heritage Month reception : in recognition of Jewish Heritage Month, the Rohr Chabad Centre for Jewish Life and the University of Waterloo are pleased to host a reception for UWaterloo students, faculty, staff and alumni, Friday, May 31, 4:00 p.m. to 5:30 p.m., remarks at 4:30 p.m., Student Life Centre Black & Gold Room (SLC 2136.) Kosher refreshments will be served.

Velocity pitch competition application deadline , Sunday, June 2.

Pride Month flag-raising ceremony , Monday, June 3, 8:45 a.m. to 9:45 a.m., outside South Campus Hall.

Generative Artificial Intelligence and the Literature Review , Wednesday, June 5, 1:00 p.m. to 3:00 p.m., LIB 323 learning lab.

The Future-Ready Workforce Series: Building inclusive workplaces for 2SLGBTQIA+ students , Wednesday, June 5, 1:00 p.m. to 2:00 p.m.

Engineering Graduate Studies Fair , Wednesday, June 5 , 2:00 p.m. to 5:00 p.m., Engineering 7 second floor event space.

Inert Atmosphere Fabrication and RAC Capabilities Open House , Thursday, June 6, 11:45 a.m. to 2:00 p.m., Research Advancement Centre (RAC).

WISE Public Lecture, “The Role of Nuclear Energy in Ontario's Clean Economy,” by Danielle LaCroix (Sr. Director, Environment, Sustainability & Net Zero, Bruce Power), Friday, June 7, 1:30 p.m. to 2:30 p.m., W.G. Davis Computer Research Centre (DC), Room DC 1302, in-person and on Zoom. Register today.

Soapbox Science Kitchener-Waterloo , Sunday, June 9, 1:00 p.m. to 4:00 p.m., Victoria Park near the playground and picnic area. Hear from twelve STEM researchers as they take to their soapboxes with short discussions and fun demos. Questions from the public are encouraged!

University Senate meeting , Monday, June 10, 3:30 p.m., NH 3407 and Zoom.

Hallman Lecture featuring Rick Hansen: In motion towards building an inclusive and healthy world without barriers , Monday, June 10, 7:00 p.m. to 9:00 p.m., EXP 1689.

Spring 2024 Convocation , Tuesday, June 11 to Saturday, June 15.

School of Planning Graduation Luncheon , Tuesday, June 11, 12:30 p.m. to 2:45 p.m., Federation Hall.

Cheriton School of Computer Science Distinguished Lecture featuring Vint Cerf, "Internet: Past, Present and Future," Tuesday, June 11, 2:30 p.m. to 4:00 p.m., Humanities Theatre.

Staff Association open meeting featuring the Conflict Management and Human Rights Office , Thursday, June 13, 12 noon to 1:00 p.m., online.

Indigenous Community Concert | Sultans of String "Walking Through the Fire" , Monday, June 17, 6:00 p.m. to 9:00 p.m., Federation Hall.

How to Disconnect from Work (for staff), Tuesday, June 18, 2:00 p.m. to 3:00 p.m., online.

Upcoming service interruptions

Stay up to date on service interruptions, campus construction, and other operational changes on  the Plant Operations website . Upcoming service interruptions include:

  • Hagey Hall Room 1814 elevator maintenance , Thursday, May 30, 8:00 a.m. to 4:00 p.m., loss of service for 4 hours during the maintenance window.
  • School of Architecture fire alarm testing , Friday, May 31, 8:30 a.m. to 9:00 a.m., fire alarm will sound, building evacuation not required.
  • E7 Bicycle Shelter snow guard installation , Friday, May 31, areas around and under the E7 shelter will be blocked off for vehicle and personnel access.
  • Mathematics & Computer Building electrical shutdown , Saturday, June 1, 7:00 a.m. to 7:00 p.m., power will be disrupted to several areas in the building.
  • East Campus 3 electrical shutdown , Sunday, June 2, beginning at 8:00 a.m., power to the building will be shut off for approximately four hours.
  • East Campus 1, East Campus 2, East Campus 3 fire alarm testing , Monday, June 3, 10:00 a.m. to 11:00 a.m., fire alarm will sound, building evacuation not required.
  • Quantum-Nano Centre electrical panel shutdown , Wednesday, June 5, 6:00 a.m. to 8:00 a.m., affecting all floors of QNC. Occupants with sensitive equipment and/or research have been pre-notified. HVAC and controls could be affected in some areas of the building.
  • ESC and Chemistry 2 crane operation , Thursday, June 6, 6:00 a.m. to 5:00 p.m., Chemistry Road will be closed to all vehicular traffic from the DC Library to the C2/ESC bridge; pedestrians should use the alternate trail as marked; the ESC loading dock and parking stalls will be closed for the day.

The Daily Bulletin is published by Internal and Leadership Communications , part of University Communications




Computer Science > Machine Learning

Title: Application of Machine Learning in Agriculture: Recent Trends and Future Research Avenues

Abstract: Food production is a vital global concern and the potential for an agritech revolution through artificial intelligence (AI) remains largely unexplored. This paper presents a comprehensive review focused on the application of machine learning (ML) in agriculture, aiming to explore its transformative potential in farming practices and efficiency enhancement. To understand the extent of research activity in this field, statistical data have been gathered, revealing a substantial growth trend in recent years. This indicates that it stands out as one of the most dynamic and vibrant research domains. By introducing the concept of ML and delving into the realm of smart agriculture, including Precision Agriculture, Smart Farming, Digital Agriculture, and Agriculture 4.0, we investigate how AI can optimize crop output and minimize environmental impact. We highlight the capacity of ML to analyze and classify agricultural data, providing examples of improved productivity and profitability on farms. Furthermore, we discuss prominent ML models and their unique features that have shown promising results in agricultural applications. Through a systematic review of the literature, this paper addresses the existing literature gap on AI in agriculture and offers valuable information to newcomers and researchers. By shedding light on unexplored areas within this emerging field, our objective is to facilitate a deeper understanding of the significant contributions and potential of AI in agriculture, ultimately benefiting the research community.


COMMENTS

  1. How to Write the "Why Computer Science?" Essay

    The "Why This Major?" essay is an opportunity for you to dig deep into your motivations and passions for studying Computer Science. It's about sharing your 'origin story' of how your interest in Computer Science took root and blossomed. This part of your essay could recount an early experience with coding, a compelling Computer ...

  2. Where computing might go next

    Margaret O'Mara. October 27, 2021. If the future of computing is anything like its past, then its trajectory will depend on things that have little to do with computing itself. Technology does ...

  3. 15+ Computer Science Essay Examples to Help You Stand Out

    Here are ten examples of computer science essay topics to get you started: The impact of artificial intelligence on society: benefits and drawbacks. Cybersecurity measures in cloud computing systems. The Ethics of big data: privacy, bias, and Transparency. The future of quantum computing: possibilities and challenges.

  4. The present and future of AI

    They need to understand aspects of AI such as how their actions affect future recommendations. But for computer science students in college, I think a key thing that future engineers need to realize is when to demand input and how to talk across disciplinary boundaries to get at often difficult-to-quantify notions of safety, equity, fairness, etc.

  5. Top Trends in Computer Science and Technology

    Computer science is constantly evolving. Learn more about the latest trends in AI, cybersecurity, regenerative agritech, and other developing areas of the field. ... Computer science is among the most future-proof career fields — it's always changing and becomes more enmeshed in our lives every day. The latest computer technology can learn to ...

  6. 7 Important Computer Science Trends 2024-2027

    Which would position quantum computing as one of the most important computer science trends in the coming years. 2. Zero Trust becomes the norm. "Zero Trust" searches have increased by 642%. General awareness of this security concept started to take off in 2019.

  7. The Future of Computer Science (CS): [Essay Example], 495 words

    Most computers of the future will be low-cost, embedded, and massively distributed. They will be networked, and they will be physically embedded in our environments. These pervasive physical sensor networks will enable us to monitor in minute detail our own physical activities, and they will provide a seamless bridge to the virtual world.

  8. Envisioning the future of computing

    Robert Cunningham '23, a recent graduate in math and physics, is the winner of the Envisioning the Future of Computing Prize. Cunningham's essay was among nearly 60 entries submitted for the first-ever essay competition that challenged MIT students to imagine ways that computing technologies could improve our lives, as well as the pitfalls and dangers associated with them.

  9. Computers rule our lives. Where will they take us next?

    1975. The first in a series of articles on the computer revolution explores the technological breakthroughs bringing computers to the average person. 1975. Science News weighs the pros and cons of ...


  11. Essay on Future of Computer

    Students are often asked to write an essay on Future of Computer in their schools and colleges. And if you're also looking for the same, we have created 100-word, 250-word, and 500-word essays on the topic. ... One of the most anticipated advancements in the realm of computer science is quantum computing. Unlike classical computers, which use ...

  12. 160+ Computer Science Essay Topics for Your Next Assignment

    Computer Science Essay - Overview. A computer science essay is a written piece that explores various topics related to computer science. These include technical and complex topics, like software development and artificial intelligence. They can also explore more general topics, like the history and future of technology.

  13. The Future of Computer Science Essay

    The Future of Computer Science Essay. Computer Science, Software Engineering and Information Systems are international qualifications, enabling people to work globally, and in a very broad variety of roles. There is steady growth in demand for technically adept and flexible IT graduates. Declining student enrollment, while growth continues in ...

  14. Ideas That Created the Future: Classic Papers of Computer Science

    Classic papers by thinkers ranging from Aristotle and Leibniz to Norbert Wiener and Gordon Moore that chart the evolution of computer science. Ideas That Created the Future collects forty-six classic papers in computer science that map the evolution of the field. It covers all aspects of computer science: theory and practice, architectures and ...

  15. Essays on Computer Science

    2 pages / 700 words. Before talking about the future of computers it is important to know of Moore's Law. Moore's law is the prediction that the number of transistors on a chip will roughly double every two years, as their cost goes down. Decades after this law was created,... Computer Science Impact of Technology.

  16. The future of computing beyond Moore's Law

    Evolving technology in the absence of Moore's Law will require an investment now in computer architecture and the basic sciences (including materials science), to study candidate replacement materials and alternative device physics to foster continued technology scaling. Figure 1.

  17. The Future of Computers Technology: [Essay Example], 700 words

    Get custom essay. In 1953, a 100-word magnetic core memory was constructed by the Burroughs Corporation in order to provide the ENIAC with memory abilities. The ENIAC filled ~1,800 square feet by the end of its development in 1956. It was composed of nearly 20,000 vacuum tubes, 1,500 relays, 10,000 capacitors, and 70,000 resistors.


  19. Essay on the Future of Computers

    1. This essay sample was donated by a student to help the academic community. Papers provided by EduBirdie writers usually outdo students' samples. Cite this essay. Download. In nowadays, the technology that has more impact on human beings is the computer. The computer had changed our lives dramatically in the 20th century.

  20. Forecasting the future of artificial intelligence with machine learning

    In this work, we address the ambitious vision of developing a data-driven approach to predict future research directions 1.As new research ideas often emerge from connecting seemingly unrelated ...

  21. Envisioning the Future of Computing Prize 2023

    Offered for the first time this year, the Institute-wide essay competition attracted nearly 60 submissions from students, including those majoring in mathematics, philosophy, electrical engineering and computer science, brain and cognitive sciences, chemical engineering, urban studies and planning, and management.

  22. Quantum computers could soon speed the development of novel ...

    Classical computers can simulate how multiple photons interact, but only a few at a time, because of the computational intensity. So, Ting Rei Tan, a physicist at the University of Sydney, and colleagues used a trapped ion quantum computer to simulate how a single quantum "wave packet" of energy moves between neighboring molecules.

  23. Researchers are promoting a safer future with AI by strengthening

    Dr. Samson Zhou, assistant professor in the Department of Computer Science and Engineering at Texas A&M University, and Dr. David P. Woodruff, professor in the Computer Science Department at Carnegie Mellon University, hope to strengthen algorithms used by big data AI models against attacks.

  24. My Future In Computer Science Essay

    8 Pages. My Future in Computer Science. Computer Science is defined as being a branch of science that deals with the theory of computation or the design of computers. Computer science is considered an engineering discipline, and one of the fastest growing industries in the world. With the constant expectation for companies to be more advanced ...

  25. Don't Forget to Connect! Improving RAG with Graph-based Reranking

    We introduce G-RAG, a reranker based on graph neural networks (GNNs) between the retriever and reader in RAG. Our method combines both connections between documents and semantic information (via Abstract Meaning Representation graphs) to provide a context-informed ranker for RAG. G-RAG outperforms state-of-the-art approaches while having ...

  26. Computer Science

    Words: 636 | Page: 1 | 4 min read. Published: Jul 15, 2020. Computer Science amazes me as it encompasses logical and systematic workings to carry out tasks at a speed and efficiency beyond an individual's ability. With its foundation in Mathematics and logic, I thoroughly enjoy the process of programming as it provides a constant challenge to ...

  27. USF's Computer Science & Engineering Commemorates Past, and

    At the time www.usf.edu went online, there were only an estimated 2000 websites in the world. "As we celebrate these significant milestones, we look eagerly to the future with plans to open a new college focused on artificial intelligence, cybersecurity, and computing," said Dr. Sudeep Sarkar, Chair of the Computer Science and Engineering department.

  28. Thursday, May 30, 2024

    Internet pioneer Vint Cerf to deliver distinguished lecture Vinton G. Cerf, Vice President and Chief Internet Evangelist at Google, will deliver a Cheriton School of Computer Science Distinguished Public Lecture on Tuesday, June 11 entitled Internet: Past, Present and Future. "We will cover about 70 years of past and future Internet "history" — beginning with the Arpanet and

  29. Fake Science Generated by Paper Mills Will Get Worse With AI

    That may not sound big, but somewhere between 2 million and 6 million scientific papers are published every year, so 2% adds up to a lot. Some journals are more than 50%-generated by paper mills ...

  30. [2405.17465] Application of Machine Learning in Agriculture: Recent

    Food production is a vital global concern and the potential for an agritech revolution through artificial intelligence (AI) remains largely unexplored. This paper presents a comprehensive review focused on the application of machine learning (ML) in agriculture, aiming to explore its transformative potential in farming practices and efficiency enhancement. To understand the extent of research ...