How critical thinking can help you learn to code

Experienced programmers frequently say that being able to problem-solve effectively is one of the most important skills they use in their work. In programming as in life, problems don’t usually have magical solutions. Solving a coding problem often means looking at the problem from multiple perspectives, breaking it down into its constituent parts, and then considering (and maybe trying) several approaches to addressing it.

In short, being a good problem-solver requires critical thinking.

Today, we’ll discuss what critical thinking is, why it’s important, and how it can make you a better programmer.

We’ll cover:

  • What is critical thinking?
  • Why critical thinking is important in programming
  • How you can start thinking more critically
  • How to apply critical thinking today


How to think like a programmer — lessons in problem solving

by Richard Reis


If you’re interested in programming, you may well have seen this quote before:

“Everyone in this country should learn to program a computer, because it teaches you to think.” — Steve Jobs

You probably also wondered: what does it mean, exactly, to think like a programmer? And how do you do it?

Essentially, it’s all about a more effective way of solving problems.

In this post, my goal is to teach you that way.

By the end of it, you’ll know exactly what steps to take to be a better problem-solver.

Why is this important?

Problem solving is the meta-skill.

We all have problems. Big and small. How we deal with them is sometimes, well…pretty random.

Unless you have a system, this is probably how you “solve” problems (which is what I did when I started coding):

  • Try a solution.
  • If that doesn’t work, try another one.
  • If that doesn’t work, repeat step 2 until you luck out.

Look, sometimes you luck out. But that is the worst way to solve problems! And it’s a huge, huge waste of time.

The best way involves a) having a framework and b) practicing it.

“Almost all employers prioritize problem-solving skills first.
Problem-solving skills are almost unanimously the most important qualification that employers look for….more than programming languages proficiency, debugging, and system design.
Demonstrating computational thinking or the ability to break down large, complex problems is just as valuable (if not more so) than the baseline technical skills required for a job.” — Hacker Rank ( 2018 Developer Skills Report )

Have a framework

To find the right framework, I followed the advice in Tim Ferriss’ book on learning, “The 4-Hour Chef”.

It led me to interview two really impressive people: C. Jordan Ball (ranked 1st or 2nd out of 65,000+ users on Coderbyte), and V. Anton Spraul (author of the book “Think Like a Programmer: An Introduction to Creative Problem Solving”).

I asked them the same questions, and guess what? Their answers were pretty similar!

Soon, you too will know them.

Sidenote: this doesn’t mean they did everything the same way. Everyone is different. You’ll be different. But if you start with principles we all agree are good, you’ll get a lot further a lot quicker.

“The biggest mistake I see new programmers make is focusing on learning syntax instead of learning how to solve problems.” — V. Anton Spraul

So, what should you do when you encounter a new problem?

Here are the steps:

1. Understand

Know exactly what is being asked. Most hard problems are hard because you don’t understand them (hence why this is the first step).

How to know when you understand a problem? When you can explain it in plain English.

Do you remember being stuck on a problem, starting to explain it, and instantly seeing holes in your logic that you didn’t see before?

Most programmers know this feeling.

This is why you should write down your problem, doodle a diagram, or tell someone else about it (or something; some people use a rubber duck).

“If you can’t explain something in simple terms, you don’t understand it.” — Richard Feynman

2. Plan

Don’t dive right into solving without a plan (and somehow hope you can muddle your way through). Plan your solution!

Nothing can help you if you can’t write down the exact steps.

In programming, this means don’t start hacking straight away. Give your brain time to analyze the problem and process the information.

To get a good plan, answer this question:

“Given input X, what are the steps necessary to return output Y?”

Sidenote: Programmers have a great tool to help them with this… Comments!
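Here is a small sketch of what that comment-first planning can look like in Python. It is my illustration, not from the original post, and the word-counting task is a made-up example:

# Write the plan as comments first, then fill in the code beneath each step.
# Hypothetical task: given a string of text (input X), return a dict mapping
# each word to how often it appears (output Y).

def word_frequencies(text):
    # 1. Normalize the input so "Word" and "word" count as the same word.
    words = text.lower().split()

    # 2. Walk through the words and tally each one.
    counts = {}
    for word in words:
        counts[word] = counts.get(word, 0) + 1

    # 3. Return the tally (output Y).
    return counts

print(word_frequencies("the quick brown fox jumps over the lazy dog the"))
# {'the': 3, 'quick': 1, 'brown': 1, 'fox': 1, 'jumps': 1, 'over': 1, 'lazy': 1, 'dog': 1}

The plan (the comments) exists before any working code does; if you can’t write those three comments, you don’t yet understand the problem well enough to start typing.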

3. Divide

Pay attention. This is the most important step of all.

Do not try to solve one big problem. You will cry.

Instead, break it into sub-problems. These sub-problems are much easier to solve.

Then, solve each sub-problem one by one. Begin with the simplest. Simplest means you know the answer (or are closer to that answer).

After that, “simplest” means the sub-problem whose solution doesn’t depend on any other sub-problem being solved.

Once you’ve solved every sub-problem, connect the dots.

Connecting all your “sub-solutions” will give you the solution to the original problem. Congratulations!

This technique is a cornerstone of problem-solving. Remember it (read this step again, if you must).

“If I could teach every beginning programmer one problem-solving skill, it would be the ‘reduce the problem technique.’
For example, suppose you’re a new programmer and you’re asked to write a program that reads ten numbers and figures out which number is the third highest. For a brand-new programmer, that can be a tough assignment, even though it only requires basic programming syntax.
If you’re stuck, you should reduce the problem to something simpler. Instead of the third-highest number, what about finding the highest overall? Still too tough? What about finding the largest of just three numbers? Or the larger of two?
Reduce the problem to the point where you know how to solve it and write the solution. Then expand the problem slightly and rewrite the solution to match, and keep going until you are back where you started.” — V. Anton Spraul
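To make that reduction concrete, here is one possible way it might play out in Python. This is my sketch, not Spraul’s code, building up from “the larger of two numbers” to “the third highest in a list of ten”:

# Step 1: the larger of two numbers -- a sub-problem we already know how to solve.
def larger(a, b):
    return a if a > b else b

# Step 2: expand slightly -- the largest in a list of any length.
def largest(numbers):
    result = numbers[0]
    for n in numbers[1:]:
        result = larger(result, n)
    return result

# Step 3: expand again -- the third highest, by stripping off the top two first.
def third_highest(numbers):
    remaining = list(numbers)
    for _ in range(2):  # remove the highest, then the second highest
        remaining.remove(largest(remaining))
    return largest(remaining)

print(third_highest([4, 17, 9, 23, 8, 15, 42, 16, 1, 30]))  # 23

Each function only has to solve a problem slightly bigger than the one before it, which is exactly the point of the technique.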

4. Stuck?

By now, you’re probably sitting there thinking “Hey Richard... That’s cool and all, but what if I’m stuck and can’t even solve a sub-problem?”

First off, take a deep breath. Second, that’s fair.

Don’t worry though, friend. This happens to everyone!

The difference is that the best programmers/problem-solvers are more curious about bugs and errors than irritated by them.

In fact, here are three things to try when facing a whammy:

  • Debug: Go step by step through your solution trying to find where you went wrong. Programmers call this debugging (in fact, this is all a debugger does).
“The art of debugging is figuring out what you really told your program to do rather than what you thought you told it to do.” — Andrew Singer
  • Reassess: Take a step back. Look at the problem from another perspective. Is there anything that can be abstracted to a more general approach? (A short sketch of this idea appears just after this list.)
“Sometimes we get so lost in the details of a problem that we overlook general principles that would solve the problem at a more general level. […]
The classic example of this, of course, is the summation of a long list of consecutive integers, 1 + 2 + 3 + … + n, which a very young Gauss quickly recognized was simply n(n+1)/2, thus avoiding the effort of having to do the addition.” — C. Jordan Ball

Sidenote: Another way of reassessing is starting anew. Delete everything and begin again with fresh eyes. I’m serious. You’ll be dumbfounded at how effective this is.

  • Research: Ahh, good ol’ Google. You read that right. No matter what problem you have, someone has probably solved it. Find that person/solution. In fact, do this even if you solved the problem! (You can learn a lot from other people’s solutions.)

Caveat: Don’t look for a solution to the big problem. Only look for solutions to sub-problems. Why? Because unless you struggle (even a little bit), you won’t learn anything. If you don’t learn anything, you wasted your time.
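Picking up the Reassess idea above, here is a tiny sketch (my own illustration, not from the interviews) of what trading a brute-force approach for a more general principle can look like in code, using the Gauss sum that Ball mentions:

# Brute force: add the integers 1..n one by one.
def sum_loop(n):
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

# Reassessed: the same answer from Gauss's closed form n(n+1)/2, no loop needed.
def sum_formula(n):
    return n * (n + 1) // 2

n = 1000
assert sum_loop(n) == sum_formula(n) == 500500

Same result, but the second version comes from stepping back and spotting the general pattern instead of grinding through the details.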

5. Practice

Don’t expect to be great after just one week. If you want to be a good problem-solver, solve a lot of problems!

Practice. Practice. Practice. It’ll only be a matter of time before you recognize that “this problem could easily be solved with <insert concept here>.”

How to practice? There are options out the wazoo!

Chess puzzles, math problems, Sudoku, Go, Monopoly, video-games, cryptokitties, bla… bla… bla….

In fact, a common pattern amongst successful people is their habit of practicing “micro problem-solving.” For example, Peter Thiel plays chess, and Elon Musk plays video-games.

“Byron Reeves said ‘If you want to see what business leadership may look like in three to five years, look at what’s happening in online games.’
Fast-forward to today. Elon [Musk], Reid [Hoffman], Mark Zuckerberg and many others say that games have been foundational to their success in building their companies.” — Mary Meeker ( 2017 internet trends report )

Does this mean you should just play video-games? Not at all.

But what are video-games all about? That’s right, problem-solving!

So, what you should do is find an outlet to practice. Something that allows you to solve many micro-problems (ideally, something you enjoy).

For example, I enjoy coding challenges. Every day, I try to solve at least one challenge (usually on Coderbyte ).

Like I said, all problems share similar patterns.

That’s all folks!

Now, you know better what it means to “think like a programmer.”

You also know that problem-solving is an incredible skill to cultivate (the meta-skill).

As if that wasn’t enough, notice how you also know what to do to practice your problem-solving skills!

Phew… Pretty cool right?

Finally, I wish you encounter many problems.

You read that right. At least now you know how to solve them! (also, you’ll learn that with every solution, you improve).

“Just when you think you’ve successfully navigated one obstacle, another emerges. But that’s what keeps life interesting.[…]
Life is a process of breaking through these impediments — a series of fortified lines that we must break through.
Each time, you’ll learn something.
Each time, you’ll develop strength, wisdom, and perspective.
Each time, a little more of the competition falls away. Until all that is left is you: the best version of you.” — Ryan Holiday ( The Obstacle is the Way )

Now, go solve some problems!

And best of luck!

Special thanks to C. Jordan Ball and V. Anton Spraul. All the good advice here came from them.


OPINION article

Some Evidence on the Cognitive Benefits of Learning to Code

Ronny Scherer

  • 1 Centre for Educational Measurement, Faculty of Educational Sciences, University of Oslo, Oslo, Norway
  • 2 Department of Education and Quality in Learning, Unit for Digitalisation and Education, Kongsberg, Norway
  • 3 Department of Biology, Humboldt University of Berlin, Berlin, Germany

Introduction

Computer coding—an activity that involves the creation, modification, and implementation of computer code and exposes students to computational thinking—is an integral part of today's education in science, technology, engineering, and mathematics (STEM) ( Grover and Pea, 2013 ). As technology is advancing, coding is becoming a necessary process and much-needed skill to solve complex scientific problems efficiently and reproducibly, ultimately elevating the careers of those who master the skill. With many countries around the world launching coding initiatives and integrating computational thinking into the curricula of higher education, secondary education, primary education, and kindergarten, the question arises, what lies behind this enthusiasm for learning to code? Part of the reasoning is that learning to code may ultimately aid students' learning and acquiring of skills in domains other than coding. Researchers, policy-makers, and leaders in the field of computer science and education have made ample use of this argument to attract students into computer science, bring to attention the need for skilled programmers, and make coding compulsory for students. Bill Gates once stated that “[l]earning to write programs stretches your mind, and helps you think better, creates a way of thinking about things that I think is helpful in all domains” (2013). Similar to the claims surrounding chess instruction, learning Latin, video gaming, and brain training ( Sala and Gobet, 2017 ), this so-called “transfer effect” assumes that students learn a set of skills during coding instruction that are also relevant for solving problems in mathematics, science, and other contexts. Despite this assumption and the claims surrounding transfer effects, the evidence backing them seems to stand on shaky legs—a recently published paper even claimed that such evidence does not exist at all ( Denning, 2017 ), yet without reviewing the extant body of empirical studies on the matter. Moreover, simply teaching coding does not ensure that students are able to transfer the knowledge and skills they have gained to other situations and contexts—in fact, instruction needs to be designed for fostering this transfer ( Grover and Pea, 2018 ).

In this opinion paper, we (a) argue that learning to code involves thinking processes similar to those in other domains, such as mathematical modeling and creative problem solving, (b) highlight the empirical evidence on the cognitive benefits of learning computer coding that has bearing on this long-standing debate, and (c) describe several criteria for documenting these benefits (i.e., transfer effects). Despite the positive evidence suggesting that these benefits may exist, we argue that the transfer debate has not yet been settled.

Computer Coding as Problem Solving

Computer coding comprises activities to create, modify, and evaluate computer code along with the knowledge about coding concepts and procedures (Tondeur et al., 2019). Ultimately, computer science educators consider it a vehicle for teaching computational thinking through, for instance, (a) abstraction and pattern generalization, (b) systematic information processing, (c) symbol systems and representations, (d) algorithmic thinking, (e) problem decomposition, and (f) debugging and systematic error detection (Grover and Pea, 2013). These skills share considerable similarities with general problem solving and problem solving in specific domains (Shute et al., 2017). Drawing from the “theory of common elements,” one may therefore expect possible transfer effects between coding and problem solving skills (Thorndike and Woodworth, 1901). For instance, creative problem solving requires students to encode, recognize, and formulate the problem (preparation phase), represent the problem (incubation phase), search for and find solutions (illumination phase), and evaluate the creative product and monitor the process of creative activities (verification phase)—activities that also play a critical role in coding (Clements, 1995; Grover and Pea, 2013). Similarly, solving problems through mathematical modeling requires students to decompose a problem into its parts (e.g., variables), understand their relations (e.g., functions), use mathematical symbols to represent these relations (e.g., equations), and apply algorithms to obtain a solution—activities mimicking the coding process. These two examples illustrate that the processes involved in coding are close to those involved in performing skills outside the coding domain (Popat and Starkey, 2019). This observation has motivated researchers and educators to hypothesize transfer effects of learning to code, and, in fact, some studies found positive correlations between coding skills and other skills, such as information processing, reasoning, and mathematical skills (Shute et al., 2017). Nevertheless, despite the conceptual backing of such transfer effects, what evidence exists to back them empirically?

Cognitive Benefits of Learning Computer Coding

Despite the conceptual argument that computer coding engages students in general problem-solving activities and may ultimately be beneficial for acquiring cognitive skills beyond coding, the empirical evidence backing these transfer effects is diverse (Denning, 2017). While some experimental and quasi-experimental studies documented moderate to large effects of coding interventions on skills such as reasoning, creative thinking, and mathematical modeling, other studies did not find support for any transfer effect. Several research syntheses were therefore aimed at clarifying and explaining this diversity.

In 1991, Liao and Bright reviewed 65 empirical studies on the effects of learning-to-code interventions on measures of cognitive skills (Liao and Bright, 1991). Drawing from the published literature between 1960 and 1989, the authors included experimental, quasi-experimental, and pre-experimental studies in classrooms with a control group (non-programming) and a treatment group (programming). The primary studies had to provide quantitative information about the effectiveness of the interventions on a broad range of cognitive skills, such as planning, reasoning, and metacognition. Studies that presented only correlations between programming and other cognitive skills were excluded. The interventions focused on learning the programming languages Logo, BASIC, Pascal, and mixtures thereof. This meta-analysis resulted in a positive effect size quantified as the well-known Cohen’s d coefficient, indicating that control group and experimental group average gains in cognitive skills differed by 0.41 standard deviations. Supporting the existence of transfer effects, this evidence indicated that learning coding aided the acquisition of other skills to a considerable extent. Although this meta-analysis was ground-breaking at the time, transferring it into today’s perspective on coding and transfer is problematic for several reasons: First, during the last three decades, the tools used to engage students in computer coding have changed substantially, and visual programming languages such as Scratch simplify the creation and understanding of computer code. Second, Liao and Bright included any cognitive outcome variable without considering possible differences in the transfer effects between skills (e.g., reasoning may be enhanced more than reading skills). Acknowledging this limitation, Liao (2000) performed a second, updated meta-analysis summarizing the results of 22 studies and found strong effects on coding skills (d̄ = 2.48), yet insignificant effects on creative thinking (d̄ = −0.13). Moderate effects occurred for critical thinking, reasoning, and spatial skills (d̄ = 0.37–0.58).

Drawing from a pool of 105 intervention studies and 539 reported effects, Tondeur et al. (2019) put the question of transfer effects to a new test. Their meta-analysis included experimental and quasi-experimental intervention studies with pretest-posttest and posttest-only designs. Each educational intervention had to include at least one control group and at least one treatment group with a design that allowed for studying the effects of coding (e.g., treatment group: intervention program of coding with Scratch®, control group: no coding intervention at all; please see the meta-analysis for more examples of study designs). Finally, the outcome measures were performance-based measures, such as the Torrance Test of Creative Thinking or intelligence tests. This meta-analysis showed that learning to code had a positive and strong effect on coding skills (ḡ = 0.75) and a positive and medium effect on cognitive skills other than coding (ḡ = 0.47). The authors distinguished further between the different types of cognitive skills and found a range of effect sizes, ḡ = −0.02–0.73 (Figure 1). Ultimately, they documented the largest effects for creative thinking, mathematical skills, metacognition, reasoning, and spatial skills (ḡ = 0.37–0.73). At the same time, these effects were context-specific and depended on the study design features, such as randomization and the treatment of control groups.


Figure 1 . Effect sizes of learning-to-code interventions on several cognitive skills and their 95% confidence intervals ( Tondeur et al., 2019 ). The effect sizes represent mean differences in the cognitive skill gains between the control and experimental groups in units of standard deviations (Hedges' g ).
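For readers unfamiliar with these effect-size metrics, the standard textbook formulations of Cohen's d and Hedges' g are given below. They are provided as general background only and are not taken from the cited meta-analyses:

d = \frac{\bar{X}_{T} - \bar{X}_{C}}{s_{\text{pooled}}},
\qquad
s_{\text{pooled}} = \sqrt{\frac{(n_T - 1)\, s_T^{2} + (n_C - 1)\, s_C^{2}}{n_T + n_C - 2}},
\qquad
g \approx d \left( 1 - \frac{3}{4(n_T + n_C) - 9} \right)

Here the T and C subscripts denote the treatment and control groups; Hedges' g applies a small-sample correction to Cohen's d, which is why the two values are close for large studies.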

These research syntheses provide some evidence for the transfer effects of learning to code on other cognitive skills—learning to code may indeed have cognitive benefits. At the same time, as the evidence base included some study designs that deviated from randomized controlled trials, strictly causal conclusions (e.g., “Students’ gains in creativity were caused by the coding intervention.”) cannot be drawn. Instead, one may conclude that learning to code was associated with improvements in measures of other skills. Moreover, the evidence does not indicate that transfer just “happens”; rather, it must be facilitated and trained explicitly (Grover and Pea, 2018). This represents a “cost” of transfer in the context of coding: among others, teaching for transfer requires sufficient teaching time; student-centered, cognitively activating, supportive, and motivating learning environments; and teacher training—in fact, possible transfer effects can be moderated by these instructional conditions (e.g., Gegenfurtner, 2011; Yadav et al., 2017; Waite et al., 2020; Beege et al., 2021). The extant body of research on fostering computational thinking through teaching programming suggests that problem-based learning approaches that involve information processing, scaffolding, and reflection activities are effective ways to promote the near transfer of coding (Lye and Koh, 2014; Hsu et al., 2018). Besides the cost of effective instructional designs, another cost refers to the cognitive demands of the transfer: existing models of transfer suggest that the more similar the tasks during the instruction in one domain (e.g., coding) are to those in another domain (e.g., mathematical problem solving), the more likely students can transfer their knowledge and skills between domains (Taatgen, 2013). Mastering this transfer involves additional cognitive skills, such as executive functioning (e.g., switching between tasks) and metacognition (e.g., recognizing similar tasks and solution patterns; Salomon and Perkins, 1987; Popat and Starkey, 2019). It is therefore key to further investigate the conditions and mechanisms underlying the possible transfer of the skills students acquire and the knowledge they gain during coding instruction; carefully designed learning interventions and experimental studies that include the teaching, mediation, and assessment of transfer are needed.

Challenges With Measuring Cognitive Benefits

Despite the promising evidence on the cognitive benefits of learning to code, the existing body of research still needs to address several challenges to detect and document transfer effects—these challenges include, but are not limited to, those discussed by Tondeur et al. (2019):

• Measuring coding skills. To identify the effects of learning-to-code interventions on coding skills, reliable and valid measures of these skills (e.g., performance scores) must be included. These measures allow researchers to establish baseline effects, that is, the effects on the skills trained during the intervention ( Melby-Lervåg et al., 2016 ). However, the domain of computer coding largely lacks measures showing sufficient quality ( Tang et al., 2020 ).

• Measuring other cognitive skills. Next to the measures of coding skills, measures of other cognitive skills must be administered to trace whether coding interventions are beneficial for learning skills outside the coding domain and ultimately document transfer effects. This design allows researchers to examine both near and far transfer effects and to test whether gains in cognitive skills may be caused by gains in coding skills ( Melby-Lervåg et al., 2016 ).

• Implementing experimental research designs. To detect and interpret intervention effects over time, pre- and post-test measures of coding and other cognitive skills are taken, the assignment to the experimental group(s) is random, and students in the control group(s) do not receive the coding intervention. Existing meta-analyses examining the near and far transfer effects of coding have shown that these design features play a pivotal, moderating role, and the effects tend to be lower for randomized experimental studies with active control groups (e.g., Liao, 2000; Scherer et al., 2019, 2020). Scholars in the field of transfer in education have emphasized the need to take into account more aspects related to transfer than only changes in scores between the pre- and post-tests. These aspects include, for instance, continuous observations and tests of possible transfer over longer periods of time and qualitative measures of knowledge application that could make visible students’ ability to learn new things and to solve (new) problems in different types of situations (Bransford and Schwartz, 1999; Lobato, 2006).

Ideally, research studies address all of these challenges; however, in reality, researchers must examine the consequences of the departures from a well-structured experimental design and evaluate the validity of the resultant transfer effects.

Overall, the evidence supporting the cognitive benefits of learning to code is promising. In the first part of this opinion paper, we argued that coding skills and other skills, such as creative thinking and mathematical problem solving, share skillsets and that these common elements form the ground for expecting some degree of transfer from learning to code into other cognitive domains (e.g., Shute et al., 2017 ; Popat and Starkey, 2019 ). In fact, the existing meta-analyses supported the possible existence of this transfer for the two domains. This reasoning assumes that students engage in activities during coding through which they acquire a set of skills that could be transferred to other contexts and domains (e.g., Lye and Koh, 2014 ; Scherer et al., 2019 ). The specific mechanisms and beneficial factors of this transfer, however, still need to be clarified.

The evidence we have presented in this paper suggests that students’ performance on tasks in several domains other than coding is not enhanced to the same extent—that is, acquiring some cognitive skills other than coding is more likely than acquiring others. We argue that the overlap of skillsets between coding and skills in other domains may differ across domains, and the extent to which transfer seems likely may depend on the degree of this overlap (i.e., the common elements), next to other key aspects, such as task designs, instruction, and practice. Despite the evidence that cognitive skills may be promoted, the direct transfer of what is learned through coding is complex and does not happen automatically. To shed further light on the possible causes of why transferring coding skills to situations in which students are required to, for instance, think creatively may be more likely than transferring coding skills to situations in which students are required to comprehend written text as part of literacy, researchers are encouraged to continue testing these effects with carefully designed intervention studies and valid measures of coding and other cognitive skills. The transfer effects, although large enough to be significant, establish some evidence on the relation between learning to code and gains in other cognitive skills; however, for some skills, they are too modest to settle the ongoing debate over whether transfer effects are only due to the learning of coding or exist at all. More insights into successful transfer are needed to inform educational practice and policy-making about the opportunities to leverage the potential that lies within the teaching of coding.

Author Contributions

RS conceived the idea of the paper and drafted the manuscript. FS and BS-S drafted additional parts of the manuscript and performed revisions. All authors contributed to the article and approved the submitted version.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher's Note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

References

Beege, M., Schneider, S., Nebel, S., Zimm, J., Windisch, S., and Rey, G. D. (2021). Learning programming from erroneous worked-examples. Which type of error is beneficial for learning? Learn. Instruct. 75:101497. doi: 10.1016/j.learninstruc.2021.101497


Bransford, J. D., and Schwartz, D. L. (1999). Rethinking transfer: a simple proposal with multiple implications. Rev. Res. Educ. 24, 61–100. doi: 10.3102/0091732X024001061

Clements, D. H. (1995). Teaching creativity with computers. Educ. Psychol. Rev. 7, 141–161. doi: 10.1007/BF02212491

Denning, P. J. (2017). Remaining trouble spots with computational thinking. Commun. ACM 60, 33–39. doi: 10.1145/2998438

Gegenfurtner, A. (2011). Motivation and transfer in professional training: A meta-analysis of the moderating effects of knowledge type, instruction, and assessment conditions. Educ. Res. Rev. 6, 153–168. doi: 10.1016/j.edurev.2011.04.001

Grover, S., and Pea, R. (2013). Computational thinking in K-12:a review of the state of the field. Educ. Res. 42, 38–43. doi: 10.3102/0013189X12463051

Grover, S., and Pea, R. (2018). “Computational thinking: a competency whose time has come,” in Computer Science Education: Perspectives on Teaching and Learning in School , eds S. Sentance, S. Carsten, and E. Barendsen (London: Bloomsbury Academic), 19–38.


Hsu, T.-C., Chang, S.-C., and Hung, Y.-T. (2018). How to learn and how to teach computational thinking: suggestions based on a review of the literature. Comput. Educ. 126, 296–310. doi: 10.1016/j.compedu.2018.07.004

Liao, Y.-K. C. (2000). A Meta-analysis of Computer Programming on Cognitive Outcomes: An Updated Synthesis . Montréal, QC: EdMedia + Innovate Learning.

Liao, Y.-K. C., and Bright, G. W. (1991). Effects of computer programming on cognitive outcomes: a meta-analysis. J. Educ. Comput. Res. 7, 251–268. doi: 10.2190/E53G-HH8K-AJRR-K69M

Lobato, J. (2006). Alternative perspectives on the transfer of learning: history, issues, and challenges for future research. J. Learn. Sci. 15, 431–449. doi: 10.1207/s15327809jls1504_1

Lye, S. Y., and Koh, J. H. L. (2014). Review on teaching and learning of computational thinking through programming: what is next for K-12? Comput. Human Behav. 41, 51–61. doi: 10.1016/j.chb.2014.09.012

Melby-Lervåg, M., Redick, T. S., and Hulme, C. (2016). Working memory training does not improve performance on measures of intelligence or other measures of “far transfer”: evidence from a meta-analytic review. Perspect. Psychol. Sci. 11, 512–534. doi: 10.1177/1745691616635612


Popat, S., and Starkey, L. (2019). Learning to code or coding to learn? a systematic review. Comput. Educ. 128, 365–376. doi: 10.1016/j.compedu.2018.10.005

Sala, G., and Gobet, F. (2017). Does far transfer exist? negative evidence from chess, music, and working memory training. Curr. Dir. Psychol. Sci. 26, 515–520. doi: 10.1177/0963721417712760

Salomon, G., and Perkins, D. N. (1987). Transfer of cognitive skills from programming: when and how? J. Educ. Comput. Res. 3, 149–169. doi: 10.2190/6F4Q-7861-QWA5-8PL1

Scherer, R., Siddiq, F., and Sánchez Viveros, B. (2019). The cognitive benefits of learning computer programming: a meta-analysis of transfer effects. J. Educ. Psychol. 111, 764–792. doi: 10.1037/edu0000314

Scherer, R., Siddiq, F., and Viveros, B. S. (2020). A meta-analysis of teaching and learning computer programming: Effective instructional approaches and conditions. Comput. Human Behav. 109:106349. doi: 10.1016/j.chb.2020.106349

Shute, V. J., Sun, C., and Asbell-Clarke, J. (2017). Demystifying computational thinking. Educ. Res. Rev. 22, 142–158. doi: 10.1016/j.edurev.2017.09.003

Taatgen, N. A. (2013). The nature and transfer of cognitive skills. Psychol. Rev. 120, 439–471. doi: 10.1037/a0033138

Tang, X., Yin, Y., Lin, Q., Hadad, R., and Zhai, X. (2020). Assessing computational thinking: a systematic review of empirical studies. Comput. Educ. 148:103798. doi: 10.1016/j.compedu.2019.103798

Thorndike, E. L., and Woodworth, R. S. (1901). The influence of improvement in one mental function upon the efficiency of other functions. (I). Psychol. Rev. 8, 247–261. doi: 10.1037/h0074898

Tondeur, J., Scherer, R., Baran, E., Siddiq, F., Valtonen, T., and Sointu, E. (2019). Teacher educators as gatekeepers: preparing the next generation of teachers for technology integration in education. Br. J. Educ. Technol. 50, 1189–1209. doi: 10.1111/bjet.12748

Waite, J., Curzon, P., Marsh, W., and Sentance, S. (2020). Difficulties with design: the challenges of teaching design in K-5 programming. Comput. Educ. 150:103838. doi: 10.1016/j.compedu.2020.103838

Yadav, A., Gretter, S., Good, J., and McLean, T. (2017). “Computational thinking in teacher education,” in Emerging Research, Practice, and Policy on Computational Thinking , eds P. J. Rich and C. B. Hodges (New York, NY: Springer International Publishing), 205–220.

Keywords: computational thinking skills, transfer of learning, cognitive skills, meta-analysis, experimental studies

Citation: Scherer R, Siddiq F and Sánchez-Scherer B (2021) Some Evidence on the Cognitive Benefits of Learning to Code. Front. Psychol. 12:559424. doi: 10.3389/fpsyg.2021.559424

Received: 06 May 2020; Accepted: 17 August 2021; Published: 09 September 2021.

Copyright © 2021 Scherer, Siddiq and Sánchez-Scherer. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Ronny Scherer, ronny.scherer@cemo.uio.no


How Computer Engineering Helps You Think Creatively

Computer engineering is often seen as a purely technical, logic-driven field. But in reality, learning the basics of computer science can help you think more critically and with more novel inspiration, ultimately helping you in other areas of your life.


Apply Creative Problem Solving to Other Areas

Let’s start by explaining why the creative problem-solving skills you’ll learn in computer science can help you in everyday life:

  • Novel solutions and new products. Being familiar with creating and polishing hardware and/or software can help you come up with ingenious solutions for your everyday life. You’re used to thinking about problems as solvable challenges, so you naturally come up with ways to address them. This applies to areas beyond computer science as well; for example, one former computer engineer used his creativity to engineer a pillow that reduces pressure on your face while sleeping .
  • Lateral thinking and breaking patterns. Writing code and creating applications from scratch also incentivizes you to think laterally and break the patterns you’d otherwise fall into. Traditional lines of thinking just won’t work for some problems, so you’ll be forced to think in new, creative ways. That allows you to experiment with new approaches and keep trying until you find something that works.
  • Seeing problems from other perspectives. As a computer engineer, you’ll be forced to see problems from other perspectives, whether you want to or not. That might mean reviewing code that someone else wrote, getting feedback from a client who has no familiarity with engineering, or imagining how an application might look to a user who’s never seen it before. In any case, you’ll quickly learn how to broaden your perspective, which means you’ll see problems in an entirely new light.

How Computer Engineering Improves Your Abilities

So how exactly does computer engineering improve your creative abilities in this way?

  • Generating new ideas. You have to be creative if you’re going to generate new ideas . In some roles, you’ll be responsible for coming up with the ideas yourself—either designing your own apps for circulation, or making direct recommendations to your clients. In other scenarios, you’ll be responsible for coming up with novel ways to include a feature that might otherwise be impossible. In any case, you’ll be forced to come up with ideas constantly, which gets easier the more you practice it.
  • Reviewing code. You’ll also be responsible for reviewing code—including code that you wrote and code that other people wrote. Reviewing your own code forces you to see it from an outsider’s perspective, and reviewing the code of others gives you insight into how they think. That diverse experience lends itself to imagining scenarios from different perspectives.
  • Fixing bugs. Finding and fixing bugs is an important part of the job, and it’s one of the most creatively enlightening. To resolve the problem, you first have to understand why it’s happening. If you’ve written the code yourself, it’s easy to think the program will run flawlessly, so you’ll have to challenge yourself to start looking for the root cause of the problem. Sometimes, tinkering with the code will only result in more problems, which forces you to go back to the drawing board with a new angle of approach. It’s an ideal problem-solving exercise, and one you’ll have to undergo many times.
  • Aesthetics and approachability. Finally, you’ll need to think about the aesthetics and approachability of what you’re creating. Your code might be perfectly polished on the backend, but if users have a hard time understanding the sequence of actions to follow to get a product to do what they want, you may need to rebuild it.


Is Computer Science Worth Learning?

If you’re not already experienced in a field related to computer science, you might feel intimidated at the idea of getting involved in the subject. After all, people spend years, if not decades, studying computer science to become professionals.

The good news is, you don’t need decades of experience to see the creative problem-solving benefits of the craft. Learning the basics of a programming language, or even familiarizing yourself with the type of logic necessary to code, can be beneficial to you in your daily life. Take a few hours and flesh out your skills; you’ll be glad you did.


Improving Critical Thinking through Coding

Building and honing critical thinking skills is one of the key takeaways of learning to code.


Aug 07, 2018    By Team YoungWonks *

That learning to code has many benefits is a well-known fact. At a time when we all rely heavily on computers and smartphones on a daily basis, it is not surprising that coding, aka software programming, is in great demand today. What is often underplayed, however, is the role of coding in teaching critical thinking.

How does coding help develop critical thinking? Before we answer that question, it is important to examine the concept of critical thinking. 

What is Critical Thinking?

Critical thinking, to put it simply, is the objective analysis of facts to form a judgment. It is self-monitored, self-corrective thinking that mandates rational, skeptical, unbiased evaluation of factual evidence, which, in turn, guides one to take a particular course of action. This is also why it is described as the ability to choose a certain belief or action after careful consideration of the data available. By its very nature, critical thinking encourages a creative, problem-solving approach.

While critical thinking is crucial in professional life, it is also extremely important in day-to-day life. On one hand, one may argue that Charles Darwin used a critical thinking mindset to come up with his theory of evolution, as it involved questioning and connecting aspects of his field of study to others. At the same time, critical thinking is also the skill set one employs when doing something as simple as assessing the authenticity of a particular email. Asking oneself questions such as, “Who emailed this to me?”; “Why have I received this email?”; “What sources are being cited for the information shared in the email?”; “What is the purpose behind this email?” and “Are they who they claim to be in the email?” comes under the purview of critical thinking, as it helps one arrive at a logical solution to the given problem (in this case, determining whether the email is spam or not).

Similarly, making a decision about something as mundane as which bag to buy can also involve critical thinking. Given the popularity of e-commerce, one may have a plethora of options available, but this also means one needs to weigh several factors before making the purchase. So a person looking up blogs, websites and forums to read reviews about bags is, in fact, employing critical thinking.

Coding and Critical Thinking 

How then does coding help with critical thinking? Now critical thinking may come across as fairly common, but often its importance is understated. Coding, however, is widely considered to be one of the best tools to teach critical thinking thanks to its authentic, real-world approach. We list below the reasons why coding helps with critical thinking:

1. A similar approach to problem solving: Coding and critical thinking have these process steps in common: a) Identifying a problem or task b) Analyzing the given problem/task c) Coming up with initial solutions d) Testing e) Repeating the process for improved results. A good example of this process in coding is troubleshooting, as this is where programmers need to identify issues and try different tactics until they find a strong solution. 

2. Practice reigns supreme: Coding promotes thinking differently by approaching a problem from different angles and thus coming up with as many solutions as possible. This iteration during the coding creation process lets students practice their critical thinking skills in every class session.

3. Having an open mind: What if there’s no one right answer to a problem? In coding, this is a rather common scenario as there are multiple correct answers in the coding creation process. For instance, each website, animation, or game will be different from the other depending on the design aesthetics of the user, the functionality, and the technology available. This variability exposes students to the reality that one must be open to new ideas and stay flexible. This, in turn, paves the way for constant improvements. 

What the studies and experts reveal

Several studies have found a positive correlation between computer programming and improved cognitive skills. Students with computer programming experience are said to typically score higher on various cognitive ability tests than students who do not have programming experience.

Vishal Raina, founder and senior instructor at YoungWonks, sums it up well by emphasising how the tech field thrives on critical thinking. “Since the tech industry is driven by innovation more than anything else, critical thinking plays an integral role in a coder’s career. After all, coding literally has one using syntax and semantics to deal with problems,” he says, adding that a coder’s job also requires perseverance, which means that he/she has to keep going even when they come across an obstacle. “When you come across a dead end in coding, there’s always a way you can go back and start again. This attitude encourages thinking outside the box and is thus tremendously helpful not just in coding but even in other fields.”

Enhancing Critical Thinking Skills with Coding

One of the most valuable benefits of learning to code is the enhancement of critical thinking skills. Through coding, students learn to analyze complex problems, break them down into manageable parts, and devise logical solutions. At YoungWonks, our Coding Classes for Kids are designed to challenge young minds in a supportive environment, encouraging them to think critically and creatively. The Python Coding Classes for Kids focus on teaching students how to tackle problems systematically using Python, one of the most versatile programming languages. Meanwhile, our Raspberry Pi, Arduino and Game Development Coding Classes provide a hands-on approach to learning, where students can apply their coding skills to build and program their own devices. These experiences not only improve critical thinking but also instill a sense of accomplishment and a passion for learning.

*Contributors: Written by Vidya Prabhu


  • Research article
  • Open access
  • Published: 12 December 2017

Computational thinking development through creative programming in higher education

  • Margarida Romero ORCID: orcid.org/0000-0003-3356-8121,
  • Alexandre Lepage &
  • Benjamin Lille

International Journal of Educational Technology in Higher Education, volume 14, Article number: 42 (2017)


Abstract

Creative and problem-solving competencies are part of the so-called twenty-first century skills. The creative use of digital technologies to solve problems is also related to computational thinking as a set of cognitive and metacognitive strategies in which the learner is engaged in an active design and creation process and mobilizes computational concepts and methods. At different educational levels, computational thinking can be developed and assessed through solving ill-defined problems. This paper introduces computational thinking in the context of Higher Education creative programming activities. In this study, we engage undergraduate students in a creative programming activity using Scratch. Then, we analyze the computational thinking scores of an automatic analysis tool and the human assessment of the creative programming projects. Results suggested the need for a human assessment of creative programming while pointing to the limits of an automated analytical tool, which does not reflect the creative diversity of the Scratch projects and overrates algorithmic complexity.

Creativity as a context-related process

Creativity is a key competency within different frameworks for twenty-first century education (Dede, 2010; Voogt & Roblin, 2012) and is considered a competency enabling one to succeed in an increasingly complex world (Rogers, 1954; Wang, Schneider, & Valacich, 2015). Creativity is a context-related process in which a solution is individually or collaboratively developed and considered as original, valuable, and useful by a reference group (McGuinness & O’Hare, 2012). Creativity is also considered under the principle of parsimony, which occurs when one prefers the development of a solution using the fewest resources possible. In computer science, creative parsimony has been described as a representation or design that requires fewer resources (Hoffman & Moncet, 2008). The importance or the usefulness of the ideas or acts that are considered creative is highlighted by Franken (2007), who considers creativity as “the tendency to generate or recognize ideas, alternatives, or possibilities that may be useful in solving problems, communicating with others, and entertaining ourselves and others” (p. 348). In this sense, creativity is no longer considered a mysterious breakthrough, but a process happening in a certain context which can be fostered both by the activity orchestration and enhanced creative education activities (Birkinshaw & Mol, 2006). Teachers should develop their capacities to integrate technologies in a reflective and innovative way (Hepp, Fernández, & García, 2015; Maor, 2017), in order to develop the creative use of technologies (Brennan, Balch, & Chung, 2014; McCormack & d’Inverno, 2014), including the creative use of programming.

From code writing to creative programming

Programming is not only about writing code but also about the capacity to analyze a situation, identify its key components, model the data and processes, and create or refine a program through an agile design-thinking approach. Because of its complexity, programming is often performed as a team-based task in professional settings. Moreover, professionals engaged in programming tasks are often specialized in specific aspects of the process, such as analysis, data modelling, or quality testing. In educational settings, programming could be used as a knowledge building and modeling tool for engaging participants in creative problem-solving activities. When learners engage in a creative programming activity, they are able to develop a modelling activity in the sense of Jonassen and Strobel (2006), who define modelling as “using technology-based environments to build representational models of the phenomena that are being studied” (p. 3). The interactive nature of the computer programs created by the learners allows them to test their models, while supporting a prototype-oriented approach (Ke, 2014). Despite their pedagogical potential, programming activities must be pedagogically integrated in the classroom. Programming should be considered as a pedagogical strategy, and not only as a technical tool or as a set of coding techniques to be learnt. While some uses of technologies engage the learner in a passive or interactive situation where there is little room for knowledge creation, other uses engage the learner in a creative knowledge-building process in which the technology aims at enhancing the co-creative learning process (Romero, Laferrière & Power, 2016). As shown in the figure below, we distinguish five levels of creative engagement in computer programming education based on the learner’s creative engagement in the learning-to-program activity: (1) passive exposure to teacher-centered explanations, videos or tutorials on programming; (2) procedural step-by-step programming activities in which there is no creativity potential for the learner; (3) creating original content through individual programming; (4) creating original content through team-based programming; and finally, (5) participatory co-creation of knowledge through programming (Fig. 1).

Fig. 1. Five levels of creative engagement in educational programming activities

Creative programming engages the learner in the process of designing and developing an original work through coding. In this approach, learners are encouraged to use the programming tool as a knowledge co-constructing tool. For example, they can (co-)create the history of their city at a given historical period or transpose a traditional story into a visual programming tool such as Scratch ( http://scratch.mit.edu/ ). In such activities, learners must use skills and knowledge in mathematics (measurement, geometry, and the Cartesian plane to locate and move their characters, objects, and scenery), science and technology (the world of hardware, transformations, etc.), language arts (narrative patterns, etc.), and social sciences (organization in time and space, societies and territories).

Computational thinking in the context of creative programming

We now expand on the cognitive and metacognitive strategies potentially used by learners when engaged in two types of programming activities: procedural and creative programming. In puzzle-based coding activities, both the learning path and the outcomes have been predefined to ensure that every learner is able to successfully complete the same activity. These step-by-step learn-to-code activities do not solicit the level of thinking and the cognitive and metacognitive strategies required by ill-defined co-creative programming activities. Ill-defined situations embed a certain level of complexity and uncertainty. In ill-defined co-creative programming activities, the learner should understand the ill-defined situation, empathize (Bjögvinsson, Ehn, & Hillgren, 2012), model, structure, develop, and refine a creative program that responds in an original, useful, and valuable way to the ill-defined task. These sets of cognitive and metacognitive strategies can be considered under the umbrella of the computational thinking (CT) concept, initially proposed by Wing (2006) as a fundamental skill that draws on computer science. She defines it as “an approach to solving problems, designing systems and understanding human behavior that draws on concepts fundamental to computing” (Wing, 2008, p. 3717). Later, she refined the CT concept as “the thought processes involved in formulating problems and their solutions so that the solutions are represented in a form that can be effectively carried out by an information-processing agent” (Cuny, Snyder, & Wing, 2010). Open or semi-open tasks, in which neither the process nor the outcome is decided in advance, can address more dimensions of CT than closed tasks such as step-by-step tutorials (Zhong, Wang, Chen, & Li, 2016).

Ongoing discussion about computational thinking

The boundaries of computational thinking vary among authors, which poses an important barrier when it comes to operationalizing CT in concrete activities (Chen et al., 2017). Although some associate it strictly with the understanding of algorithms, others insist on integrating problem solving, cooperative work, and attitudes into the concept of CT. The identification of the core components of computational thinking is also discussed by Chen et al. (2017). Selby and Woollard (2013) addressed that problem and reviewed the literature to propose a definition based on elements that are widely accepted: abstraction, decomposition, evaluation, generalization, and algorithmic thinking. On the one hand, these authors’ definition deliberately rejected problem solving, logical thinking, systems design, automation, computer science content, and modelling, because these elements were not widely accepted by the community. On the other hand, other authors such as de Araujo, Andrade, and Guerrero (2016, p. 8) stress, through their literature review on the CT concept and its components, that 96% of the selected papers considered problem solving as a CT component. We therefore claim that these components are relevant to the core of computational thinking and should be recognized as part of it.

Roots of the computational thinking concept

Following Wing (2006, 2008), Duschl, Schweingruber, Shouse, and others (2007) described CT as a general analytic approach to problem solving, designing systems, and understanding human behaviors. Based on socio-constructivist (Nizet & Laferrière, 2005), constructionist (Kafai & Resnick, 1996), and design-thinking approaches (Bjögvinsson et al., 2012), we consider learning as a collaborative design and knowledge-creation process that occurs in a non-linear way. Accordingly, we only partially agree with Wing (2008), who considers the process of abstraction as the core of computational thinking. Abstraction is part of computational thinking, but Papert (1980, 1992) pointed out that programming solicits both concrete and abstract thinking skills, and the line between these skills is not easy to trace. Papert (1980) suggests that exposure to computer science concepts may give concrete meaning to what may at first glance be considered abstract. He gives the example of using loops in programming, which may lose their abstract character after repeated use. If we expand that example by applying a widely accepted definition of abstraction from the APA dictionary (i.e., “such a concept, especially a wholly intangible one, such as ‘goodness’ or ‘beauty’”, VandenBos, 2006), we can envision a loop as something tangible, in that it can be experienced as such in the programming environment. The core of CT might be the capacity to transpose abstract meaning into concrete meaning. This makes CT a way to reify an abstract concept into something concrete such as a computer program or algorithm. In this sense, programming is a process by which, after a phase of analysis, entity identification, and structuring, the abstract model derived from the analysis is reified into a set of concrete instructions. CT is a set of cognitive and metacognitive strategies paired with the processes and methods of computer science (analysis, abstraction, modelling). It may be related to computer science in the same way that algorithmic thinking is related to mathematics.
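
As a brief illustration of Papert’s point, the following minimal Python sketch (ours, not Papert’s) shows how the abstract idea of repetition is reified into a sequence of concrete, observable actions; the routine and its steps are invented for the example.

```python
# A minimal sketch of Papert's observation: the loop construct, abstract at first
# glance, acquires concrete meaning once its repeated effect can be observed.
steps = ["step forward", "clap", "step back", "turn around"]

# The abstract idea: "do the whole routine three times".
for repetition in range(3):
    for move in steps:
        # Each iteration produces a concrete, observable action.
        print(f"Round {repetition + 1}: {move}")
```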

“Algorithmic thinking is a method of thinking and guiding thought processes that uses step-by-step procedures, requires inputs and produces outputs, requires decisions about the quality and appropriateness of information coming in and information going out, and monitors the thought processes as a means of controlling and directing the thinking process. In essence, algorithmic thinking is simultaneously a method of thinking and a means for thinking about one’s thinking.” (Mingus & Grassl, 1998, p. 34)

Algorithmic thinking faces the same problem as computational thinking: its limits are still under discussion. For some authors it is limited to mathematics, but definitions such as that of Mingus and Grassl (1998) take the concept beyond mathematics (Modeste, 2012). Viewing algorithmic thinking as the form of thinking associated with computer science at large, rather than as a part of mathematics, allows a more adequate understanding of its nature (Modeste, 2012). We can consider algorithmic thinking an important aspect of computational thinking. However, when considering computational thinking as a creative, prototype-based approach, we should consider not only the design-thinking components (exploration, empathy, definition, ideation, prototyping, and creation) (Brown, 2009) but also the hardware dimension of computational thinking solutions (e.g., the use of robotic components to execute a program). Within a design-thinking perspective, different solutions are created and tested in successive attempts to advance towards a solution. From this perspective we conceptualize CT as a set of cognitive and metacognitive strategies related to problem finding, problem framing, code literacy, and creative programming (Brennan & Resnick, 2013). It is a way to develop new thinking strategies to analyze, identify, and organize relatively complex and ill-defined tasks (Rourke & Sweller, 2009) and to engage in creative problem-solving activity (Brennan et al., 2014). We now elaborate on how computational thinking can be assessed in an ill-defined creative programming activity.
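
To make the Mingus and Grassl characterisation tangible, here is a minimal Python sketch (ours, not drawn from the sources cited): a step-by-step procedure that takes inputs, judges their quality, produces an output, and reports on its own process. The function name and the data are illustrative only.

```python
# A hedged illustration of algorithmic thinking as a step-by-step procedure with
# inputs, outputs, and decisions about the quality of the information handled.
def average_valid_scores(raw_scores):
    # Step 1: judge the quality of incoming information (keep numeric scores in range).
    valid = [s for s in raw_scores if isinstance(s, (int, float)) and 0 <= s <= 100]
    # Step 2: monitor the process and decide whether an answer can be produced at all.
    if not valid:
        return None, "no usable input"
    # Step 3: produce and qualify the output.
    mean = sum(valid) / len(valid)
    note = f"{len(raw_scores) - len(valid)} value(s) discarded"
    return mean, note

print(average_valid_scores([87, "absent", 92, 130, 75]))  # -> (84.66..., '2 value(s) discarded')
```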

Assessment of computational thinking

There is a diversity of approaches for assessing CT. In this section we analyze three of them: the Computer Science Teachers Association’s (CSTA) curriculum in the USA (Reed & Nelson, 2016; Seehorn et al., 2011), the Barefoot computational thinking model in the UK (Curzon, Dorling, Ng, Selby, & Woollard, 2014), and the analytical tool Dr. Scratch (Moreno-León & Robles, 2015).

CSTA’s curriculum includes expectations in terms of levels to be reached at every school grade. It comprises five strands: (1) Collaboration, (2) Computational Thinking, (3) Computing Practice and Programming, (4) Computers and Communication Devices, and (5) Community, Global, and Ethical Impacts. Thus, the CSTA model considers computational thinking as part of the wider field of computer science. The progression between levels appears to be based on the transition between low-level programming and object-oriented programming (e.g., computer programs as step-by-step sequences at level 1, and parallelism at level 3). The CSTA K-12 standards suggest that programming activities should “be designed with a focus on active learning, creativity, and exploration and will often be embedded within other curricular areas such as social science, language arts, mathematics, and science”. However, the CSTA standards do not give creativity a particular status in their models; moreover, the evaluation of creativity in programming activities seems to be ultimately left to the evaluator. From our perspective, and because of the creative nature of CT, creativity needs to be considered explicitly, and educators should be provided with guidelines for assessing it.

The Barefoot CT framework is defined through six components: logic, algorithms, decomposition, patterns, abstraction, and evaluation. We agree on their relevance to computational thinking, and the Barefoot framework provides concrete examples of how each component may be observed in children of different ages. However, relying only on this model may result in assessing abilities rather than a competency, since these concepts represent a set of abilities more than an entire competency (Hoffmann, 1999).

Dr. Scratch is a code analyzer that outputs a score for elements such as abstraction, logic, and flow control (Moreno-León & Robles, 2015). Scores are computed automatically from any Scratch program. The tool also provides instant feedback and acts as a tutorial on how to improve one’s program, which makes it especially suitable for self-assessment. Hoover et al. (2016) believe that automated assessment of CT can potentially encourage CT development. However, Dr. Scratch only considers the complexity of programs, not their meaning. The tool is suitable for evaluating a user’s level of technical mastery of Scratch, but it cannot be used to evaluate every component of the CT competency as we have defined it (the program gives no evidence of thought processes and does not take into account the task demanded). Finally, it would be hard for an automated process to measure or teach creativity, since creative behaviour is an act of intelligence (Chomsky, 2008) that should be analyzed by considering the originality, value, and usefulness of a solution for a given problem-situation. In other terms, there is a need to evaluate the appropriateness of the creative solution according to the context and to avoid over-complex solutions that use unnecessary or inappropriate code for a given situation. Automatic code analyzers cannot rate the creativity, parsimony, and appropriateness of a program, considering that ill-defined problem-situations can lead to very different solutions. We therefore now elaborate on how computational thinking can be assessed while considering that CT is intertwined with other twenty-first century competencies such as creativity and problem solving.
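
As an illustration of the general idea behind automated code analysis, the following deliberately simplified Python sketch counts the presence of programming constructs in a project. It is not Dr. Scratch’s actual implementation or scoring rubric (the criteria names and keywords below are invented); it only shows why such a tool can register that constructs are present while remaining blind to the meaning, originality, or parsimony of a project.

```python
# A simplified, hypothetical construct-counting analyzer: it detects whether
# certain block families appear in a project, regardless of what they are used for.
CRITERIA = {
    "loops": ["repeat", "forever"],
    "conditionals": ["if", "if then else"],
    "events": ["when green flag clicked", "when I receive"],
    "parallelism": ["broadcast"],
}

def complexity_score(blocks_used):
    """Count how many criteria are evidenced by the blocks found in a project."""
    score = 0
    for criterion, keywords in CRITERIA.items():
        if any(keyword in blocks_used for keyword in keywords):
            score += 1  # construct detected; its purpose and relevance are not assessed
    return score

project = ["when green flag clicked", "say", "wait", "repeat", "pen down"]
print(complexity_score(project))  # -> 2: events and loops detected, meaning ignored
```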

Computational thinking components within the #5c21

We consider CT as a coherent set of cognitive and metacognitive strategies engaged in the identification, representation, programming, and evaluation of (complex) systems. Once a problem or a user need has been identified and analyzed, programming becomes a creative problem-solving activity. The programming activity aims to design, write, test, debug, and maintain a set of information and instructions expressed through code, using a particular programming language, to produce a concrete computer program that meets the problem or the users’ needs. Programming is not a linear, predefined activity, but rather a prototype-oriented approach in which intermediate solutions are considered before releasing a solution judged good enough to solve the problem-situation. Within this approach to programming, which is not only focused on the techniques for coding a program, we should consider the different components related to the creative problem-solving process. In this sense, we identify six components of the CT competency in the #5c21 model: two related to code and technology literacies and four related to the four phases of Collaborative Problem Solving (CPS) of PISA 2015. Component 1 (COMP1) is the ability to identify the components of a situation and their structure (analysis/representation), which certain authors refer to as problem identification. Component 2 (COMP2) is the ability to organize and model the situation efficiently (organize/model). Component 3 (COMP3) is code literacy. Component 4 (COMP4) is (technological) systems literacy (software/hardware). Component 5 (COMP5) is the capacity to create a computer program (programming). Finally, component 6 (COMP6) is the ability to engage in the evaluation and iterative improvement of a computer program (Fig. 2).

Fig. 2 Six components of the CT competency within the #5c21 framework

When we relate the computational thinking components to the four phases of Collaborative Problem Solving (CPS) of PISA 2015, the analysis/abstraction component (COMP1) can be linked to the (CPS-A) exploring and understanding phase. The organize/model component (COMP2) is related to (CPS-B) representing and formulating. The capacity to plan and create a computer program (COMP5) is linked to (CPS-C) planning and executing, while the evaluation component (COMP6) is linked to (CPS-D) monitoring and reflecting (Fig. 3).

Fig. 3 Four components of the CT competency related to the CPS of PISA 2015

Code literacy (COMP3) and (technological) systems literacy (COMP4) cover the programming and systems concepts and processes that help to operationalize the other components. They are also important to CT because knowing about computer programming concepts and processes can help develop CT strategies (Brennan & Resnick, 2013), while, at the same time, CT strategies can be enriched by the code-independent cognitive and metacognitive strategies represented by the CPS-related components (COMP1, 2, 5 and 6). As in the chicken-and-egg paradox, knowing about the concepts and processes (COMP3 and 4) can enrich the problem-solving process (COMP1, 2, 5 and 6) and vice versa. The ability to be creative when analyzing, organizing/modelling, programming, and evaluating a computer program is a meta-capacity showing that the participant has had to consider different alternatives and imagine a novel, original, and valuable process, concept, or solution to the situation.

Advancing creative programming assessment through the #5c21 model

After reviewing three models of CT assessment (CSTA, Barefoot, Dr. Scratch), we describe in this section our proposal for evaluating CT in the context of creative programming activities. We named our creative programming assessment the #5c21 model because of the importance of five key competencies in twenty-first century education: CT, creativity, collaboration, problem solving, and critical thinking. First, we discuss the opportunity of learning the object-oriented programming (OOP) paradigm from the early steps of CT learning activities. Second, we examine the opportunity to develop CT in an interdisciplinary way, without creating a new CT curriculum within one specific discipline such as mathematics. Third, we discuss the opportunity of developing CT at different levels of education, from primary education to lifelong learning activities.

In certain computer science curricula, low-level programming is introduced before OOP, which is considered a higher level of programming. Nevertheless, following Kölling (1999), if the OOP paradigm is to be learnt, it should not be avoided in the early stages of the learning activities, in order to prevent difficulties due to paradigm changes. For that reason, our model does not restrict programming to step-by-step activities at early stages of development and embraces the OOP paradigm from the start. Moreover, we should consider the potential of non-programmers to understand OOP concepts without knowing how to operationalize them in a programming language. For instance, one may partially understand the concept of inheritance through the concept of family, without knowing how inheritance works in computer science. Our model of the CT competency recognizes the possibility for certain components to develop at different rhythms, or for an individual with no prior programming experience to master some components (e.g., abstraction). For that reason, we did not integrate age-associated expectations; these should be built upon the context and should be task-specific. While CSTA considers concept mapping as a level 1 skill (K-3), our model considers that this skill may be evaluated with different degrees of complexity according to the context and the pupil’s prior experience. Our view is that computational thinking encompasses many particular skills related to abstraction.
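
As a hedged illustration of the family analogy (ours, not the authors’), the following Python sketch shows how the everyday idea that a child shares traits with a parent maps onto inheritance between classes; the class and method names are invented for the example.

```python
# An illustrative sketch of inheritance through the family analogy: a child
# inherits the parent's traits and adds traits of its own.
class Parent:
    def family_name(self):
        return "Lovelace"          # a trait defined once, at the parent level

class Child(Parent):               # the child inherits the parent's traits...
    def first_name(self):
        return "Ada"               # ...and adds a trait of its own

kid = Child()
print(kid.family_name(), kid.first_name())  # -> "Lovelace Ada": inherited + own behaviour
```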

Our model pays attention to the integration of CT into existing curricula. We recognize the identification of CT-related skills in the CSTA model, and we agree with its relevance for computer science courses. However, our CT model is intended for use in any subject; it therefore carefully tries not to give undue relative importance to subjects such as mathematics and science. In that sense, we are working to define computational thinking as a transferable skill that does not belong exclusively to the field of computer science. We also designed the model to be reusable across different tasks and to measure abilities as well as the interactions between them (e.g., “algorithm creation based on the data modelling”).

Our model of computational thinking is intended for both elementary and high school pupils. In that, it differs from CSTA, which expects nothing in terms of computational thinking from children under grade 4. Though CSTA expects K-3 pupils to “use technology resources […] to solve age-appropriate problems”, some statements suggest that pupils should remain passive in problem solving (e.g., “Describe how a simulation can be used to solve a problem” instead of creating a simulation, “gather information” instead of producing information, and “recognize that software is created to control computer operations” instead of actually controlling something such as a robot).

Methodology for assessing CT in creative programming activities

In order to assess CT based on the theoretical framework and its operationalization as components described in the previous section, we developed an assessment protocol and a tool (#5c21) to evaluate CT in creative programming activities. Before the assessment, the teacher defines the specific observables to be evaluated through the tool. Once the observables are identified, four levels of achievement are described in the tool for each observable. The #5c21 tool allows a pre-test, post-test, or just-in-time teacher-based assessment or learner self-assessment, which aims at recording the level of achievement for each observable in the activity. At the end of a given period of time (e.g., a session or an academic year), the teacher can generate reports showing the evolution of learners’ CT assessments.

A distinctive characteristic of the #5c21 approach to assessing CT is its consideration of ill-defined problem-situations. The creative potential of these activities engages the participants in the analysis, modelling, and creation of artifacts, which may provide the teacher with evidence of an original, valuable, relevant, and parsimonious solution to a given problem-situation.

Participants

A total of 120 undergraduate students at Université Laval in Canada (N = 120) were engaged in a story2code creative challenge. All of them were undergraduate students in a bachelor’s degree in elementary school education. They were in the third year of a four-year program and had taken no previous educational technology courses. In the second week of the semester, they were asked to perform a programming task using Scratch, a block-based programming language intended for children from 7 years of age. Participants were shown only two features of the language: the creation of a new sprite (object) and the possibility to drag and drop blocks into each sprite’s program. They were also told that the green flag starts the program.

The ill-defined problem proposed to the students is rooted in the narrative frame of a children’s book introducing basic concepts of programming and robotics, Vibot the robot (Romero & Loufane, 2016). The story introduces a robot which has to be programmed in order to play. The Scratch Cat is the mascot of the visual programming tool Scratch and the default sprite appearing in each new project; Vibot is a fictional robot character which waits for instructions to act in its environment. Based on these two characters, story2code tasks are short, text-based stories which engage learners in analyzing, modelling, and creating a Scratch project representing the story. In our study, participants were given a story2code and were asked to represent it in Scratch. The situation invited participants to create a Scratch project featuring a dialog between two characters: Scratch and Vibot. The students were given a text-based script for the dialog, comprising 9 quotations in which Scratch and Vibot the robot introduce themselves. After the dialog between the two characters, Scratch asks Vibot to draw a blue line. The scenario of this story2code can be solved in a variety of ways in the Scratch visual programming environment. Participants were asked to remix a Scratch project containing the two characters’ sprites; remixing is a feature of Scratch that allows users to duplicate existing projects and edit them. Participants were required to share their projects in order to allow a double assessment: the code analyzer Dr. Scratch and the #5c21 assessment by an external evaluator.

Computational thinking was assessed from the Scratch project developed by each undergraduate student, using two different tools: Dr. Scratch and the #5c21 CT competency model. First, all the Scratch projects were run through the Dr. Scratch analytical tool; then they were evaluated by an evaluator following the #5c21 CT competency model. Dr. Scratch is an automated tool and was selected in order to highlight the need for a competency-based approach in the assessment of CT.

All participants had to submit a Scratch program by providing its URL, and were required to share it in order to make it accessible to an evaluator. In this section we present the results obtained from the Dr. Scratch analytical tool and those obtained using the #5c21 CT competency model.

CT assessment using Dr. Scratch

Dr. Scratch results are computed from seven criteria: abstraction, parallelism, logic, synchronization, flow control, user interactivity, and data representation. Each criterion may be given a maximum of three points, for a possible total score of 21. Thirteen projects could not be run through Dr. Scratch due to technical problems (e.g., the URL was not provided), so Dr. Scratch results are available for 107 projects (n = 107, M = 0.27, SD = 0.06) (Fig. 4).

Fig. 4 Automatic CT analysis by Dr. Scratch (0.251 ± 0.0184)

Ninety-one participants out of 107 obtained a total score of 6 in Dr. Scratch. The highest score was 10, reached by two participants. Instead of organizing the dialogs using timers (the wait instruction), those two participants used broadcasting. Broadcasting is a feature of Scratch that allows the user to trigger events and to program the event handler (or listener) for each of the sprites (objects). Event handlers, or listeners, are callback subroutines which react to certain inputs. Using the broadcasting feature led Dr. Scratch to attribute additional points for parallelism and synchronization. One of these two highest-scoring projects received points in both parallelism and synchronization for the use of a single event handler or listener, “when backdrop switches to …”. Typical projects (those that scored 6) all worked in about the same way: there was only one event handler for each sprite (when green flag clicked), and the dialogs were synchronized using timers (the wait instruction).
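
To make the contrast concrete, the following sketch expresses the two strategies in Python, since Scratch itself is block-based. The pattern names mirror Scratch’s wait, broadcast, and “when I receive” blocks, but the code, timings, and dialogue lines are illustrative rather than taken from participants’ projects.

```python
# A text-based analogy of the two synchronization strategies observed in the projects.
import time

# Strategy 1: timer-based synchronization -- each sprite simply waits long enough
# for the other sprite's line to finish (the 'wait n seconds' pattern).
def dialog_with_timers():
    print("Scratch: Hello, I am Scratch the cat.")
    time.sleep(2)                      # Vibot waits for the cat's line to end
    print("Vibot: Hello, I am Vibot the robot.")

dialog_with_timers()

# Strategy 2: event broadcasting -- a sprite announces that its line is over and
# the other sprite's handler (listener) reacts, as with Scratch's 'broadcast' blocks.
handlers = {}

def when_i_receive(message, callback):
    handlers.setdefault(message, []).append(callback)

def broadcast(message):
    for callback in handlers.get(message, []):
        callback()

when_i_receive("cat finished", lambda: print("Vibot: Hello, I am Vibot the robot."))
print("Scratch: Hello, I am Scratch the cat.")
broadcast("cat finished")
```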

Assessment using #5c21 CT model

Results from the #5c21 computational thinking model are determined through observables derived from the model, its components, and their subcomponents. The subcomponents were selected because of their relevance to the situation and the possibility of observing them in the programs handed in. Each subcomponent is converted into a task-specific observable. For instance, “Identification of entities” is a subcomponent of “Analysis/abstraction” (COMP1); it was converted into an observable item specific to the task demanded: “Dialogs are well integrated and the blue line is traced”. When applicable, the observables were rated twice: once for the level of execution and a second time for the level of creativity. Four subcomponents were converted into observables: “Identification of entities” (with creativity assessment), “Identification of events” (no creativity assessment), “Identifying the function (or code block) for a certain objective” (with creativity assessment), and “Analysis of errors leading to improvement of the computer program” (no creativity assessment). All of them were rated on a 4-point scale; when applicable, creativity was also assessed on a 4-point scale. This gives a possible total score of 24 points (4 points for each of the 4 observables, plus 8 points for creativity). Thirty-three participants have no score using the #5c21 CT competency model due to technical problems (e.g., the Scratch project was not shared), so results are available for 87 participants. The two highest-scoring projects under the #5c21 CT competency model are not the same as the two identified by Dr. Scratch; their Dr. Scratch scores are 8 and 6, and their CT competency scores are 22 and 23 respectively (n = 87, M = 0.64, SD = 0.3) (Fig. 5).

Fig. 5 #5c21 CT expert analysis (0.469 ± 0.0551)

Only 11 projects were given a score higher than 1 (on a 4-point scale) in creativity for “Identification of entities”. The two aforementioned projects scored 3 points out of 4 and 2 points out of 4 in creativity for that subcomponent. In the first project, creativity was assessed from the untaught use of sounds (the “play sound” instruction) and the relevance of the sounds chosen (the cat says “meow” and the robot makes laser sounds). Using irrelevant sounds would not have been considered evidence of creativity because of the principle of parsimony, which values using the fewest resources possible when solving a given situation through a creative solution. Creativity in the second project was assessed through the use of a loop to make the cat walk (using a change of costumes and delays). This was considered a higher level of modelling by the expert evaluators, since the walking is more realistic than a simple translation. Because participants had no prior experience with Scratch, using untaught blocks, such as broadcasting, was considered by the evaluator as evidence of creativity.

Dr. Scratch and #5c21 CT assessment differences

The analysis of CT based on the automatic Dr. Scratch analysis and on the human-expert #5c21 CT model leads to important differences. While the automatic analysis of the Scratch projects leads to similar scores in terms of algorithmic complexity (with a low standard deviation, SD = 0.0184), the expert analysis shows a high diversity in creative programming performance (SD = 0.0551) (Fig. 6).

Fig. 6 Difference of means (d = −0.218, SE = 0.0294, p < 0.001)

Discussion on creative diversity assessment

Even considering the simplicity of the story2code task, each of the 120 Scratch projects submitted by the students was different: despite the straightforward brief (to create a Scratch project featuring a dialog between two characters), no two projects were identical. The differences between projects might come from the choice of blocks, the order in which they are placed, the duration of timers, the type of event handlers and the way they are operationalized, or the use of optional features such as backdrop switches. However, the results of an automated analytical tool like Dr. Scratch do not reflect this wide creative diversity. The model we propose is based on both computer science and problem solving as defined by PISA 2015; it is an attempt to define criteria suitable for evaluating ill-defined tasks involving CT. Results from Dr. Scratch and from the #5c21 CT competency model are not to be compared directly, as they do not evaluate the same components. Dr. Scratch evaluates the algorithmic complexity of a Scratch project based on a single, generic model of CT assessment and does not consider the program in relation to the problem-situation; it is therefore not intended for use when the aim is to evaluate CT as a creative problem-solving competency. Nevertheless, Dr. Scratch is a useful tool for allowing learners to reflect on the algorithmic implementation and can provide useful tips for improving the quality of the program: it can identify missing instance names, repetitions, and some coding practices that could be improved. Moreover, the automatic analysis allows generalized use without requiring human evaluation. The #5c21 CT model, by contrast, relies on evaluators with a certain knowledge of CT to carry out an assessment which focuses not only on the algorithmic properties of the program but also on the creative process by which the learner has developed a valuable, original, and parsimonious solution to a specific situation.

Contribution of the #5c21 model for creative programming

The #5c21 CT model is not language-specific and can be used to evaluate different types of creative programming activities, developed in different programming languages or even as unplugged activities. Compared with other CT conceptualizations, we explicitly integrate a hardware component (COMP4), which can be part of the creative solution in a creative programming task. The results of the evaluation of the story2code task suggest the pertinence of combining automatic code-analysis tools with human expert assessment of the creative aspects of programming. By conducting this study, we intended to advance the conceptualization and assessment of the CT competency; from a creative programming perspective, we should critically consider whether human creativity can be assessed using automated tools such as Dr. Scratch.

Creative programming activities through the lens of the zone of proximal development

The wide diversity of projects collected leads us to consider a possible application of the Vygotskian concept of the Zone of Proximal Development (Vygotsky, 1978). To place an individual in a situation of competency, the situation proposed to the learner must offer an appropriate degree of newness, a certain ambiguity or ill-definition, and the creative potential to engage the learner in a creative process where there is not a single process or solution to follow, but a range of possible processes and solutions. In this sense, creative programming activities should engage learners in problem-situations where neither the process nor the solution is known in advance, and where both can be very diverse, in order to allow the learners to develop their own creative process and solution. At the same time, we should recognize the need to design programming activities with an adequate potential for creative activity, so that the ill-defined problem-situation can be analyzed while allowing learners to create and implement a solution. When activities are too externally guided or structured, there is no room for creativity, while too much ambiguity in the ill-defined situation can lead to uncertainty and confusion. Between these two extremes lies what we call the Zone of Proximal Creativity (ZPC): an appropriate level of creative potential to be developed by the learner when engaged in an activity that allows an appropriate degree of creativity during the development of a creative solution which is original, valuable, useful, and parsimonious for a given situation and context. From the observations of this study, we highlight the value not only of developing the CT competency through programming activities that are creative enough to fall within the ZPC, but also of encouraging ambiguity tolerance among learners, so that they embrace ill-defined situations as opportunities to express their creativity.

Limits of the study and future research directions

While the results and insights of this study contribute to a better understanding of creative, context-related implementations of programming in education, we also want to point out that this study focused on a story2code task based on a dialogue between two characters followed by an instruction to draw a line. This story2code task offers different degrees of creative potential while remaining simple to achieve. The simplicity of the task may have influenced the students’ creative expression, and further studies analysing more complex tasks are needed to identify the influence of task complexity on creative programming. The present study is also limited to a very specific task involving undergraduate students with no prior programming experience. Future research should therefore analyze CT skills in more complex and open activities in order to deepen our understanding of how CT skills are deployed in ill-defined creative programming tasks. We advocate research with a wider range of learners in order to better understand how CT components may emerge or develop across the lifespan and through different creative programming activities, including not only Scratch but also other technological supports, from mobile-based programming to educational robotics devices, aiming to engage the learner in creative programming.

Birkinshaw, J. M., & Mol, M. J. (2006). How management innovation happens. MIT Sloan Management Review , 47 (4), 81–88.

Bjögvinsson, E., Ehn, P., & Hillgren, P.-A. (2012). Design things and design thinking: Contemporary participatory design challenges. Design Issues , 28 (3), 101–116.

Brennan, K., Balch, C., & Chung, M. (2014). Creative computing. Cambridge: Harvard University Press. Retrieved from http://scratched.gse.harvard.edu/guide/

Brennan, K., & Resnick, M. (2013). Imagining, creating, playing, sharing, reflecting: How online community supports young people as designers of interactive media. In Emerging technologies for the classroom (p. 253–268). New York: Springer.

Brown, T. (2009). Change by design. How design thinking transforms organizations and inspires innovation . New York, NY, USA: Harper Collins.

Chen, G., Shen, J., Barth-Cohen, L., Jiang, S., Huang, X., & Eltoukhy, M. (2017). Assessing elementary students’ computational thinking in everyday reasoning and robotics programming. Computers & Education , 109 , 162–175.

Chomsky, N. (2008). Language and mind (3rd ed.). Cambridge: Cambridge University Press.

Cuny, J., Snyder, L., & Wing, J. M. (2010). Demystifying computational thinking for non-computer scientists. Unpublished Manuscript in Progress, Referenced in https://www.cs.cmu.edu/link/research-notebook-computational-thinking-what-and-why .

Curzon, P., Dorling, M., Ng, T., Selby, C., & Woollard, J. (2014). Developing computational thinking in the classroom: A framework.

de Araujo, A. L. S. O., Andrade, W. L., & Guerrero, D. D. S. (2016). A systematic mapping study on assessing computational thinking abilities. In Frontiers in education conference (FIE), 2016 IEEE , (pp. 1–9). IEEE.

Dede, C. (2010). Comparing frameworks for 21st century skills. In J. A. Bellanca, & R. S. Brandt (Eds.), 21st century skills: Rethinking how students learn , (vol. 20, pp. 51–76). Bloomington, IN: Solution Tree Press.

Duschl, R. A., Schweingruber, H. A., & Shouse, A. W. (2007). Taking science to school: Learning and teaching science in grades K-8. National Academies report . Washington, DC: National Academies Press. 

Franken, R. E. (2007).  Human motivation  (6th ed). Belmont, CA: Thomson/Wadsworth.

Hepp, P., Fernández, M. À. P., & García, J. H. (2015). Teacher training: Technology helping to develop an innovative and reflective professional profile. International Journal of Educational Technology in Higher Education , 12 (2), 30–43.

Hoffman, R. N., & Moncet, J.-L. (2008). All data are useful, but not all data are used! What’s going on here? In Geoscience and Remote Sensing Symposium, IGARSS 2008 (pp. II-1–II-4). Boston, MA: IEEE. https://doi.org/10.1109/IGARSS.2008.4778912 .

Hoffmann, T. (1999). The meanings of competency. Journal of European Industrial Training , 23 (6), 275–286.

Hoover, A. K., Barnes, J., Fatehi, B., Moreno-León, J., Puttick, G., Tucker-Raymond, E., & Harteveld, C. (2016). Assessing computational thinking in students’ game designs. In Proceedings of the 2016 annual symposium on computer-human interaction in play companion extended abstracts , (pp. 173–179). ACM.

Jonassen, D., & Strobel, J. (2006). Modeling for meaningful learning. In Engaged learning with emerging technologies (p. 1–27). Dordrecht: Springer.  

Kafai, Y. B., & Resnick, M. (1996). Constructionism in practice: Designing, thinking, and learning in a digital world . (Vol. 1). New York, NY: Routledge. 

Ke, F. (2014). An implementation of design-based learning through creating educational computer games: A case study on mathematics learning during design and computing. Computers & Education , 73 , 26–39.

Kölling, M. (1999). The problem of teaching object-oriented programming. Journal of Object Oriented Programming , 11 (8), 8–15.

Maor, D. (2017). Using TPACK to develop digital pedagogues: a higher education experience.  Journal of Computers in Education , 4(1), 71–86.

McCormack, J., & d’Inverno, M. (2014). On the future of computers and creativity. In AISB 2014 Symposium on Computational Creativity, London .

McGuinness, C., & O’Hare, L. (2012). Introduction to the special issue: New perspectives on developing and assessing thinking: Selected papers from the 15th international conference on thinking (ICOT2011). Thinking Skills and Creativity , 7 (2), 75–77 https://doi.org/10.1016/j.tsc.2012.04.004 .

Mingus, T. T. Y., & Grassl, R. M. (1998). Algorithmic and recursive thinking - current beliefs and their implications for the future. In L. Morrow, & M. J. Kenney (Eds.), The teaching and learning of algorithm in school mathematics , (pp. 32–43).

Modeste, S. (2012). La pensée algorithmique : Apports d’un point de vue extérieur aux mathématiques. Presented at the Colloque espace mathématique francophone.

Moreno-León, J., & Robles, G. (2015). Dr. scratch: A web tool to automatically evaluate scratch projects. In Proceedings of the workshop in primary and secondary computing education , (pp. 132–133). ACM.

Nizet, I., & Laferrière, T. (2005). Description des modes spontanés de co-construction de connaissances: contributions à un forum électronique axé sur la pratique réflexive. Recherche et Formation , 48 , 151–166.

Papert, S. (1980). Mindstorms: Children, computers, and powerful ideas. New York: Basic Books, Inc.

Papert, S. (1992). The Children’s machine . New York: BasicBooks.

Reed, D., & Nelson, M. R. (2016). Current initiatives and future directions of the computer science teachers association (CSTA). In Proceedings of the 47th ACM technical symposium on computing science education , (pp. 706–706). ACM.

Rogers, C. R. (1954). Toward a theory of creativity. ETC: A Review of General Semantics , 11 , 249–260.

Romero, M., Laferriere, T., & Power, T. M. (2016). The move is on! From the passive multimedia learner to the engaged co-creator. eLearn , 2016 (3), 1.

Romero, M., & Loufane (2016). Vibot the robot . Québec, QC: Publications du Québec.

Rourke, A., & Sweller, J. (2009). The worked-example effect using ill-defined problems: Learning to recognise designers’ styles. Learning and Instruction , 19 (2), 185–199.

Seehorn, D., Carey, S., Fuschetto, B., Lee, I., Moix, D., O’Grady-Cunniff, D., … Verno, A. (2011). CSTA K–12 computer science standards: Revised 2011.

Selby, C. C., & Woollard, J. (2013). Computational thinking: The developing definition. In Presented at the 18th annual conference on innovation and Technology in Computer Science Education, Canterbury .

VandenBos, G. R. (Ed.). (2006). APA dictionary of psychology. Washington, DC: American Psychological Association.

Voogt, J., & Roblin, N. P. (2012). A comparative analysis of international frameworks for 21st century competences: Implications for national curriculum policies. Journal of Curriculum Studies , 44 (3), 299–321.

Vygotsky, L. S. (1978). Mind and society: The development of higher mental processes . Cambridge, MA: Harvard University Press.

Wang, X., Schneider, C., & Valacich, J. S. (2015). Enhancing creativity in group collaboration: How performance targets and feedback shape perceptions and idea generation performance. Computers in Human Behavior , 42 , 187–195.

Wing, J. M. (2006). Computational thinking. Communications of the ACM , 49 (3), 33–35.

Wing, J. M. (2008). Computational thinking and thinking about computing. Philosophical Transactions of the Royal Society of London A: Mathematical, Physical and Engineering Sciences , 366 (1881), 3717–3725.

Wing, J. (2011). Research notebook: Computational thinking-What and why? The Link Newsletter, 6 , 1–32. Retrieved from http://link.cs.cmu.edu/files/11-399_The_Link_Newsletter-3.pdf .

Zhong, B., Wang, Q., Chen, J., & Li, Y. (2016). An exploration of three-dimensional integrated assessment for computational thinking. Journal of Educational Computing Research , 53 (4), 562–590.

Acknowledgements

We acknowledge the contribution of John Teye for his advice during the linguistic revision.

This project has been funded by the Fonds de recherche du Québec – Société et culture (FRQSC).

Author information

Authors and affiliations.

Laboratoire d’Innovation et Numérique pour l’Education, Université Nice Sophia Antipolis, Nice, France

Margarida Romero

Université Laval, Québec, Canada

Alexandre Lepage & Benjamin Lille

Contributions

All persons who meet authorship criteria are listed as authors (RLL), and all authors (RLL) certify that they have participated sufficiently in the work to take public responsibility for its content, including participation in the concept, design, analysis, writing, or revision of the manuscript. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Margarida Romero .

Ethics declarations

Competing interests.

All authors (Romero, Lepage, Lille) declare that we have no competing financial, professional or personal interests that might have influenced the performance or presentation of the work described in this manuscript.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.

About this article

Cite this article.

Romero, M., Lepage, A. & Lille, B. Computational thinking development through creative programming in higher education. Int J Educ Technol High Educ 14, 42 (2017). https://doi.org/10.1186/s41239-017-0080-z

Received: 14 June 2017

Accepted: 20 November 2017

Published: 12 December 2017

DOI: https://doi.org/10.1186/s41239-017-0080-z


Keywords: Computational thinking, Problem-solving

Relationships between computational thinking and the quality of computer programs

Open access | Published: 03 March 2022 | Volume 27, pages 8289–8310 (2022)

Kay-Dennis Boom, Matt Bower (ORCID: 0000-0002-4161-5816), Jens Siemon & Amaël Arguel

Computational thinking – the ability to reformulate and solve problems in ways that can be undertaken by computers – has been heralded as a foundational capability for the 21st Century. However, there are potentially different ways to conceptualise and measure computational thinking, for instance, as generalized problem solving capabilities or as applied practice during computer programming tasks, and there is little evidence to substantiate whether higher computational thinking capabilities using either of these measures result in better quality computer programs. This study examines the relationship between different forms of computational thinking and two different measures of programming quality for a group of 37 pairs of pre-service teachers. General computational thinking capabilities were measured using Bebras tests, while applied computational thinking processes were measured using a Computational Thinking Behavioural Scheme. The quality of computer programs was measured using a qualitative rubric, and programs were also assessed using the Dr Scratch auto-grading platform. The Test of Nonverbal Intelligence (3rd edition, TONI-3) was used to test for confounding effects. While significant correlations between both measures of computational thinking and program quality were detected, regression analysis revealed that only applied computational thinking processes significantly predicted program quality (general computational thinking capability and non-verbal intelligence were not significant predictors). The results highlight the importance of students developing applied computational thinking procedural capabilities more than generalized computational thinking capabilities in order to improve the quality of their computer programs.

1 Introduction

1.1 Context of the problem

Since Wing’s seminal article in 2006, computational thinking (CT) has been intensively discussed in the field of computer science education (Tang, Chou, & Tsai, 2020). CT can be regarded as the ability to reformulate problems in ways that allow computers to be used to help solve them (International Society for Technology in Education [ISTE] & the Computer Science Teachers Association [CSTA], 2011). The value proposition of computational thinking capabilities in a digital age is that they can help people solve a range of problems that lead to personal satisfaction and success, not only in the technology domain but also in life more broadly. However, the conjecture that possessing computational thinking knowledge, or applying computational thinking skills while solving problems, leads to higher quality solutions has rarely been empirically validated.

One aspect of computational thinking often emphasized by advocates is that it is not simply computer programming capability. Research about the effects of computational thinking knowledge and/or skill can be divided into effects on computational problem solving (e.g., computer programming) and effects on diverse non-programming problems or tasks. For example, a wide range of problems, from finding the shortest route between map locations to designing an online shopping platform, rely on people applying computational thinking processes while they write computer programs to solve those problems. However, computational thinking skills (such as problem decomposition, pattern recognition, algorithmic thinking, and abstraction) can also be used to solve a range of problems that do not involve computer programming, such as finding a way through a maze or specifying the steps in a dance sequence. While learning computer programming relies on people utilizing and applying computational thinking as part of the process they undertake, instructional settings will often use computational thinking foundations to teach subjects and ideas that do not involve computer programming (e.g., Bull, Garofalo, & Nguyen, 2020). In fact, a literature review conducted by Tang et al. (2020) concluded that far more computational thinking effects were analyzed in subject areas not related to computer science (n = 240) than in areas related to computer science (n = 78). However, we note that while computational thinking can be applied in a range of disciplines, it is considered absolutely essential and fundamental to successful computer programming (Angeli & Giannakos, 2020; Lye & Koh, 2014). Yet we could not find any studies in the literature that examined whether or not computational thinking capabilities do in fact relate to higher quality computer programs.

The purpose of this study is to evaluate the extent to which general computational thinking knowledge, as well as computational thinking processes applied during problem-solving tasks, influence the quality of computer programming solutions. This was achieved by comparing university students’ computational thinking knowledge (as measured by Bebras tests) and the computational thinking processes observed while they wrote computer programs with the quality of the final computing products they produced. The findings of this study have implications for how computational thinking is framed, conceptualized, and emphasized within education and society.

2 Literature review

2.1 Defining computational thinking and its subcomponents

Computational thinking is generally seen as an attitude and a skill for solving problems, designing complex systems, and understanding human thoughts and behaviors, based on concepts fundamental to computer science (Lye & Koh, 2014). Recent reviews of computational thinking definitions and components by Shute, Sun, and Asbell-Clarke (2017) and Ezeamuzie and Leung (2021) point out the lack of a consistent definition of computational thinking, though some terms are more popular than others (such as abstraction, algorithm design, decomposition, and pattern recognition as generalisation), particularly when academics devise explicit definitions in relation to their research. Some inconsistency between definitions of components can occur, at times not because there is disagreement about what computational thinking involves, but because other frequently used terms such as ‘sequencing’, ‘conditional logic’, and ‘loops’ can conceptually fall within overarching categories (in this case, ‘algorithm design’).

In this study, we draw upon generally accepted core components of computational thinking: problem decomposition, pattern recognition (generalisation), algorithmic thinking, and abstraction, which accords with other definitional work in the research field (Angeli & Giannakos, 2020; Cansu & Cansu, 2019; Tsai, Liang, Lee, & Hsu, 2021). We acknowledge that other aspects of computational thinking are identified in some studies, such as ‘parallelism’, ‘data collection’, and ‘modelling’, as outlined by Shute et al. (2017); however, as Ezeamuzie and Leung (2021) point out, these other terms are relatively uncommon, and they are not processes utilised for all computational thinking problems. Selecting problem decomposition, pattern recognition (as generalisation), algorithmic thinking, and abstraction as the components of computational thinking in this study also corresponds with approaches adopted in industry (for example, Csizmadia, Curzon, Dorling, Humphreys, Ng, Selby, & Woollard, 2015; McNicholl, 2019).

One crucial part of any computational thinking task is problem decomposition, the division of a problem into smaller chunks. Problem decomposition was identified as a general problem-solving strategy well before the advent of computational thinking (Anderson, 2015). In computational problems, decomposition is particularly important because of its relationship to modularity, where the complexity of a task can be reduced by identifying smaller parts that can each be addressed separately (Atmatzidou & Demetriadis, 2016). For example, when programming a multimedia story, one might first identify the different scenes that occur, and then break each scene into a series of actions by the characters, as in the sketch below.
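
A minimal Python sketch of this decomposition, with an invented two-scene story: the overall problem is split into scenes, and each scene into character actions, so that each part can be written and tested separately.

```python
# Decomposition sketch: the top-level story is just a sequence of sub-problems.
def perform(character, actions):
    for action in actions:
        print(f"{character}: {action}")

def scene_one():
    perform("Cat", ["enter stage", "say hello"])

def scene_two():
    perform("Robot", ["enter stage", "draw a line", "wave goodbye"])

def tell_story():
    scene_one()
    scene_two()

tell_story()
```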

Another component of computational thinking is abstraction, in the sense of ignoring unimportant details and instead focusing on relevant information. From a psychological perspective, abstraction is a thought process used to achieve organised thinking (Shivhare & Kumar, 2016). In computational problems, abstraction enables people to concentrate on the essential, relevant, and important parts of the context and solution (Thalheim, 2009). For instance, when writing a multimedia story in which characters dance about a screen, a person may recognise that their program only needs to attend to the coordinates of the characters and their size, and the routines they write can be applied to numerous characters irrespective of their colours or costumes, as illustrated below.
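
A hedged Python sketch of this kind of abstraction, with invented characters: the dance routine depends only on position and size, so the same code works for any character regardless of costume.

```python
# Abstraction sketch: only the relevant attributes (position and size) are used.
def dance(character):
    x, y, size = character["x"], character["y"], character["size"]
    # Move in a small diamond around the starting point, scaled to the character's size.
    for dx, dy in [(size, 0), (0, size), (-size, 0), (0, -size)]:
        print(f"move to ({x + dx}, {y + dy})")

cat = {"x": 0, "y": 0, "size": 10, "costume": "orange cat"}     # costume is irrelevant
robot = {"x": 50, "y": 20, "size": 5, "costume": "blue robot"}  # to the routine
dance(cat)
dance(robot)
```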

A further critical thought process when engaging in computational thinking is pattern recognition. Pattern recognition involves being able to infer rules based on observations and apply these rules to instances that have never been encountered (vis-à-vis Posner & Keele, 1968). Pattern recognition is crucial when solving computational problems, because rules inferred from observations can then be translated into instructions that can be used to solve problems. For instance, when a person realises that a square can be drawn by drawing a straight line and then turning 90 degrees, four times over, they can easily and efficiently specify a set of instructions for a computer (or human) to execute the process (see the sketch below). It is important to note that pattern recognition is closely related to abstraction as a form of ignoring irrelevant details, but it is generally regarded as distinct by virtue of distilling those aspects of a situation that repeat or recur in certain ways.
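
A short sketch of that recognised pattern using Python’s standard turtle module: the rule “line, then 90-degree turn, four times” becomes a procedure, and the same pattern generalises to any regular polygon. The function name is ours.

```python
# Pattern recognition sketch: the inferred rule becomes a reusable set of instructions.
import turtle

def regular_polygon(sides, length):
    for _ in range(sides):
        turtle.forward(length)
        turtle.left(360 / sides)   # 90 degrees when sides == 4 (the square case)

regular_polygon(4, 100)   # the square from the example
regular_polygon(6, 60)    # the same recognised pattern, reused for a hexagon
turtle.done()
```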

The fourth computational thinking category in this study is algorithmic thinking. An algorithm is a well-defined procedure or ‘recipe’ that defines how inputs can be used to achieve a specific objective (Cormen et al., 2014; Sipser, 2013). Algorithmic thinking has roots in cognitive psychology in the form of scripts, which help people know how to behave in social or behavioural contexts (for instance, going to a restaurant or playing a game; see Schank & Abelson, 1977). When solving computational problems, algorithmic thinking enables people to translate their abstract ideas and the patterns they recognise into a set of procedures, for instance having a robot trace out a square and then dance on the spot (see the sketch below). For the purposes of this study, algorithmic thinking also includes the thinking required to resolve errors that occur in early versions of algorithm designs (the process known in computing as ‘debugging’), thus overcoming issues associated with delineating these two intrinsically interrelated processes.
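
A minimal sketch of this translation step (ours, with an invented instruction format for a hypothetical robot): abstract ideas and recognised patterns become an explicit, inspectable procedure, which is also what gets examined and corrected during debugging when the observed behaviour differs from the intention.

```python
# Algorithmic thinking sketch: produce an explicit, step-by-step procedure for a
# hypothetical robot to trace a square and then dance on the spot.
def square_then_dance(side_cm):
    instructions = []
    for _ in range(4):                     # the recognised square pattern
        instructions.append(f"forward {side_cm} cm")
        instructions.append("turn left 90 degrees")
    for _ in range(2):                     # the dance, specified as a repeatable step
        instructions.append("spin 360 degrees")
        instructions.append("flash lights")
    return instructions

# Inspecting the produced steps is where debugging happens: the procedure is
# corrected whenever its output does not match the intended behaviour.
for step in square_then_dance(20):
    print(step)
```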

2.2 Ways of measuring computational thinking

When defining a skill, the question arises whether it is possible to measure it and differentiate it from other, possibly overlapping or more general skills. For computational thinking, the existing measurement methods can be broadly divided into the assessment of computational thinking as knowledge that is applied or tested (input), the assessment of computational thinking as a skill observed during a problem solving activity (process), and (theoretically also) the assessment of computational thinking by analyzing the result of a task (output). All measures are subsequently used as indicators of the existence and the level of the respective type of computational thinking competence.

The internationally best-known instruments for measuring general computational thinking knowledge are the Bebras Challenges. The main idea behind the Bebras Challenges has been to create abstract, non-computing problems that require specific cognitive abilities rather than technical knowledge or coding experience (Dagienė & Sentance, 2016). Examples of Bebras tasks can be found at https://www.bebras.org/examples.html . Different studies have shown that abilities such as breaking down problems into parts, interpreting patterns and models, and designing and implementing algorithms are needed to solve Bebras problems (Lockwood & Mooney, 2018; Araujo, Andrade, Guerrero, & Melo, 2019). There are also other approaches to measuring general computational thinking knowledge, both within computer programming contexts and in other disciplinary contexts, many of which have been applied in the training of teachers. For instance, Zha, Jin, Moore, and Gaston (2020) used multiple-choice knowledge quizzes about computational thinking and Hopscotch coding to measure the impact of a team-based and flipped learning introduction to the Hopscotch block coding platform. In a study exploring the effects of a 13-week algorithm education course on 24 preservice teachers, Türker & Pala (2020) used the "Computational Thinking Skills Scale" (CTSS, from Korucu, Gencturk & Gundogdu, 2017), comprising the computational thinking facets creativity, algorithmic thinking, collaboration, critical thinking and problem solving. Suters and Suters (2020) report on a paper-and-pencil based computational thinking knowledge assessment to measure the effects of an extended summer institute for middle school mathematics teachers ( n = 22) undertaking training in computer programming with Bootstrap Algebra and Lego® Mindstorms® robotics. The content assessment consisted of items that integrated mathematics common core content with facets of computational thinking, in line with research endeavors recognizing the need to contextualize computational thinking within specific disciplines (Gadanidis, 2017; Grover & Pea, 2013; Weintrop et al., 2016). All of these approaches to computational thinking knowledge assessment share an emphasis on short, often multiple-choice, closed questioning to measure computational thinking, rather than examining the computational thinking that arises as part of authentic and more extended problem solving contexts.

In a second variant of possible computational thinking measurement, the process of solving a context-dependent task – mostly typically a programming task – is observed and analyzed with regard to the abilities which are considered to be part of computational thinking skill. Skill analysis based on observations is a comparatively underdeveloped field. Brennan & Resnick ( 2012 ) seminally investigated the computational thinking processes and practices that children undertook while designing their programs using the visual programming platform Scratch, noting that “framing computational thinking solely around concepts insufficiently represented other elements of students learning” (p. 6). Their qualitative observations and interviews identified computational thinking practices such as being incremental and iterative, testing and debugging, reusing and remixing, and abstracting and modularizing. However, their results were not reported based on any sort of observational coding of participants, so that there is no indication of time spent on each of these processes while solving computing problems.

While analysis of learner pre- and/or post- interview narratives has been previously conducted to determine evidence of computational thinking (Grover, 2011 ; Portelance & Bers, 2015 ), we were not able to find any computational thinking analyses involving systematic examination of narratives emerging from participants while they were solving authentic programming problems. However, there are examples of observing and thematically categorizing computer programming processes and narratives (Bower & Hedberg, 2010 ; Knobelsdorf & Frede, 2016 ). These approaches provide a basis for in-situ observation and subsequent qualitative analysis of programming activity for computational thinking constructs such as problem deconstruction, abstraction, pattern recognition and algorithmic thinking, and our study is based on these more systematic observational approaches.

2.3 Measuring the quality of computer programs

There are a range of qualities that can be used to evaluate computer programs, such as the extent to which the code functionally achieves its intentions, avoids unnecessary repetition, is well organized, and so on (Martin, 2009). Much of the research relating to evaluating the quality of computer programs examines ways of auto-assessing student work (for instance, Ihantola, Ahoniemi, Karavirta, & Seppälä, 2010; Pieterse, 2013). However, automated tools struggle to accurately assess computational thinking (Poulakis & Politis, 2021), and recent work points out the need to look beyond raw functionality and 'black-box' testing of outputs, to examine the inner workings of code and algorithms (Jin & Charpentier, 2020). Some research also examines the extent to which computational thinking is evident within the final programming product itself, by virtue of the code fragments that are used and their sophistication. Brennan and Resnick (2012) examined whether aspects of computational thinking were present in students' block-based Scratch programs. Grover et al. have manually evaluated the computational thinking evident in students' Scratch programs, though without providing detail of the process and rubrics (Grover, 2017; Grover, Pea, & Cooper, 2015). An increasingly renowned innovation, Dr Scratch, combines automated assessment, examination of the inner workings of programs, and analysis of computational thinking to provide a measure of program quality for Scratch programs (Moreno-León & Robles, 2015). One study has established a strong correlation ( r = 0.682) between the Dr Scratch automated assessment of computational thinking evident within students' Scratch programs and manual evaluation of computational thinking within Scratch programs by human experts (Moreno-León et al., 2017). However, the computational thinking within a computer program is not necessarily a proxy for overall program quality, and the extent to which program quality relates to the computational thinking knowledge and computational thinking processes of program authors is an open question.

2.4 Research question

Thus, having established the lack of empirical evidence to suggest that general computational thinking knowledge or in-situ computational thinking processes are related to computing performance, and armed with potential ways to operationalize and measure computational thinking knowledge, computational thinking processes, and the quality of computer programs, this study examines the following research questions:

Is the quality of computer programming solutions that people produce related to their general computational thinking knowledge?

Is the quality of computer programming solutions that people produce related to the applied computational thinking processes that they undertake?

3.1 Participants

The sample for this study was drawn from 74 pre-service teachers completing a digital creativity and learning course at an Australian university. Among them 68% were female, 30% male and 2% preferred not to say. On average, participants were 23.9 years old ( SD = 5.2). In terms of language proficiency, 97% indicated that they spoke English fluently or were native speakers. In terms of prior knowledge, 97% had no or only little prior programming experience and none of the participants were familiar with the Scratch programming environment that was used for the study.

3.2 Instruments

3.2.1 Measuring computational thinking knowledge

To measure computational thinking knowledge as it arises in general problem solving contexts, participants solved an online version of adapted Bebras tasks. All tasks were chosen from the Australian versions of the Bebras contests from 2014 (Schulz & Hobson, 2015) and 2015 (Schulz, Hobson, & Zagami, 2016). Only tasks from the oldest available age group were selected (i.e., for adolescents 16 to 18 years of age, school levels 11 and 12). The tasks were slightly revised and presented without any iconic beavers or other comical pictures in order to be more appropriate for the university participants in this study. Although there is still a considerable age gap between the targeted age group of the tasks and the actual age of participants, it was not expected that this difference would cause any problems (e.g., ceiling effects) because pre-service teachers on the whole were not expected to be familiar with or particularly adept at computational thinking tasks. The scoring of participant performance was based on the scoring system recommended by the founder of the Australian version of the Bebras tasks (Schulz et al., 2016). There were eight tasks considered easy (worth two points each), seven medium (three points), and five hard (four points), resulting in 20 tasks in total with a maximum achievable score of 57.
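For clarity, the scoring arithmetic described above can be checked with a few lines (a trivial sketch of the reported scheme, not the official scoring code):

```python
# 8 easy tasks (2 points), 7 medium (3 points), 5 hard (4 points)
tasks = {"easy": (8, 2), "medium": (7, 3), "hard": (5, 4)}
total_tasks = sum(count for count, _ in tasks.values())              # 20 tasks
max_score = sum(count * points for count, points in tasks.values())  # 57 points
print(total_tasks, max_score)
```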

3.2.2 Observing computational thinking processes

To enable participants to demonstrate how much time they spent on computational thinking-relevant processes while programming, participants were set a Scratch programming task. Scratch itself was developed in 2003 at the MIT Media Laboratory and publicly launched in 2007 (Resnick et al., 2009). It is one of the first and one of the most popular open-source visual programming environments. In visual programming environments, users connect code blocks with each other instead of actually writing code as in text-based programming languages.

To prepare students for the task, they were given 45 min to review the Scratch tutorials available from within the Scratch platform. They were also allowed to access these tutorials during the programming task. The task itself was defined as follows: “ Program a story or a game where a hero has to overcome a challenge in order to defeat the villain(s) .”

This task was chosen because it is somewhat open-ended and can be solved in the chosen Scratch development framework without prior programming knowledge. Furthermore, computational thinking subskills (problem decomposition, pattern recognition, abstraction, algorithm design) would most likely have to be used to solve the task. The way in which the Scratch programming environment, task, and participants may influence the generalizability of results is considered in the Discussion section of this paper.

To reliably assess the amount of time participants spent on computational thinking processes during their Scratch programming session, a computational thinking behavior scheme (CTBS) was developed. The CTBS was based on event sampling, involving analysis of how often and for how long specific behavioral cues occur. Based on the literature review, four components were identified as the main features of computational thinking and as the latent constructs in the CTBS: decomposition, abstraction (in the sense of ignoring unimportant details), pattern recognition, and designing and applying algorithms (see the operationalization of these constructs in Table 1 below). Two researchers coded five entire videos independently to assess inter-rater reliability. As a result, at least two thirds of the events were identified by both raters. The frequency of agreement for the five videos lay between 66.67% and 72.50%, and the κ coefficients ranged from 0.58 to 0.67. Overall, the reliability can be interpreted as moderate (Landis & Koch, 1977). Note that the CTBS measured the time spent on computational thinking components, not the correctness of the computational thinking processes. It is to be expected that, during the process of solving computational problems, people may not always immediately have correct thoughts about the right course of action; this study sought to examine the relationship between the time spent on computational thinking processes and the quality of programming products.
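To illustrate the kind of reliability calculation reported above, the sketch below computes raw agreement and Cohen's kappa for two raters' event codes. The data are invented for illustration, and scikit-learn is simply one convenient implementation, not necessarily the tool used in the study.

```python
from sklearn.metrics import cohen_kappa_score

# Invented event codes from two raters for the same observed segments
rater_1 = ["algorithm", "decomposition", "other", "algorithm", "pattern", "other"]
rater_2 = ["algorithm", "decomposition", "other", "pattern", "pattern", "other"]

agreement = sum(a == b for a, b in zip(rater_1, rater_2)) / len(rater_1)
kappa = cohen_kappa_score(rater_1, rater_2)
print(f"raw agreement = {agreement:.0%}, kappa = {kappa:.2f}")
```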

3.2.3 Measuring program quality

To measure participants' program quality, two measures were used. First, a rubric scheme loosely based on Martin's (2009) "Clean Code" was developed specifically for this study. The program quality criteria were based on five categories: richness of project , variety of code usage , organization and tidiness , functionality of code and coding efficiency . Richness of project described how much was happening in the Scratch project. Lower scores were given when only one element was programmed to perform only one behaviour, while Scratch projects consisting of several programmed elements that were related to each other received higher scores. The variety of code usage depended on the kinds of code blocks that were used. Scratch projects were rated lower when they mainly consisted of simple code chunks such as motion or looks, and higher when more advanced chunks like control or sensing were used. The category organization and tidiness took into account the extent to which the control section in Scratch was organized, with more organized Scratch projects receiving higher scores. Functionality was assessed based on whether the intention of the Scratch project was clear and whether it worked as intended. Projects received higher scores when they ran smoothly and the intention was easy to understand. The category efficiency described the use of code controlling the flow of execution and the number of unnecessary duplications. Lower scores were given to projects having many such duplicates, while more generalized and more abstract code scripts received higher scores. An example of a program with unnecessary duplication is shown in Fig. 1 (left), compared to a more efficiently represented code block in Fig. 1 (right).

Fig. 1 Two examples of the same function coded differently: a version with unnecessary duplicates (left) and a more efficient version (right)

The five code quality categories were all rated on a scale from 0 ( not evident ), through 1 ( poor ), 2 ( satisfactory ), and 3 ( good ), up to 4 ( excellent ). A weighted mean over all categories was calculated to provide a general assessment. The weight for each category was based on its importance for program quality: richness of project, variety of code usage, and functionality each contributed 20%, efficiency 30%, and organization and tidiness 10% to the weighted mean. The quality criteria and the (weighted) scoring system of the scheme were discussed with two computer science education professionals to uphold the content validity of the measure. In addition, one of the CS education professionals rated the Scratch projects to obtain a reliability assessment. Inter-rater reliability was high, with ICC(3,1) 95% CI [0.87, 0.96].
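A minimal sketch of the weighted mean described above, using invented ratings for one hypothetical project:

```python
weights = {
    "richness of project": 0.20,
    "variety of code usage": 0.20,
    "functionality": 0.20,
    "efficiency": 0.30,
    "organization and tidiness": 0.10,
}
ratings = {  # each category rated from 0 (not evident) to 4 (excellent)
    "richness of project": 3,
    "variety of code usage": 2,
    "functionality": 3,
    "efficiency": 2,
    "organization and tidiness": 4,
}
weighted_mean = sum(weights[c] * ratings[c] for c in weights)
print(round(weighted_mean, 2))  # 2.6 for these invented ratings
```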

The second measure of program quality was based on Dr Scratch (Moreno-León & Robles, 2015). Dr Scratch provides a measure of program quality based on seven dimensions relevant to CS: abstraction and problem decomposition, parallelism, logical thinking, synchronization, algorithmic notions of flow control, user interactivity, and data representation. Dimensions are judged as 0 ( not evident ), 1 ( basic ), 2 ( developing ), or 3 ( proficient ). Scores are aggregated over all dimensions, resulting in a total evaluation score (mastery score) from 0 to 21. Mastery scores between 8 and 14 are regarded as developing, scores lower than 8 as basic, and scores above 14 as proficient. High correlations between Dr Scratch mastery scores and experts' judgments of program quality can be used as an indicator of satisfactory criterion validity (Moreno-León, Román-González, Harteveld, & Robles, 2017). While Dr Scratch focuses primarily on computational thinking elements as opposed to other aspects of computer programming (e.g., organization of code, efficiency), it is based on the final computer programming solution that is produced, and thus provides an interesting alternative measure of program quality for this computational thinking study.
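For illustration, the Dr Scratch aggregation described above amounts to summing seven dimension scores and banding the total; the dimension scores below are invented.

```python
dimension_scores = {  # each Dr Scratch dimension is judged from 0 to 3
    "abstraction and problem decomposition": 2,
    "parallelism": 1,
    "logical thinking": 2,
    "synchronization": 1,
    "flow control": 2,
    "user interactivity": 1,
    "data representation": 1,
}
mastery = sum(dimension_scores.values())  # total mastery score, 0 to 21
band = "basic" if mastery < 8 else "developing" if mastery <= 14 else "proficient"
print(mastery, band)  # 10 developing
```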

3.2.4 Test of nonverbal intelligence

To account for potential confounding effects, participants' nonverbal intelligence was also measured. For this, the Test of Nonverbal Intelligence (3rd edition; TONI-3, Brown, Sherbeernou, & Johnson, 1997) was used. The TONI-3 is a classic culture-fair test (i.e., minimally linguistically demanding) in which, as in many such tests, participants need to identify the correct figure in a set of abstract, geometrical pictures. The test consists of 45 items, has an average testing time of 15 minutes, and has satisfactory psychometric properties (Banks & Franzen, 2010).

3.3 Procedure

Initially, participants completed the Bebras computational thinking knowledge test and the test of nonverbal intelligence (TONI-3) online. One week later, the second phase took place in university classrooms, where participants attempted the task in Scratch. To collect rich video material with many verbal and nonverbal indicators for the research team to analyze, participants were organized in pairs. It was hoped that working in pairs would encourage participants to talk and engage more with each other. The pairs were formed based on similar Bebras scores to minimize any effects due to large differences in competence. In total, 37 pairs were formed and filmed while working on the task, forming the corpus for the analysis.

3.4 Analysis

All statistical analysis was conducted using the R statistics programming environment. In order to acquire a sense of the data, basic descriptive statistics including means and standard deviations were calculated for all five measures (Bebras scores, time spent on different computational thinking processes, program quality rubric score, Dr Scratch score, TONI-3 non-verbal intelligence score). Because participants worked on the programming task in pairs, all programming assessments based on the rubric scheme, the additional Scratch evaluation assessment based on Dr Scratch, and the assessment of how much time participants spent on computational thinking behavior based on CTBS, were paired values. Scores on the Bebras tasks and TONI-3 test were averaged for each pair. Of the 37 pairs of participants who agreed to complete the Bebras test and have their final programs used in the study, 27 agreed to be video recorded for the purposes of the CTBS analysis, and 32 pairs agreed to complete the TONI-3 test.

Spearman's ρ correlations were computed between all five measures using all available data, to determine whether the underlying variables were directly correlated. Finally, in order to account for the possibility of moderating variables, two regression models were estimated with the two program quality measures as outcomes (program quality rubric score and Dr Scratch score). These two regression models used the Bebras task scores, the CTBS, and the TONI-3 IQ scores as predictors, so that it was possible to detect whether any of these were moderating variables.
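As a sketch of these analysis steps (the study used R; the snippet below mirrors the same steps in Python with synthetic data, so the numbers are not the study's results):

```python
import numpy as np
import pandas as pd
from scipy.stats import spearmanr
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_pairs = 27  # pairs with complete video-coded data
data = pd.DataFrame({
    "bebras": rng.normal(57, 19, n_pairs),      # general CT knowledge score (%)
    "ct_time": rng.normal(0.5, 0.15, n_pairs),  # share of time on CT processes
    "toni_iq": rng.normal(113, 14, n_pairs),    # nonverbal intelligence
    "quality": rng.normal(2.0, 0.6, n_pairs),   # program quality rubric score
})

# Bivariate Spearman correlation between CT process time and program quality
rho, p = spearmanr(data["ct_time"], data["quality"])
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")

# Regression with program quality as outcome and all three predictors
X = sm.add_constant(data[["bebras", "ct_time", "toni_iq"]])
print(sm.OLS(data["quality"], X).fit().summary())
```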

4.1 Descriptive statistics and measurement outcomes

The average score for the measure of general computational thinking capability (Bebras tasks) was 57.03% ( SD = 18.6%). Scores ranged from 21% to 100%, with one participant achieving the maximum. Results indicated a medium level of test difficulty, with no serious problems due to ceiling or floor effects. On the TONI-3 IQ, participants achieved an average intelligence score of 113.12 ( SD = 14.17). The mean of this sample was slightly higher than the expected population value (µ = 100; see, for example, Sternberg, 2017), which can be explained by the fact that the sample was drawn from university students. The time participants needed to complete the Bebras tasks ( Md = 55 min) and the TONI-3 IQ ( Md = 22 min) roughly aligned with the expected times of 60 min (Dagienė & Futschek, 2008) and 15 min (Brown et al., 1997), respectively.

Table 2 shows that, while writing their programs, the participants whose sessions were coded spent nearly half of their time on computational thinking behaviors, with algorithmic design making the largest contribution and little time spent on decomposition and pattern recognition. Pattern recognition was observed in less than half of all pairs. No sign of abstraction, in the sense of neglecting information, was observed for any pair.

Table 3 contains an overview of scores achieved by the pairs of participants in the rubric scheme for program quality. The full range of the rating scales (0 to 4) was used. The distributions of all five dimensions had their center at around 2 (i.e., satisfactory level).

In comparison, participants’ Scratch projects typically only achieved a basic rating according to Dr Scratch, with only two dimensions rated as developing (See Table  4 ).

4.2 Correlations between variables

As a first step towards analyzing which of the two computational thinking measures (general computational thinking knowledge as measured by Bebras versus computational thinking processes as observed in practice) has a greater relationship to program quality, Spearman's ρ correlations were computed (see Table 4). The correlation between the two measures of program quality (weighted means based on the developed rubric scheme and Dr Scratch mastery scores) revealed a significant relationship, ρ = 0.61, p < .001. Based on common interpretations of effect sizes (Cohen, 1988), this correlation can be interpreted as large. The large correlation between the two measures of program quality reveals a degree of consistency in their assessment of student programs.

Significant positive correlations were found between general computational thinking knowledge (Bebras scores) and both measures of program quality, with borderline small-to-medium effect sizes (see Table 5). Significant positive correlations were also found between time spent on computational thinking processes while programming and both measures of program quality, with quite large effects.

Because of some potential (partial) conceptual overlap between nonverbal intelligence and computational thinking, the correlations between the TONI-3 IQ and the computational thinking measures were calculated as well. On the one hand, the correlation between the TONI-3 IQ and the Bebras scores was significant and positive, with a medium to large effect size, ρ = 0.49, p = 0.002. On the other hand, the correlation between the TONI-3 IQ and time spent on computational thinking processes while programming was not statistically significant, ρ = 0.09, p = 0.346.

4.3 Regression analysis

As explained in the Methodology section, two regression models were estimated with both program quality measures as outcomes and both computational thinking measures and the TONI-3 IQ scores as predictors. Standardized parameter estimates and tests of significance for the regression models are shown in Table 6. The regression models only partly supported the findings from the correlations. The positive correlation between the Bebras score and both measures of program quality vanished when taking into account the effect of TONI-3 IQ. The only significant predictor of both measures of programming quality was the computational thinking process score.

Post hoc analyses were performed for both regression models to estimate power. Based on the given parameters ( N = 24, number of predictors = 3, effect sizes R² = 0.50 for the program quality rubric and R² = 0.44 for Dr Scratch, and α = 0.05), a power of > 0.99 was achieved for both models. Because of the small sample size, the assumptions of linear multiple regression such as homoscedasticity, multicollinearity, and the distribution of residuals were rigorously checked. No serious violations of any assumption were found, though it should be noted that the residuals were not normally distributed when the outcome was programming quality, based on the Shapiro-Francia test, W' = 0.88, p = 0.011. In conclusion, the power of both regression models was sufficiently high and the regression coefficients can be interpreted as "best linear regression estimations".

5 Discussion

The general computational thinking knowledge scores (Bebras) and the computational thinking procedural performance (as indicated by the CTBS) were both positively correlated with both program quality measures (the rubric scheme and the Dr Scratch mastery score). A general interpretation could therefore be that the higher the level of both general computational thinking knowledge and applied computational thinking in practice, the better the program quality. However, this interpretation would be premature, because the regression analyses revealed that only one of them, applied computational thinking in practice, was a significant predictor of program quality when controlling for other variables such as the level of nonverbal intelligence and general computational thinking knowledge. The reason the two computational thinking measures predict programming quality differently might lie in the different perspectives underlying them, and how these might mediate the relationship with program quality.

The Bebras tasks focus on general and conceptual aspects of computational thinking. Correlations between the Bebras score and the TONI-3 IQ were between moderate and strong. As with most instruments for nonverbal intelligence, the TONI-3 is based on pictures in which participants need to identify similar instances and recognize patterns. Many of the Bebras tasks are designed in a similar fashion. The original idea behind the Bebras tasks was to create a test about CS concepts "independent from specific systems" to avoid contestants being dependent on prior knowledge of any specific IT system (Dagienė & Futschek, 2008, p. 22). This may have led to some items being similar to those of nonverbal intelligence tests.

As found in some prior studies, this has also caused confusion for some Bebras contestants. Vaníček (2014) asked participants for their opinions about the Bebras tasks. Some questioned the purpose and validity of the test, stating "I wonder what the contest questions have to do with informatics. Maybe nothing at all?". If (at least some) Bebras tasks are similar to those of nonverbal intelligence tests and there is a high and significant positive correlation between both measures, it is possible that both tests measure similar constructs. This would explain why the relationship between the Bebras scores and program quality vanished when controlling for TONI-3 IQ. The Bebras tasks have been validated by several studies (Dagienė & Stupuriene, 2016; Dolgopolovas, Jevsikova, Savulionienė, & Dagienė, 2015; Lockwood & Mooney, 2018), but none of these studies controlled for potential confounding effects of similar psychological constructs such as nonverbal intelligence. We could only find one study in which the potential relationship between the Bebras tasks and nonverbal intelligence has been discussed, with similar findings to our study (Román-González, Pérez-González, & Jiménez-Fernández, 2017). Thus, it is possible that the Bebras tasks do measure computational thinking, but mainly the facets of abstract thinking and pattern recognition.

It is possible that these abstract parts of computational thinking alone are not a good predictor of programming quality because extensive cognitive effort is required to transfer the skills for application in different situations and settings. Even though some similar skills are required to solve both kinds of tasks (the Bebras tasks as well as the programming task in this study), it would require a high level of transferability from these abstract logical quizzes to real applied programming situations. Moreover, according to the authors of the Bebras tasks, participants need to apply the same cognitive abilities as needed for programming tasks such as problem deconstruction, thinking abstractly, recognizing patterns, and being able to understand, design, and evaluate algorithms (Dagienė & Sentance, 2016 ). However, the content of the Bebras tasks (as for many ‘unplugged’ methods) is very different from real programming tasks. This may lead to general computational thinking as measured by Bebras tasks not providing a good predictor of program quality above and beyond that which is captured and controlled for by general measures of intelligence (such as the TONI-3-IQ).

In our opinion, the results can be well explained in terms of the thesis of the disproportion between application extensity and intensity of use (Weinert, 1994). This theory asserts that the more general a rule or strategy is, the smaller its contribution to solving challenging, content-specific problems. This could also apply to the computational thinking skills captured by the Bebras tasks. The measured skills are very general and partly overlap with general facets of intelligence. Their contribution to solving a challenging, content-specific problem might therefore be rather small and statistically hard to detect. At least, this would be one possible interpretation of the rather weak correlation, and of the disappearance of the relationship between general computational thinking knowledge and program quality in the regression analysis.

In contrast to the Bebras tasks, the focus of the CTBS lies on participants' applied computational thinking processes in practice. Correlations indicated that the more time participants spent on applied computational thinking processes, the better the programming quality of their Scratch project. It must be pointed out that this was mostly due to algorithmic design, which was the most frequently applied computational thinking activity measured. As stated before, participants were working on their code from the start of the session, so there is a logical interpretation that the longer participants spent on algorithmic design, the better the quality of their programs. Even after controlling for the other measures, this relationship was still significant and persisted in both regression models, with the programming quality rubric and the Dr Scratch project evaluation as outcomes, respectively. What is even more remarkable is that computational thinking processes were significantly correlated with program quality even though the correctness of the computational thinking processes was not assessed in this study. That is to say, more time spent thinking about computational thinking components while solving the computing problem led to better quality programming solutions, even when at times that computational thinking may not have been 'right'. This is in line with the learning concept of 'productive failure', where thinking deeply about problems and exploring incorrect solutions can ultimately lead to greater learning overall (Kafai, De Liema, Fields, Lewandowski & Lewis, 2019).

These results indicate that the computational thinking process capabilities observed by the CTBS are more strongly related to program quality than computational thinking knowledge as measured by Bebras. While the Bebras Challenge is undoubtedly a valuable competition for students worldwide, the results from this study indicate that the ability to solve Bebras problems may not be a good indicator of the ability to solve authentic informatics problems that involve computer programming. In fact, the result challenges the premise that generalised computational thinking knowledge underpins the ability to solve authentic programming problems to any substantial extent. In this study, the capacity to apply computational thinking processes in situ was far more relevant to deriving high quality programming solutions than the ability to solve general computational thinking knowledge problems. From a pedagogical perspective, then, educators who wish to use computational thinking as a basis for improving their students' ability to solve programming problems should focus on developing students' abilities to apply computational thinking processes in practice (algorithm design, problem decomposition, pattern recognition, abstraction) rather than on computational thinking knowledge in a more detached and decontextualized sense.

5.1 Limitations of the study

In this study, students worked together in pairs as a naturalistic way to provoke social interaction and make otherwise unobservable thoughts accessible. This contributed to the authenticity of the study, since pair programming often occurs in industry and education. Moreover, pair-programming settings have been used in prior studies measuring the computational thinking and programming knowledge of novices (Denner, Werner, Campe, & Ortiz, 2014; Wu, Hu, Ruis, & Wang, 2019). However, this approach involved some inherent challenges. It was not possible to perfectly group pairs according to identical levels of computational thinking, intelligence, or programming quality. Some might argue that the results and overall conclusion might have been different if all measures had been obtained and analyzed solely on an individual basis. However, gauging individual measures of computational thinking and programming skills from a behavioral perspective also involves challenges, as it is difficult to encourage individual participants to verbalize their thinking for the entire duration of the programming process. We believe that, in terms of the validity of the results, the benefits of analyzing computational thinking arising from a more naturalistic setting outweigh those of measuring the computational thinking of individuals.

It is also worth mentioning that the CTBS and the programming quality instrument were designed specifically for the purpose of this study. That means these instruments have not yet been tested in other studies. Inter-rater reliability assessments indicated a satisfactory level of agreement, but the results based on the CTBS and the programming quality rubric scheme deserve to be interpreted with caution. Some indicators of computational thinking behavior are dependent on the environment used. For instance, the algorithmic design category of the scheme encompasses all utterances and actions with the purpose of designing an algorithmic solution to a problem. The programming task in this study was designed in Scratch, for which the only way to create algorithmic solutions was to drag and drop code chunks together. If another programming environment were used, or indeed different programming problems, or even other cohorts of participants, other indicators might be identified. This potentially limits the generalization of the results of the study.

6 Conclusion and future work

Computational thinking is promoted as the literacy of the 21st century and is already implemented in various curricula all over the world. Some refer to computational thinking even as the foundation of programming and CS (Lu & Fletcher, 2009). Thus, the goal of this study was to analyse the role of computational thinking in promoting high quality programming products. Results showed that the answer to the question of how computational thinking is related to program quality depends on whether computational thinking is seen as a set of general conceptual understandings or as a set of procedural skills in use. The results of our study found that computational thinking as general conceptual knowledge (such as that used to solve Bebras challenges) was not significantly related to program quality. On the other hand, we found that computational thinking as a set of procedural skills applied in practice was significantly related to programming quality, even when controlling for general intelligence. Thus, when discussing the role of computational thinking in developing computer programming capacity, we suggest that educators and policy makers focus on the importance of cultivating computational thinking procedural capabilities rather than more abstract, knowledge-based and context-free forms.

There are several potential avenues for research to build upon the results of this study. Visual programming environments such as Scratch are usually used to introduce computational thinking or programming concepts to people who have no knowledge of programming, as was the case in this study. In future, researchers could analyse how computational thinking is applied when experienced programmers solve a programming task in a programming language such as Java or C++. The way programmers approach problems develops over time as they gain more knowledge (Teague & Lister, 2014). It is possible that the level of computational thinking of experienced programmers differs from that of novices, which might mediate the relationship between both concepts. A range of different tasks could also be examined, for instance, to gauge differences in computational thinking prevalence, and its relationship to program quality, for tasks with more closed solutions as opposed to more open-ended ones. Future research could also attempt to measure all constructs for individuals rather than pairs, as an alternative way to examine the relationships between the constructs in question. In any case, it is intended that the frameworks and methods presented in this paper provide a strong foundation for these future analyses.

Anderson, J. R. (2015). Cognitive psychology and its implications (8th ed.). New York, NY: Worth Publishers


Angeli, & Giannakos, M. (2020). Computational thinking education: Issues and challenges. Computers in Human Behavior , 105, 106185. https://doi.org/10.1016/j.chb.2019.106185


Araujo, A. L. S. O., Andrade, W. L., Guerrero, D. D. S., & Melo, M. R. A. (2019). How many abilities can we measure in computational thinking? A study on Bebras challenge. Proceedings of the 50th ACM Technical Symposium on Computer Science Education (pp. 545–551). https://doi.org/10.1145/3287324#issue-downloads

Atmatzidou, & Demetriadis, S. (2016). Advancing students’ computational thinking skills through educational robotics: A study on age and gender relevant differences. Robotics and Autonomous Systems , 75, 661–670. https://doi.org/10.1016/j.robot.2015.10.008

Banks, S. H., & Franzen, M. D. (2010). Concurrent validity of the TONI-3. Journal of Psychoeducational Assessment , 28(1), 70–79. https://doi.org/10.1177/0734282909336935

Brennan, K., & Resnick, M. (2012). New frameworks for studying and assessing the development of computational thinking. In Proceedings of the 2012 annual meeting of the American Educational Research Association, Vancouver, Canada

Bower, M., & Hedberg, J. G. (2010). A quantitative multimodal discourse analysis of teaching and learning in a web-conferencing environment–the efficacy of student-centred learning designs. Computers & Education , 54(2), 462–478

Brown, L., Sherbeernou, R. J., & Johnson, S. K. (1997). Test of nonverbal intelligence-3 . Austin, TX: PRO-ED

Bull, G., Garofalo, J., & Nguyen, N. R. (2020). Thinking about computational thinking. Journal of Digital Learning in Teacher Education , 36(1), 6–18. https://doi.org/10.1080/21532974.2019.1694381

Cansu, F. K., & Cansu, S. K. (2019). An overview of computational thinking. International Journal of Computer Science Education in Schools , 3(1), 17–30. https://doi.org/10.21585/ijcses.v3i1.53


Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, N.J.: L. Erlbaum Associates


Cormen, T. H., Leiserson, C. E., Rivest, R. L., & Stein, C. (2014). Introduction to algorithms (3rd ed.). Cambridge, MA, London: MIT Press

Csizmadia, A., Curzon, P., Dorling, M., Humphreys, S., Ng, T., Selby, C., & Woollard, J. (2015). Computational thinking - A guide for teachers . Swindon: Computing at School. http://eprints.soton.ac.uk/id/eprint/424545

Dagienė, V., & Futschek, G. (2008). Bebras international contest on informatics and computer literacy: Criteria for good tasks. In R. T. Mittermeir & M. M. Sysło (Eds.), Informatics Education - Supporting Computational Thinking: Third International Conference on Informatics in Secondary Schools - Evolution and Perspectives, ISSEP 2008 Torun Poland, July 1-4, 2008 Proceedings (pp. 19–30). Berlin, Heidelberg: Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-540-69924-8_2

Dagienė, V., & Sentance, S. (2016). It’s computational thinking! Bebras tasks in the curriculum. In A. Brodnik & F. Tort (Eds.), Lecture Notes in Computer Science. Informatics in Schools: 9th International Conference on Informatics in Schools: Situation, Evolution, and Perspectives, Proceedings (Vol. 9973, pp. 28–39). Cham: Springer Verlag. https://doi.org/10.1007/978-3-319-46747-4_3

Dagienė, V., & Stupuriene, G. (2016). Bebras - A sustainable community building model for the concept based learning of informatics and computational thinking. Informatics in Education , 15(1), 25–44. https://doi.org/10.15388/infedu.2016.02

Denner, J., Werner, L., Campe, S., & Ortiz, E. (2014). Pair programming: Under what conditions is it advantageous for middle school students? Journal of Research on Technology in Education , 46(3), 277–296. https://doi.org/10.1080/15391523.2014.888272

Dolgopolovas, V., Jevsikova, T., Savulionienė, L., & Dagienė, V. (2015). On evaluation of computational thinking of software engineering novice students. In A. Brodnik & C. Lewin (Eds.), IFIP TC3 Working Conference “A New Culture of Learning: Computing and next Generations” . Vilnius, Lithuania: Vilnius University

Ezeamuzie, & Leung, J. S. C. (2021). Computational thinking through an empirical lens: A systematic review of literature. Journal of Educational Computing Research , Vol. 59, https://doi.org/10.1177/07356331211033158

Gadanidis, G. (2017). Five affordances of computational thinking to support elementary mathematics education. Journal of Computers in Mathematics and Science Teaching , 36(2), 143–151

Grover, S. (2011). Robotics and engineering for middle and high school students to develop computational thinking. In Annual Meeting of the American Educational Research Association, New Orleans, LA

Grover, S. (2017). Assessing algorithmic and computational thinking in K-12: Lessons from a Middle School Classroom. In P. J. Rich, & C. B. Hodges (Eds.), Emerging Research, Practice, and Policy on Computational Thinking (31 vol., pp. 269–288). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-52691-1_17


Grover, S., & Pea, R. (2013). Computational thinking in K–12: A review of the state of the field. Educational Researcher , 42(1), 38–43

Grover, S., Pea, R., & Cooper, S. (2015). Designing for deeper learning in a blended computer science course for middle school students. Computer Science Education , 25(2), 199–237. https://doi.org/10.1080/08993408.2015.1033142

Ihantola, P., Ahoniemi, T., Karavirta, V., & Seppälä, O. (2010). Review of recent systems for automatic assessment of programming assignments. Proceedings of the 10th Koli Calling International Conference on Computing Education Research , 86–93. https://doi.org/10.1145/1930464.1930480

International Society for Technology in Education [ISTE] & the Computer Science Teachers Association [CSTA] (2011). Operational definition of computational thinking for K–12 education. Retrieved from https://csta.acm.org/Curriculum/sub/CurrFiles/CompThinkingFlyer.pdf

Jin, K. H., & Charpentier, M. (2020). Automatic programming assignment assessment beyond black-box testing. Journal of Computing Sciences in Colleges , 35(8), 116–125

Kafai, Y. B., De Liema, D., Fields, D. A., Lewandowski, G., & Lewis, C. (2019). Rethinking debugging as productive failure for CS Education. In S. Heckman & J. Zhang (Eds.), Proceedings of the 50th ACM technical symposium on Computer Science Education . New York, NY: ACM

Knobelsdorf, M., & Frede, C. (2016, August). Analyzing student practices in theory of computation in light of distributed cognition theory. In Proceedings of the 2016 ACM Conference on International Computing Education Research (pp. 73–81).

Korucu, A. T., Gencturk, A. T., & Gundogdu, M. M. (2017). Examination of the computational thinking skills of students. Journal of Learning and Teaching in Digital Age , 2 (1), 11–19. Retrieved from https://eric.ed.gov/?id=ED572684

Landis, R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics , 33, 159–174

Lockwood, J., & Mooney, A. (2018). Computational thinking in secondary education: Where does it fit? A systematic literary review. International Journal of Computer Science Education in Schools , 2(1), pp. 1–20. https://doi.org/10.21585/ijcses.v2i1.26

Lu, J. J., & Fletcher, G. H. L. (2009). Thinking about computational thinking. In S. Fitzgerald (Ed.), Proceedings of the 40th ACM technical symposium on Computer science education . New York, NY: ACM

Lye, S. Y., & Koh, J. H. L. (2014). Review on teaching and learning of computational thinking through programming: What is next for K-12? Computers in Human Behavior , 41, 51–61. https://doi.org/10.1016/j.chb.2014.09.012

Martin, R. C. (2009). Clean code: A handbook of agile software craftsmanship . Pearson Prentice Hall

McNicholl, R. (2019). Computational thinking using Code.org. Hello World , 4, 36–37. https://issuu.com/raspberry314/docs/helloworld04

Moreno-León, J., & Robles, G. (2015). Dr. Scratch: a web tool to automatically evaluate scratch projects. In J. Gal-Ezer, S. Sentance, & J. Vahrenhold (Eds.), Proceedings of the Workshop in Primary and Secondary Computing Education, London, United Kingdom, November 09 - 11, 2015 (pp. 132–133). New York: ACM. https://doi.org/10.1145/2818314.2818338

Moreno-León, J., Román-González, M., Harteveld, C., & Robles, G. (2017). On the automatic assessment of computational thinking skills. In G. Mark, S. Fussell, C. Lampe, m. schraefel, J. P. Hourcade, C. Appert, & D. Wigdor (Eds.), CHI’17: Extended abstracts: proceedings of the 2017 ACM SIGCHI Conference on Human Factors in Computing Systems : May 6-11, 2017, Denver, CO, USA (pp. 2788–2795). New York, New York: The Association for Computing Machinery. https://doi.org/10.1145/3027063.3053216

Pieterse, V. (2013). Automated assessment of programming assignments. Proceedings of the 3rd Computer Science Education Research Conference , 13 , 4–5. https://doi.org/10.5555/2541917.2541921

Portelance, D. J., & Bers, M. U. (2015). Code and tell: Assessing young Children’s learning of computational thinking using peer video interviews with ScratchJr. In M. U. Bers & G. L. Revelle (Eds.), IDC ‘15: Proceedings of the 14th international conference on interaction design and children (pp. 271–274). New York: ACM

Posner, M. I., & Keele, S. W. (1968). On the genesis of abstract ideas. Journal of Experimental Psychology , 77, 353–363. https://doi.org/10.1037/h0025953

Poulakis, E., & Politis, P. (2021). Computational Thinking Assessment: Literature Review. Research on E-Learning and ICT in Education: Technological, Pedagogical and Instructional Perspectives , 111–128

Resnick, M., Silverman, B., Kafai, Y., Maloney, J., Monroy-Hernández, A., Rusk, N., & Silver, J. (2009). Scratch: Programming for all. Communications of the ACM , 52(11), 60. https://doi.org/10.1145/1592761.1592779

Román-González, M., Pérez-González, J. C., & Jiménez-Fernández, C. (2017). Which cognitive abilities underlie computational thinking?: Criterion validity of the Computational Thinking Test. Computers in Human Behavior , 72, 678–691. https://doi.org/10.1016/j.chb.2016.08.047

Schank, R. C., & Abelson, R. P. (1977). Scripts, plans, goals and understanding: An inquiry into human knowledge structures . Hillsdale, NJ: L. Erlbaum Associates. Artificial intelligence series

Schulz, K., & Hobson, S. (2015). Bebras Australia computational thinking challenge tasks and solutions 2014 . Brisbane, Australia: Digital Careers

Schulz, K., Hobson, S., & Zagami, J. (2016). Bebras Australia computational thinking challenge - tasks and solutions 2016 . Brisbane, Australia: Digital Careers

Shivhare, & Kumar, C. A. (2016). On the Cognitive process of abstraction. Procedia Computer Science , 89, 243–252. https://doi.org/10.1016/j.procs.2016.06.051

Shute, V. J., Sun, C., & Asbell-Clarke, J. (2017). Demystifying computational thinking. Educational Research Review , 22, 142–158. https://doi.org/10.1016/j.edurev.2017.09.003

Sipser, M. (2013). Introduction to the theory of computation (3rd ed.). Boston: Cengage Learning

Sternberg, R. J. (2017). Human intelligence. Encyclopaedia Britannica . Retrieved from https://www.britannica.com/topic/human-intelligence-psychology/Development-of-intelligence#ref13354

Suters, L., & Suters, H. (2020). Coding for the core: Computational thinking and middle grades mathematics. Contemporary Issues in Technology and Teacher Education (CITE Journal) , 20 (3). Retrieved from https://citejournal.org/volume-20/issue-3-20/mathematics/coding-for-the-core-computational-thinking-and-middle-grades-mathematics/

Tang, K. Y., Chou, T. L., & Tsai, C. C. (2020). A content analysis of computational thinking research: An international publication trends and research typology. Asia-Pacific Education Researcher , 29(1), 9–19. https://doi.org/10.1007/s40299-019-00442-8

Teague, D., & Lister, R. (2014). Longitudinal think aloud study of a novice programmer. In J. Whalley (Ed.), Proceedings of the Sixteenth Australasian Computing Education Conference - Volume 148 . Darlinghurst, Australia: Australian Computer Society, Inc

Thalheim, B. (2009). Abstraction. In L. Liu, & M. T. Özsu (Eds.), Springer reference. Encyclopedia of database systems , 1–3. New York, NY: Springer

Tsai, Liang, J. C., Lee, S. W. Y., & Hsu, C. Y. (2021). Structural validation for the developmental model of computational thinking. Journal of Educational Computing Research , Vol. 59, https://doi.org/10.1177/07356331211017794

Türker, P. M., & Pala, F. K. (2020). A Study on students’ computational thinking skills and self-efficacy of block-based programming. Journal on School Educational Technology , 15 (3), 18–31 (14 Seiten). Retrieved from https://imanagerpublications.com/article/16669/

Vaníček, J. (2014). Bebras informatics contest: Criteria for good tasks revised. In Y. Gülbahar & E. Karataş (Eds.), Informatics in Schools. Teaching and Learning Perspectives: 7th International Conference on Informatics in Schools: Situation, Evolution, and Perspectives, ISSEP 2014, Istanbul, Turkey, September 22-25, 2014. Proceedings (pp. 17–28). Cham: Springer International Publishing. https://doi.org/10.1007/978-3-319-09958-3_3

Weintrop, D., Beheshti, E., Horn, M., Orton, K., Jona, K., Trouille, L., & Wilensky, U. (2016). Defining computational thinking for mathematics and science classrooms. Journal of Science Education and Technology , 25(1), 127–147

Weinert, F. E. (1994). Lernen lernen und das eigene Lernen verstehen. [Learning how to learn and understanding the own learning]. In K. Reusser, & M. Reusser-Weyeneth (Eds.), Verstehen. Psychologischer Prozess und didaktische Aufgabe [Understanding. Psychological processes and didactical tasks.] (pp. 183–205). Bern: Huber

Wing, J. M. (2006). Computational thinking. Communications of the ACM , 49(3), 33–35. https://doi.org/10.1145/1118178.1118215

Wu, B., Hu, Y., Ruis, A. R., & Wang, M. (2019). Analysing computational thinking in collaborative programming: A quantitative ethnography approach. Journal of Computer Assisted Learning , 35(3), 421–434. https://doi.org/10.1111/jcal.12348

Zha, S., Jin, Y., Moore, P., & Gaston, J. (2020). Hopscotch into Coding: introducing pre-service teachers computational thinking. TechTrends , 64(1), 17–28. https://doi.org/10.1007/s11528-019-00423-0


Acknowledgements

Open Access funding enabled and organized by CAUL and its Member Institutions

Author information

Authors and affiliations.

Department of Vocational and Business Education, University of Hamburg, Hamburg, Germany

Kay-Dennis Boom & Jens Siemon

School of Education, Macquarie University, Building 29WW Room 238, Balaclava Rd North Ryde, 2109, NSW, Sydney, Australia

CLLE, University of Toulouse, CNRS, Toulouse, France

Amaël Arguel


Corresponding author

Correspondence to Matt Bower .

Ethics declarations

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Boom, KD., Bower, M., Siemon, J. et al. Relationships between computational thinking and the quality of computer programs. Educ Inf Technol 27 , 8289–8310 (2022). https://doi.org/10.1007/s10639-022-10921-z


Received : 05 September 2021

Accepted : 27 January 2022

Published : 03 March 2022

Issue Date : July 2022

DOI : https://doi.org/10.1007/s10639-022-10921-z


  • Computational thinking
  • Visual programming
  • Program quality

University of Northern Iowa Home

University of Northern Iowa to offer fully online MBA degree with updated curriculum

Woman working on laptop

CEDAR FALLS, Iowa –  Following unanimous approval by the Iowa Board of Regents, working professionals will now be able to earn their Masters of Business Administration (MBA) degree from the University of Northern Iowa fully online. The move comes with a revamped curriculum to meet the demands of modern business professionals. The top-ranked MBA program offered by the  Wilson College of Business highlights the university’s dedication to academic excellence and meeting student needs, making it easier and more flexible for professionals to advance their careers.

“I am thrilled about the opportunity to extend our top-tier education to a broader audience through our new online offering," said Leslie Wilson, dean of the Wilson College of Business. The program will begin offering courses online starting in the summer of 2024, with the entire course lineup available by the fall of 2025. "UNI has consistently been ranked as a 'Best Business School' by The Princeton Review for over a decade reflects our commitment to academic excellence, personal development, and career advancement within the Wilson MBA program”  

The MBA program features a redesigned curriculum that includes three stackable graduate certificates tailored to the needs of today's business professionals: Business Fundamentals, Managerial Analytics, and Strategic Leadership and Innovation. Each certificate requires the completion of four courses.

"UNI alumni want to earn a Wilson MBA. They value the coursework, experiences and relationships they have developed during their undergraduate program,” said Mary Connerley, associate dean of the Wilson College of Business. “I'm excited to deliver the same level of excellence and support to all professionals across Iowa and beyond. Our focus is on meeting the needs of employers who are looking for professionals with strong written and oral communication skills and critical thinking abilities."

Participants who want to earn their MBA can follow a flexible pathway to the degree by completing all three certificate programs. The program aims to enhance the skills of degree holders, responding to employers' growing demand for professionals with strong communication and critical thinking skills.

“These new stackable certificates, along with the online MBA program, are designed to offer unparalleled convenience,” said Stephanie Huffman, dean of the  College of Graduate, Research and Online Education . “This allows students to tailor their education to their career goals while studying from anywhere.” 

The Wilson MBA program consists of 10 courses, totaling 30 credits. Those in the program can attend these courses online from any location.

"Our decision to add an online MBA program is a direct response to the evolving needs of today's business professionals," said Alicia Rosburg, MBA program coordinator. "By adding an online format, we prioritize convenience and flexibility, reflecting our dedication to student success. This approach guarantees that all students can access our top-tier education from anywhere, creating opportunities to advance their education, especially for those living outside of the Cedar Valley or those balancing work and family commitments."

Prospective students interested in earning their MBA through the Wilson College of Business can find more information about the program and admissions process at  business.uni.edu/mba .

Media Contact: Adam Amdor

[email protected]




COMMENTS

  1. How critical thinking can help you learn to code

    In short, being a good problem-solver requires critical thinking. Today, we'll discuss what critical thinking is, why it's important, and how it can make you a better programmer. ... You'll learn the basics of programming and computer science through a series of lessons, quizzes, and exercises. You'll also learn more about some of the ...

  2. How to think like a programmer

    Simplest means you know the answer (or are closer to that answer). After that, simplest means this sub-problem being solved doesn't depend on others being solved. Once you solved every sub-problem, connect the dots. Connecting all your "sub-solutions" will give you the solution to the original problem. Congratulations! (A minimal code sketch of this decompose-and-connect approach appears after this list.)

  3. Is Computational Thinking Critical Thinking?

    The idea that the habits of mind used in computer programming could be applicable to other situations has existed since the 1950s (Tedre & Denning, 2016), but it was popularized in the twenty-first century as "computational thinking" (hereafter compT) by a highly influential editorial by Wing. Since that time, there has been a flood of scholarly work, educational standards, and ...

  4. Some Evidence on the Cognitive Benefits of Learning to Code

    Introduction. Computer coding—an activity that involves the creation, modification, and implementation of computer code and exposes students to computational thinking—is an integral part of today's education in science, technology, engineering, and mathematics (STEM) (Grover and Pea, 2013). As technology is advancing, coding is becoming a necessary process and much-needed skill to solve ...

  5. Enhancing Students' Computer Programming Performances, Critical

    Computer programming, Peer assessment, Critical thinking, Learning attitude, Scratch. Introduction. How to effectively conduct programming education to help students develop the concepts and skills of programming has become an important and challenging issue (Krpan, Mladenović, & Rosić, 2015; Yang et al., 2015; Brito & de Sá-

  6. Computational Thinking for Problem Solving

    Computational thinking is a problem-solving process in which the last step is expressing the solution so that it can be executed on a computer. However, before we are able to write a program to implement an algorithm, we must understand what the computer is capable of doing -- in particular, how it executes instructions and how it uses data.

  7. Computer science education and K-12 students' computational thinking: A

    Computer programming has been considered a significant factor for developing students' CT skills (Chiazzeses et al., 2018). ... Further analysis showed that the programming group developed more in algorithmic thinking and critical thinking whereas participants in the DBL group showed improvement in problem solving and creativity.

  8. Computer Programming and Logical Reasoning: Unintended Cognitive

    Recent research results having to do with explicit instruction in computer programming and cognitive skills indicate an increased emphasis upon the structure of the ... Ennis R. H. and Paulus D., Critical Thinking Readiness in Grades 1-12 (Phase I: Deductive Reasoning in Adolescence), Critical Thinking Project, Cornell University, Ithaca ...

  9. Fostering creative thinking skills through computer programming

    Teaching programming skills has attracted a great deal of attention for more than a decade. One potential reason behind this is that the explicit teaching of computer programming can improve higher-order thinking skills, such as creativity. Moreover, whether or not creative programming learning activities, such as the use of creative-thinking-boosting-techniques integrated into programming ...

  10. PDF Fostering creative thinking skills through computer programming

    The results suggest that creative thinking skills improve when the integration of creativity-boosting activities into computer programming teaching follows the explicit teaching of programming. The results highlighted the need to integrate creativity-boosting activities into the teaching of computer programming.

  11. Computer Science and Critical Thinking

    Learning the basics of a programming language, or even familiarizing yourself with the type of logic necessary to code, can be beneficial to you in your daily life. Take a few hours and flesh out your skills; you'll be glad you did. Learning computer science can help you think more critically and, with more novel inspiration, ultimately help ...

  12. Instructional interventions for computational thinking: Examining the

    Critical Thinking "It is fun to try to solve the complex problems." ... Can computer programming improve problem-solving ability? SIGCSE Bull, 22 (1990), pp. 30-33, 10.1145/126445.126451. View in Scopus Google Scholar [48] M.C. Tsai, & C.W. Tsai (2018). Applying online externally-facilitated regulated learning and computational thinking to ...

  13. From Critique to Computational Thinking: A Peer-Assessment-Supported

    Scholars believe that computational thinking is one of the essential competencies of the 21st century and computer programming courses have been recognized as a ... Liang Z. Y., Wang H. Y. (2017). Enhancing students' computer programming performances, critical thinking awareness and attitudes towards programming: an online peer-assessment ...

  14. Improving Critical Thinking through Coding

    Students with computer programming experiences are said to typically score higher on various cognitive ability tests than students who do not have programming experiences. Vishal Raina, founder and senior instructor at YoungWonks, sums it up well by emphasising how the tech field thrives on critical thinking.

  15. How to Think like a Programmer

    Then write the code to solve that small problem. Slowly but surely, introduce complexity to solve the larger problem you were presented with at the beginning. 5. Practice, don't memorize. Memorizing code is tough, and you don't need to go down that road to think like a programmer. Instead, focus on the fundamentals.

  16. Learning to code or coding to learn? A systematic review

    The resurgence of computer programming in the school curriculum brings a promise of preparing students for the future that goes beyond just learning how to code. This study reviewed research to analyse educational outcomes for children learning to code at school. ... Critical thinking was grouped under higher order thinking skills in the ...

  17. Build Essential Computational Thinking Skills

    In summary, here are 10 of our most popular computational thinking courses. Computational Thinking for Problem Solving: University of Pennsylvania. Problem Solving Using Computational Thinking: University of Michigan. Computational Thinking with Beginning C Programming: University of Colorado System. Introduction to Mathematical Thinking ...

  18. Fostering creative thinking skills through computer programming

    The findings indicated significant improvements in both the explicit teaching and creative programming periods. The results suggest that creative thinking skills improve when the integration of creativity-boosting activities into computer programming teaching follows the explicit teaching of programming.

  19. 6 ways coding encourages logical thinking

    Critical thinking involves approaching a problem or situation analytically and breaking it into separate components for more efficient problem-solving. Critical thinking also involves being able to adequately express yourself and being mentally flexible. Computer programming does an excellent job encouraging kids to think of creative solutions ...

  20. Computational thinking development through creative programming in

    They are also important in CT because knowing about computer programming concepts and processes can help develop CT strategies (Brennan & Resnick, ... problem solving, and critical thinking. First, we discuss the opportunity of learning the object-oriented programming (OOP) paradigm from the early steps of CT learning activities. Second, we ...

  21. Review on teaching and learning of computational thinking through

    Yet, in the current literature, there is a dearth of papers that explore computational thinking through programming in K-12 contexts (Grover & Pea, 2013) as these programming studies are more often examined for tertiary students undertaking computer science courses (e.g., Katai and Toth, 2010, Moreno, 2012). Therefore, in this paper, we attempt ...

  22. Relationships between computational thinking and the quality of

    2.1 Defining computational thinking and its subcomponents. Computational thinking is generally seen as an attitude and skill for solving problems, designing complex systems, and understanding human thoughts and behaviors, based on concepts fundamental to computer science (Lye & Koh, 2014). Recent reviews of computational thinking definitions and components by Shute, Sun and Asbell-Clarke and ...

  23. [PDF] Computational thinking is critical thinking: Connecting to

    This paper compares computational and critical modes of thinking, identifying concepts and terminology that support cross-disciplinary discourse, inform faculty and curriculum development efforts, and interconnect learning outcomes at the course, program and university level, thus helping programs better articulate contributions to institutional goals. Computational thinking complements ...

  24. PDF Critical Thinking

    COSC 1336 Programming Fundamentals; COSC 1337 Object Oriented Paradigm; COSC 4325 Data Comm. & Comp. Networks; COSC 3385 Database Design. Critical Thinking: analyze and make strategic decisions using business data; assist organizations in business decision making using appropriate analyses on organizational data.

  25. University of Northern Iowa to offer fully online MBA degree with

    The program aims to enhance the skills of degree holders, responding to employers' growing demand for professionals with strong communication and critical thinking skills. "These new stackable certificates, along with the online MBA program, are designed to offer unparalleled convenience," said Stephanie Huffman, dean of the College of ...
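
Entry 2 above sketches a decompose-and-connect approach to problem solving: identify the simplest sub-problem, solve it on its own, then combine the sub-solutions into a solution to the original problem. The short Python sketch below illustrates that idea; the task (listing scores above the class average) and all function names are illustrative assumptions, not taken from any of the articles listed here.

# A minimal sketch of "decompose, solve the simplest sub-problem first,
# then connect the sub-solutions" (see entry 2 above). The task and the
# names are illustrative assumptions, not drawn from any cited source.

def mean(numbers):
    # Sub-problem 1 (simplest, no dependencies): average a list of numbers.
    return sum(numbers) / len(numbers)

def above_threshold(numbers, threshold):
    # Sub-problem 2 (independent of sub-problem 1): keep values above a threshold.
    return [n for n in numbers if n > threshold]

def scores_above_average(scores):
    # Original problem: which scores beat the class average?
    # "Connect the dots": compose the two sub-solutions.
    return above_threshold(scores, mean(scores))

if __name__ == "__main__":
    print(scores_above_average([72, 85, 91, 60, 78]))  # prints [85, 91, 78]

Each helper solves one sub-problem in isolation, and the final function only wires them together, which is the "connect the dots" step the excerpt describes.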